Artificial intelligence, ChatGPT and the workplace: what it means for businesses

Legal Seminar

The launch of ChatGPT by OpenAI in November 2022 has firmly put generative artificial intelligence (AI) on the map. Google CEO Sundar Pichai has recently commented that AI is more profound than fire or electricity or anything else that humans have done in the past. Moreover, ChatGPT and tools like it are readily available: all you need is a computer with internet access and an account. ChatGPT is available 24/7 and is open to everyone who accepts its terms.

What is ChatGPT?

ChatGPT is an application that can answer questions and generate human-like prose and responses. Applications such as ChatGPT are "trained" on vast amounts of text and data drawn from the internet, as well as from books and articles. ChatGPT is purportedly sophisticated enough to admit its mistakes, challenge incorrect premises and reject inappropriate requests.

The potential uses for AI in a business context are vast. For example, generative AI can assist with tasks ranging from reviewing CVs and shortlisting potential candidates, managing contract suites and performing risk management tasks, to drafting content. ChatGPT could have written this article. That said, there are challenges and concerns with AI, and this article identifies some initial considerations if your business is contemplating the day-to-day use of tools like ChatGPT.

Risk – is it accurate?

How a business engages with generative AI tools like ChatGPT, particularly at this early stage, will depend on many factors, including its tolerance for risk, the industry and sector it operates in, and the type of AI system and its intended use. AI undoubtedly offers huge opportunities, but some businesses may be reluctant to invest in or use AI-based systems at this relatively early stage. Aside from the points set out below, the outputs of AI systems may not always be accurate. As an example, a New York lawyer is facing a court hearing after his firm used ChatGPT for legal research and its court filing referred to a legal case that did not exist.

Data protection and AI: an unhappy couple?

AI systems are trained on large amounts of data, some of which is likely to contain personal data. Any personal data processed by an AI system must be processed in accordance with the requirements of data protection laws, including the UK GDPR and EU GDPR. Each of these regimes imposes strict transparency requirements and requires organisations to establish a lawful basis for processing any personal data. These requirements apply both when "training" the underlying system on personal data and when subsequently processing the personal data of users of the system.

Establishing a lawful basis for each processing activity and meeting the transparency requirements may prove challenging for both creators and adopters of AI tools. Further, data protection regulators are signalling a hard-line approach to compliance and enforcement.

Automated processing and discrimination

Data protection laws impose specific requirements where automated processing is involved, particularly where personal data is used to evaluate an individual without human involvement. This can, in turn, amount to profiling. Automated processing of this kind has been identified as "high-risk" under data protection laws. Any business using a system that automatically evaluates individuals, for example job candidates or applicants, should be mindful of the additional requirements that come with high-risk processing, such as Data Protection Impact Assessments (DPIAs).

There may also be the potential for unfair discrimination claims and other employment-related issues if, for example, underlying systems have been trained on particular data sets which could unfairly discriminate against certain groups or ethnicities.

Intellectual property

Intellectual property considerations are also relevant, given that the inputs and outputs of AI systems will both contain and create intellectual property rights. Businesses will need to be comfortable that any data inputted may be used to create further works, and there may also be concerns that outputs infringe a third party's intellectual property rights.

Confidential information

Businesses looking to input data will also need to be aware of any confidentiality considerations. For example, is a duty owed to a third party in respect of the data being uploaded? Will uploading that data breach that duty of confidentiality, or otherwise compromise the confidentiality of the information?

Legal AI regulation

AI is evolving at an unprecedented rate, with new tools, solutions and updates regularly being released. With that speed come new laws and regulations, as governments and regulators try to keep pace. The European Commission has proposed the AI Act, under which certain systems that present an "unacceptable risk" would be banned. Under this act, high-risk applications, for example CV-scanning tools that rank job applicants, would be subject to strict legal requirements. In the UK, the government has recently published its policy paper, AI regulation: a pro-innovation approach, and is seeking views. The paper notes that the UK needs to act quickly to continue to lead the international conversation on AI governance and demonstrate the value of a "pragmatic, proportionate regulatory approach and support a pro-innovation framework".

Guidance and terms & conditions

Businesses using, or permitting the use of, AI systems should bear the above in mind. A good starting point is a clear policy on the internal use of AI, and ensuring that the policy dovetails with existing policies on data protection and information systems.

The terms and conditions of AI systems should be reviewed, both to ensure compliance with data protection laws and to understand who owns any output of the system.

We anticipate an increase in due diligence as part of any business or share purchase, with specific warranties dealing with AI systems. Good internal record keeping is likely to greatly assist in responding to these enquiries and in showing that proper due diligence was undertaken on a particular AI system prior to its integration and use.

The pace of change, and the potential for tools such as ChatGPT to impact the workplace, knows no boundaries, and we are likely to witness significant developments in this area.

Contact our experts for further advice