June’s session for senior in-house counsel – hosted by LexisNexis in collaboration with Radius Law and Flex Legal – explored generative artificial intelligence for in-house teams.
Alison Rees-Blanchard, TMT Practice Support Lawyer at LexisNexis UK, took us on a whistlestop tour of generative AI, the key legal issues, and what in-house lawyers should be considering now.
This was followed by a panel discussion with James Harper, Head of Legal, Global Nexis Solutions at LexisNexis and Harry Borovick, General Counsel of Luminance, chaired by Shanthini Satyendra, Managing Legal Counsel, Digital Transformation and Technology at Santander.
Alison began her tour of prominent AI issues by explaining some key terms in this area.
AI – essentially the ability of a machine to exhibit intelligent behaviour, e.g. if a machine can engage in conversation without being detected as a machine, it has demonstrated human-like intelligence and would fall within the definition of AI.
Machine learning – a subset and form of AI which learns and operates from its input data, rather than as a result of explicit programming.
Generative AI (GAI) – the umbrella term for any AI capable of generating or creating new and original content. This could include the generation of text, music, code etc. Put simply, it is a category of AI algorithms capable of generating net new outputs based on their training data.
Large language model (LLM) generative AI – types of generative AI that create language or text output. Examples of this type of generative AI include chatbots such as ChatGPT and translators.
For more on AI and key terminology, see these Practice Notes on Lexis+ Practical Guidance: Artificial intelligence and machine learning—an introduction to the technology and Artificial intelligence—overview.
Large language models, put simply, are AI models that have been trained on very large volumes of data using a very large number of parameters. They predict the next token or word in a sentence based on the surrounding context within that sentence. They are trained by removing text from an input and requiring the model to predict the missing text based on the surrounding context it has seen in its training data, as the sketch below illustrates.
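To make next-token prediction concrete, here is a minimal, purely illustrative Python sketch: a toy bigram model that predicts the next word from frequency counts in a tiny "training text". Everything in it (the sample text, the predict_next helper) is invented for illustration; real LLMs replace these simple counts with a neural network conditioned on far longer context and billions of parameters, but the core task of predicting the next token is the same.

```python
# Toy illustration of next-token prediction: a bigram model that, given the
# previous word, predicts the most likely next word based on counts gathered
# from its "training data". Not how any production LLM is implemented.
from collections import Counter, defaultdict

training_text = "the model predicts the next word the model learns from data"

# Count which word follows which in the training data
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))    # 'model' (seen twice after 'the', vs 'next' once)
print(predict_next("model"))  # 'predicts' (ties broken by insertion order)
```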
ChatGPT is a publicly accessible large language model natural language processing tool in the form of an AI chatbot that generates conversational text.
For more on ChatGPT, see News Analyses: What is ChatGPT?, ChatGPT—is it legal? and ChatGPT—user beware.
Why all the fuss?
Alison then went on to outline the key issues and questions to consider regarding generative AI, its development and its use, highlighting some risks novel to generative AI in particular.
Personal data – do developers have sufficient rights to include any personal data in the training data for the purpose of training the generative AI model i.e. does the lawful basis on which the data was originally collected and processed still apply for the use of the data as training data? This is one of the reasons why ChatGPT was suspended in Italy recently.
For more information on the Italian ChatGPT ban, see News: Italian DPA bans ChatGPT and opens investigation after data breach, OpenAI’s ChatGPT faces potential finding of major GDPR breach by Italian privacy watchdog and News Analysis: ChatGPT—is it legal?
For more on AI and data protection, see Practice Note on Lexis+ Practical Guidance: Artificial intelligence—data protection.
Appropriate use – is it appropriate to use AI in the manner in which it is being deployed? For example, if AI is making a decision about a person which will have a significant impact on them, is that an appropriate use?
Bias – are there biases in the training data which could result in discriminatory outputs?
Explanation of decisions – can a decision be explained, and can we understand why an AI model came to a certain decision?
For more on AI and explainability, see Practice Note: Artificial intelligence—explainability.
Transparency – do we have sufficient transparency over training?
Intellectual Property Rights – where proprietary rights exist in training data, do developers have the right to use that data for training purposes? Does the original owner have any rights in the output as a derivative work? In the UK, computer-generated works are capable of copyright protection, which is afforded to the person by whom the arrangements necessary for the creation of the work are undertaken. For works created by generative AI, is this person the model’s developer or the person who came up with the instruction or input prompt?
For more on AI and intellectual property, see Practice Notes on Lexis+ Practical Guidance: Artificial intelligence—intellectual property and News Analyses: Generative AI—is its output protectable by intellectual property rights?, Generative AI and intellectual property rights—the UK government's position and How generative AI tools impact IP rights of users.
Confidential information – companies are exercising caution with regard to inputting confidential information or trade secrets as training data or in an instructional prompt, as doing so would likely destroy its confidential nature and could put it in the public domain.
Regulation – regulators are considering how to implement guardrails against risks presented by generative AI.
For more on AI and regulation in the UK, see Practice Note on Lexis+ Practical Guidance: Artificial Intelligence—UK regulation and the National AI Strategy and News Analysis: AI regulation in the UK—government White Paper published.
For the EU position, see Practice Note on Lexis+ Practical Guidance: The EU AI Act and News Analyses: Proposed AI Convention—a brief overview, Key amendments by the EU Parliament to the EU AI Act.
For the US position, see News Analysis: NIST Issues Artificial Intelligence Risk Management Framework (AI RMF 1.0).
For more on key legal issues regarding AI, see Practice Note: Artificial intelligence in the EU—the key legal issues.
Alison highlighted that in-house lawyers can start putting responsible AI principles in place now, focusing on the key issues to consider around each of those principles.
Moving on to the panel discussion, Shanthini Satyendra, Managing Legal Counsel, Digital Transformation and Technology at Santander, first highlighted three key functional features of AI that differentiate it from other technology in-house lawyers may be used to dealing with, before posing questions to James Harper, Head of Legal, Global Nexis Solutions at LexisNexis, and Harry Borovick, General Counsel of Luminance.
Autonomous nature – the autonomous nature of AI is consistently highlighted in relevant legislation: the fact that it can act without being explicitly programmed can lead to issues such as a lack of oversight, distinguishing it from technology that is merely automated.
Black box nature – AI learns patterns from data without the need for explicit programming to create an output. As Alison highlighted, this can create issues around transparency and how a certain output was reached.
Scale effect – AI can produce large amounts of work which would otherwise take much longer to produce. This potential for scale, however, can raise issues of its own.
Shanthini also advised against making discussion of this technology too technical: keeping the issues surrounding AI accessible is key, so that people from the top down understand what questions need to be asked.
James emphasised the importance of context, noting that the question should be considered from two angles: whether you are developing a product that includes generative AI, and whether you are using a generative AI tool in day-to-day work.
Developing a product that uses generative artificial intelligence – product teams should apply a three-step test when developing a product that includes an element of generative AI.
Using generative AI tools in day-to-day work – if using generative AI in your day-to-day work, proceed with caution, particularly in the role of a lawyer. James told attendees about a recent case in the US in which a lawyer used ChatGPT for research; the tool produced hallucinations, resulting in the lawyer citing false precedents and leading to sanctions by the US courts.
Harry broadly agreed, acknowledging that relying on unverified information is irresponsible legal practice, but highlighted the tendency of lawyers to overcomplicate the use of technology. He emphasised that lawyers must not be negligent when researching or carrying out tasks, whether or not AI is used to do so. Responsible use of AI is key.
For more on the use of generative AI, see Precedent on Lexis+: Policy—use of generative artificial intelligence and News Analyses: Investing in generative AI, Understanding and managing the risks in artificial intelligence (AI) technology projects and Cyber attacks using ChatGPT could test data privacy compliance, researchers say.
Harry also emphasised the need to understand why product-specific terms are important when purchasing an AI tool. When purchasing an AI product with potential personal data, intellectual property and confidentiality implications, clauses should be specific to those risks, to what you expect from the tool, and to your business’s needs. Harry advised that, as a minimum, a side letter or addendum to the purchasing agreement should be used to address these points. Data protection impact assessments are a good starting point when considering these issues.
For more on AI and contractual considerations, see Checklist on Lexis+: Contractual considerations for the procurement of artificial intelligence—checklist.
James’ feelings on AI depended on its use. With regard to lawyers, the role will evolve over time: those who harness AI and use it to their advantage will survive, while those who refuse to evolve with the technology will fall by the wayside. The one thing James thought AI wouldn’t deliver, at least for a considerable period of time, is an EQ-led decision and the ability to differentiate good from bad.
Harry agreed that EQ is not a current strength of AI tools, but believed this skill would develop more quickly than anticipated and that the pace of change is underestimated. He suggested that such EQ amounts to a qualitative review of the input data over time, and that if an AI tool appears to have sufficient EQ and the difference is imperceptible, then that EQ effectively exists, regardless of whether it is genuinely human.
The guest speakers then discussed the terms and conditions of ChatGPT. Put simply, if you are using the publicly available tool, anything you input is essentially theirs. If you are using the business-facing API, OpenAI assures users that it will not use their data or inputs to improve its models, although it is important to be aware that how this works in practice is unknown, as it is not disclosed. As such, many companies have implemented a blanket ban on ChatGPT use; however, Harry considered that this could change if a secure EU-based version were created.
James highlighted challenges surrounding liability and the difficulty of proving that your content has been input into the tool illegitimately. Once such data has been input, it is extremely challenging to remove and disassociate it. He also highlighted a case in which OpenAI is being sued in the US after ChatGPT hallucinated output accusing a man of fraud and embezzlement. Based on OpenAI’s terms and US libel law, however, James thought the claim faced severe challenges.
For more content on artificial intelligence, visit our AI subtopic on Lexis+ Practical Guidance.