Generative AI and the future of the legal profession


The legal world is no stranger to technological disruption. In the last few years, we've seen wave after wave of it. Yet nothing seems to have caused as much excitement as the rise of generative AI.

For some, adoption might seem like a no-brainer. For others, the risks far outweigh the rewards.

LexisNexis surveyed over 1,000 lawyers and legal professionals throughout the UK to better understand overall awareness of generative AI, how the legal profession is currently using these tools, and how likely they are to adopt them in the future.

Weighing the risks with the rewards

Predictions that artificial intelligence will cause widespread disruption in the legal sector have been on the cards for years.

Yet no one could have predicted just how quickly OpenAI's chatbot, ChatGPT, would take off. In February 2023, just three months after launching, the platform reached 100 million users. For comparison, it took TikTok approximately nine months to achieve the same number of users, and Instagram more than two years.

As a time-poor profession, the legal community will undoubtedly be eager to embrace generative AI. It has the potential to fast-track legal research, summarisation and drafting, freeing up lawyers' time to focus on higher-value services for their clients or organisations. And that's just the start.

But, many in the profession are understandably concerned about the risks that come from the use of AI technology. They are questioning the reliability of the data and legal content generated by the current range of free AI tools.

Today, generative AI tools are still in their infancy, but that won’t remain the case for long. With the right engine sourcing the right content, the nature of legal work is about to be transformed.

Jeff Pfeifer
Chief Product Officer – LexisNexis

The generative AI revolution

Awareness of generative AI in the legal community is high. Nearly nine out of 10 (87%) respondents to the LexisNexis survey were aware of its existence. This rose to 93% for respondents at large law firms, and to 95% for those occupying in-house roles. Check out the introduction to artificial intelligence and machine learning on Lexis+ for more info.

This aligns exactly with findings from LexisNexis' March 2023 survey of the US market, which also found that 87% of respondents were aware of generative AI.

These findings will come as no surprise to most – generative AI has made legal news headlines all year, with many casting polarising predictions on the impact this rapidly accelerating technology will or won't have on the legal sector.

What might surprise some is that the vast majority (95%) of respondents to our UK survey believe generative AI will have a noticeable impact on the law, with 38% believing it will be significant and 11% transformative.

Isabel Parker, a partner in Deloitte Legal's Transform to Operate service, believes generative AI has the ability to disrupt the entire foundation of the legal market.

"This could lead to some very positive outcomes: the democratisation of legal advice, universal access to justice, market practice replacing two party negotiations, AI-based case resolution, and productivity transformation for lawyers."

Ben Allgrove, partner and chief innovation officer at Baker McKenzie, says generative AI is different from some of the over-hyped tech developments we have seen in the past.

"It will change how we practice law. One immediate area of focus is on how we might use it to improve the productivity of our people, both our lawyers and our business professionals. While there are, of course, quality and risk issues that need to be solved, we see opportunities across our business to do that."

Two-thirds (67%) of survey participants said they feel mixed about the impact of generative AI on the practice of law, admitting that they can see both the positives and the drawbacks. This was particularly true for respondents from large law firms, with 76% holding these mixed views.

Parker, like many in the legal field, is well aware of the risks that accompany generative AI technology.

"Hallucinations are, of course, a real issue for a profession that prides itself on the accuracy of its outputs. There are ways to mitigate the risk, such as through quality control by including a human in the loop."

Read: Artificial intelligence—UK regulation and the National AI Strategy

When generative AI tools don't have access to the relevant data, they have a tendency to make up the answers, or hallucinate, says Alison Rees-Blanchard, head of TMT legal guidance at LexisNexis.

"This means any generated output must be checked thoroughly. However, where those tools are trained on a closed and trusted data source, the user can have greater confidence in the generated output and hallucinations will be easier to identify, as verification of the output is made easier."

Kay Firth-Butterfield, executive director of the Centre for Trustworthy Technology, a World Economic Forum Centre for the Fourth Industrial Revolution, says these systems are only as good as the data in them.

"Generative AI tools can give biased and other non-ethical advice and should be used, especially at this early stage, very carefully indeed."

"All the concerns we have had in the past about whether we can design, develop and use AI responsibly are extended by generative systems where we simply cannot interrogate how they have reached a particular answer."

In May 2023, LexisNexis announced the commercial trial of Lexis+ AI, which searches, summarises, and drafts using LexisNexis content. The tool was built with the RELX responsible AI principles in mind, says LexisNexis chief product officer, Jeff Pfeifer.

"Everything we do considers the real world impact of the solution, it proactively prevents the creation or reinforcement of bias, we ensure that we can always explain how and why our systems work in the way they do, human oversight is built in and that we respect and champion privacy and data governance."

See guidance notes on data protection for artificial intelligence on Lexis+.

When asked about ethical concerns regarding the use of generative AI in the practice of law, nine out of 10 respondents (90%) cited concerns. A quarter (26%) had significant concerns and 3% had fundamental concerns.

This is roughly in line with our US survey findings, with 87% of lawyers admitting concerns about the ethical implications of generative AI.

Companies need to start getting policies in place regarding generative AI tools, says Toby Bond, intellectual property partner at Bird & Bird.

"The risk is that generative AI tools are used for a work purpose without a proper assessment of the potential legal or operational issues which may arise."

Training materials—artificial intelligence (AI) in the workplace

One option is to block access to these tools entirely, he says, which risks failing to capitalise on their potential, and falling behind the competitors who do.

To manage both the risk of misuse and underuse of generative AI, Bond recommends formulating an initial policy position and a pathway to expanding use in the future.

While this technology holds huge promise, the legal community doesn't take risk lightly. For the sector to truly embrace generative AI, it would need access to a platform with explainable decision-making, operating from a closed and trusted data source, that allows for easier identification of hallucinations and gives confidence in the provenance (rights to use) and quality of the training data.

Find a legal framework and regulatory guidance around explainability on Lexis+.

LexisNexis has been incorporating AI and large language models into our solutions for years, with GPT and ChatGPT already integrated into Lexis+. Sign up to the Lexis+ AI Insider Program to get updates, early feature access and breaking news on the latest AI developments.


Putting plans into practice

To remain competitive and meet client expectations, the general consensus is that there's no getting around embracing generative AI.

Seven in 10 (70%) in-house counsel respondents agreed or strongly agreed that law firms should be using cutting-edge technology, including generative AI tools.

This sentiment was shared by the majority of respondents from law firms of all sizes and the Bar – 55% agreed or strongly agreed. This rose to 73% when looking at respondents from large law firms.

When it comes to implementation, just under half (49%) of in-house counsel expect their law firms to be using generative AI within the next 12 months. Of that group, one in 10 (11%) expect their firms to be using it already.

Generative AI tools will increasingly form part of both the in-house and private practice toolkit, says Allgrove.

"Clients want their legal services needs met in an efficient, responsive and value-driven way. They do not want "AI powered solutions"; they want the right legal services to meet their needs."

Natalie Salunke, general counsel at Zilch, says she and her team are working with their tech partners to incorporate ChatGPT into their internal processes and customer-facing products.

"We're in a very risk averse industry. Many lawyers have been concerned that using ChatGPT and the like will result in all their data becoming public, or that they won't have ownership rights to the output."

See practice notes for artificial intelligence and intellectual property on Lexis+.

But Salunke believes lawyers owe it to themselves to move past these fears.

"We're already going to see it more in our daily lives. All this change is really scary, but that's not a good enough reason not to embrace it and learn how to incorporate it into making our lives easier."

The firms that fail to adopt generative AI tools will find themselves priced out of certain types of repeatable work, highlights Halliwell from Pinsent Masons.

"Generative AI is going to raise the standard for how law firms add value. Firms without it will struggle to provide the same level of data-driven insight and depth of analysis that clients will come to expect."

While most in-house counsel are in favour of their law firms using generative AI, more than four out of five (82%) said they expect to be made aware when their firms are using it.

Three-quarters of respondents from law firms were on the same page as their in-house clients, with 75% saying they believe their clients will expect to be made aware of generative AI tools in action. This was particularly true at large law firms, where 84% agreed or strongly agreed with the statement.

Salunke says she would only expect to be informed when generative AI was being used if it changed the way personal data and confidential information was being processed.

"You don't buy a car and go "oooohh, I wonder what technology is in there?" You just want to make sure that it works, that you're safe and it's going to get you from A to B."

Nearly half of respondents (42%) from in-house roles believe their relationship with external counsel will change as a direct result of generative AI. Lawyers from large law firms are equally likely to agree.

In-house legal departments should expect their external counsel to be leveraging technology of all kinds for client benefit, including generative AI, says Parker from Deloitte Legal.

"We believe that corporate legal departments should be challenging their service providers on their use of AI and on the benefits that they will receive as a result."

Cooke holds a similar opinion. "'Everything on demand' is the client expectation today," he says. "AI facilitates that standard; firms who continue to try to meet it only with humans will be too slow."

Cooke also believes clients will use law firms less for generic know-how queries, as these will be served by clients’ own models or by cost-effective subscription services.

However, according to Halliwell, generative AI has the ability to enhance client relationships rather than hinder them.

"Firms need to identify the ways in which they can use generative AI to do new things, such as better reporting and analysis, rather than simply introducing risks by attempting to automate tasks which aren't suitable."

Yet Halliwell did warn other law firms looking to introduce new generative AI-led tools to the market to be very careful about the guardrails on their use, and the quality and sources of underlying data.

"If they can’t authenticate those sources, they’re giving themselves a serious “black box” problem."

Final thoughts

It is clear the UK legal sector is excited by the many possibilities generative AI brings to the table.

But many are still cautious about the risks that come alongside this increasingly popular technology – and rightly so.

Generative AI has the potential to save businesses a huge amount of time and money; managed poorly, it also has the potential to cost businesses wasted time and investment.

To carry out the simple and the complex use cases discussed in this report, the legal community needs generative AI tools that are safe and secure, with trusted data and identifiable sources.

While this may all seem speculative to some, the availability of such platforms is just around the corner.

Survey methodology

The survey was conducted across 1,175 lawyers and legal support workers in the United Kingdom from 24 May to 6 June 2023. Surveys were conducted in English and respondents were prompted for feedback via Pollfish/Forsta.