Creating a company policy on generative AI

We explore the findings from a recent LexisNexis report and discuss why companies should consider creating a policy to make the most out of generative AI.

The recent LexisNexis report, Generative AI and the future of the legal profession, sought to gauge the general awareness of generative artificial intelligence (AI) in the legal sector. The report showed, among many other things, a degree of optimism towards AI. The potential for AI is indeed huge. The tech could drastically transform the legal sector, the wider economy, and the entire planet.

But ensuring a positive impact depends on minimising risks. So lawyers, law firms, and companies should aim to practise the responsible use of AI. And that will depend largely on two forms of regulation: government regulation and self-regulation. This article focusses on the latter, arguing that companies should create policies to promote the responsible (and effective) use of generative AI.

The inevitable rise of generative AI

Generative AI platforms (ChatGPT, DALL·E, Jasper, Soundraw, etc) have become a huge talking point in recent weeks and months. Generative AI systems depend on huge data sets, complex algorithms, and machine learning to produce responses to prompts. The benefits of generative AI are substantial: increased productivity, quicker decision-making and problem-solving, reduction of human errors, automation of tedious tasks, increased employee morale, minimised costs, and so on.

It is perhaps no surprise that we are witnessing a huge increase in AI usage. And that increase looks certain to continue across the economy and particularly in the legal sector. According to the LexisNexis report on AI, for example, nearly half (49%) of in-house counsel expect law firms to use generative AI in the next 12 months – an unthinkable statistic even six months ago.

Ben Allgrove, partner and chief innovation officer at Baker McKenzie, suggests some of the reasons for the inevitable rise of generative AI: ‘Clients want their legal services needs met in an efficient, responsive and value-driven way. They do not want “AI powered solutions”, they want the right legal services to meet their needs.’ Simply put, clients need quick solutions – AI accommodates that need.

The firms that fail to adopt generative AI tools will find themselves priced out of certain types of work, highlights David Halliwell, partner at alternative legal services business, Vario: ‘Generative AI is going to raise the standard for how law firms add value. Firms without it will struggle to provide the same level of data-driven insight and depth of analysis that clients will come to expect.’

The failure to use AI puts companies at a competitive disadvantage. AI will become not simply desirable, but necessary. So the rise, driven by competition, seems inevitable. But, while companies should rightly take advantage of the benefits, they’ll also need to consider the best ways to navigate some potential downsides. That’s why a generative AI policy has become increasingly necessary.

The necessity of a generative AI policy

Companies must ensure they're practising the responsible use of AI long into the future. Toby Bond, intellectual property partner at Bird & Bird, says: ‘The risk is that generative AI tools are used for a work purpose without a proper assessment of the potential legal or operational issues which may arise.’ Companies may use generative AI with no cognisance of the wider ethical issues at play.

One option is to block AI tools altogether, Bond says. But that means companies will fail to capitalise on the profound benefits of AI. As mentioned, companies blocking AI – as with any other ground-breaking tech – will fall behind. It’s no surprise, for example, that 70% of in-house counsel agreed or strongly agreed that firms should use cutting-edge technology, including generative AI tools.

A better option, Bond suggests, is formulating an AI policy that mitigates the risks and promotes the many benefits – one that allows companies to prevent harm and reap rewards. And that policy need not be in-depth, long, or even particularly thorough – at least not at first. Bond, for example, recommends creating an initial policy position and a pathway to expanding use in the future.

How to actually create an AI policy

The look and feel of the AI policy will depend on the shape and size of the company. Smaller companies may simply need to note some core principles, perhaps emphasising the need to maintain mindfulness, privacy, and responsibility. They may also want to ensure that they use platforms that consider real-world impact, take steps to prevent bias, and practice accountability and transparency.

There are some simple steps to take when formulating the AI policy. Start by establishing a working group of the various people involved in the use of AI, discuss the potential impacts, and ideate about how the policy might initially look. Narrow down the early ideas and cement some policy objectives: these may be overarching or specific, depending on the needs of the organisation.

Explain why you decided on such policies, with direct reference to specific risks, and establish levels of accountability, ensuring roles and responsibilities are assigned. Then ensure all of that is written, edited, approved, and shared across the entire organisation. Welcome any questions that may arise.

And, finally, remember to regularly revise your generative AI policy. AI moves at such a quick pace, constantly developing, that you'll need your policy to reflect those developments. AI policies may become obsolete in a matter of weeks, so revision and rewriting are absolutely essential.