
With legal leaders under pressure to accelerate AI adoption, the challenge they face now isn't just choosing which AI tools to use; it is building an operating model that lets organisations scale their AI initiatives without creating a tangle of fragmented systems.
In many cases, the rush to embrace AI has been disjointed: inconsistent or absent policies have left employees using a mix of professional AI systems and public tools with little or no supervision or structure.
“Within any organisation, individuals and teams will independently source their own tools and methods about bringing AI in, and that delivers efficiency gains for those individuals, but it doesn’t really create the transformative value that we think AI has the potential to create,” says Alex Fawcett, VP product for CoCounsel at Thomson Reuters.
This fragmentation also creates risk if employees are using public tools such as ChatGPT for legal matters instead of legal-specific platforms that have proper safeguards in place, Fawcett says.
While early user adoption may in some cases have delivered isolated time savings for individual lawyers, firms and in-house legal teams are now tightening their AI planning, and some legal leaders are redefining AI success as systems-level reliability rather than simply a tool that helps people complete their work faster.
“You’ll only go so far with time saving,” says Alexandra Graydon, associate general counsel at Thomson Reuters. “What you actually want is something that is a more finished, polished output, not just that it’s delivered quickly. That means you’re getting something that’s nearly ready to be delivered, which is a real step change in efficiency – it’s not just about faster drafting.”
To drive this shift from viewing AI as a time-saving tool to one that can transform processes end to end, organisations need to have an AI-ready legal operating model. This means putting in place a clear strategy that outlines how and when lawyers should be using AI.
Such an approach remains relatively uncommon. As many as 43% of organisations are adopting AI without a strategy, according to Thomson Reuters data, and only 22% have a visible AI strategy. Without one, organisations risk a scattershot approach in which employees use multiple disparate tools, fuelling AI sprawl.
“It’s about defining what you need AI to improve, prioritising those use cases and then making sure that you’re using the right tools for the right job,” says Graydon.
Taking a strategic approach also pays off economically. Organisations that adopt an AI strategy are 3.5 times more likely to see a return on their AI investments and twice as likely to see revenue growth as organisations that don't, Thomson Reuters data shows.
Leadership is a key component of this strategy. Law firms are increasingly hiring roles such as chief operating officer or chief transformation officer to help formalise their AI initiatives. Some firms are also bringing in additional AI expertise such as prompt engineers, agent orchestrators and human-in-the-loop specialists to support lawyer adoption.
This matters because organisations need a workforce that is AI-ready, says Fawcett. While about 96% of legal professionals are aware of AI, 71% say they don't have a good understanding of its practical applications.
“There’s a gap between awareness and how to generate value,” says Fawcett. “People need encouraging to use AI as a transformation tool and not just an efficiency tool, and really think about how they use it. But underlying it all is really the articulation of a visible AI strategy that comes from the top. Those that do articulate a strategy are much more likely to see revenue going up.”
Part of the strategy involves standardising AI workflows and playbooks for repeatable work so lawyers use the technology in a structured, consistent way. Organisations need to assess which legal workflows will genuinely benefit from AI, rather than applying it indiscriminately. It also means putting in place the foundational elements that make scaling AI possible, such as ensuring legal AI systems have access to high-quality data and regularly updated templates, clause and precedent libraries.
Adopting a deliberate AI strategy will also ensure systems are connected and not cobbled together in a disjointed way. For Fawcett, having connected systems means two things.
“First it means it needs to be connected to your systems and your data and the way that you work, so you need to make sure the content and knowledge of your firm is available to that system,” he says. “But then you also need to look at the source of the data and the source of the output.”
This means that when evaluating AI platforms for legal work, the key test is whether the underlying data is backed by trusted, verifiable content.
“Legal work needs to be accurate,” says Graydon. “It’s got to be grounded in sound legal content. Organisations shouldn’t rush to select a re-skinned LLM where you have no visibility into what it’s basing its answers on. What matters is selecting a system that is grounded in high-quality legal content.”
When selecting professional-grade AI technology, legal leaders need to ensure what they are buying is secure and can be easily integrated into everyday workflows. Fawcett recommends that, beyond adopting tools built on advanced reasoning models, legal leaders also look for legal content informed by deep subject-matter expertise, which is what produces the most reliable results.
“You can have all the content in the world, which is the book smarts, but you need the street smarts of the people that have done that job to really drive the value,” he says.
As the legal industry enters the next phase of AI growth, success will hinge not on who can assemble the biggest collection of AI tools, but on who can build the most trusted, reliable systems.