Legal leaders have spent years piloting AI. Now comes the harder part: making it business as usual. A legal AI operating model, built on governance, security, training and clear human-in-the-loop rules, is the difference between scattered experiments and responsible, firm-wide adoption at scale.

Legal leaders have spent the past few years experimenting with AI. Now they must move beyond product testing and embed AI into the way they work at scale. This means adopting a legal AI operating model that combines data security, robust governance and clear usage policies, so organisations can successfully transition from scattered pilot projects to widespread daily use.
“The best approach is to treat AI implementation as a strategic programme,” says Agustin Sanchez, account specialist director at Thomson Reuters. “Those corporate legal departments that are truly succeeding now are the ones that have developed a structured AI operating model, leaving behind this ad hoc experimentation and operationalising AI with intent, by baking AI into everyday work.”
A well-structured AI operating model starts with the basics: clear leadership around what AI can be used for, what is prohibited and when organisations need to keep a human in the loop.

“When you’re adopting AI in an organisation, it needs to come from the top down, and it needs to have the message that AI is here to assist and facilitate you in doing your job, and it’s not here to replace you,” says Jason Heyman, legal product specialist director at Thomson Reuters. “You need those clear guidelines to set you up for success for how you can then deploy AI within your organisation.”
Law firms that are successfully advancing from pilots to wider adoption not only have a clear strategy detailing what they want to achieve, but also AI champions who will promote usage across the firm.
“Those champions are always best if they include leaders, whether it is partners or seniors in the firm who are going to model the behaviour they want to see in their team, and then prioritise use cases which are aligned to their strategic goals,” says Sophie Baugh, senior manager, customer success at Thomson Reuters.
Firms that do this successfully also typically have a steering committee or a group of individuals that periodically meet to oversee AI use in the firm and ensure that technology is implemented responsibly, adds Baugh.
A responsible AI approach means taking into consideration people and processes, while also ensuring that when lawyers use AI on legal matters, they use technology designed specifically for legal work. This means only using tools that retrieve information from authoritative sources and provide citations for every answer so the outputs can be verified. These tools must also have strict data security and privacy protocols to ensure confidential information is kept safe and that data won’t leak or be misused, says Sanchez.
Second, responsible AI means training lawyers on appropriate usage to reduce shadow AI use, where lawyers adopt AI tools without the organisation’s knowledge or consent.
“It’s very tempting when lawyers are under pressure to get things done that they may use a non-sanctioned AI tool to get a second pair of eyes on their work,” says Sanchez. “If a company doesn’t have strong guardrails and policies around AI use, that shadow AI usage is going to happen. It’s not sufficient just issuing a memo saying these are the things that you cannot do, they really need to have a solution that they’re guided to.”
Putting solid data foundations in place is also a critical part of responsible AI use, given the potential issues caused by relying on inaccurate data.
“When you think about what AI’s capabilities are and how quickly it can multiply and pass right through an organisation, bad data can be multiplied very, very quickly,” says Heyman.
There are several operating model components legal leaders also need to consider when they are embedding AI into everyday workflows. First is having the right governance ownership and structure in place, which will usually depend on the type of organisation. For corporate legal departments, AI is likely to be a general counsel’s responsibility.
“The general counsel may lead that AI governance committee, sometimes specific to their areas or to the broader enterprise,” says Sanchez.
One of the roles of that committee is to assign decision rights around who approves new AI tools and which legal matters can be supported by AI, Sanchez adds.
For law firms, that ownership may hinge on the size of the firm. For smaller firms, it may be managed by someone doing a dual role, says Baugh, whereas for larger firms they may have dedicated roles such as a chief information officer, chief technology officer or chief innovation officer.
The next step is translating governance into policy. This means not only making acceptable-use policies clear, but also outlining the verification steps required once AI has been used, and in what circumstances. Sanchez says this means implementing regular audits and spot checks to review AI outputs for issues such as bias, while also ensuring that people follow AI usage rules.
While organisations may allow some level of AI automation without human intervention for low-risk work, in most cases there will need to be a human in the loop at some stage of the process.
“None of us appreciate getting an AI automated message, especially when we’re paying for a service,” says Baugh. “If you’re using AI to help draft a practice note quicker or an advice note to a client, you’re going to read it to humanise it and make sure that it sounds like it’s coming from you.”
For higher-impact work, for example client-facing briefings or contract drafting, any AI outputs would always need reviewing by a senior lawyer, says Sanchez. And for some high-stakes work, AI use may not be appropriate at all, he adds.
Ultimately, implementing a structured AI operating model allows organisations to scale with intention and make their AI adoption a success.
To learn more about AI you can trust, visit: www.thomsonreuters.com