To turn AI experimentation into lasting advantage, law firms must prioritise security, governance and trusted generative AI tools that safeguard client data

With the vast majority of law firms already moving beyond AI experimentation and pilot projects, the transition to wider adoption means firms need to be confident that the AI tools they use are reliable, transparent and able to keep client data secure.
Given that not all AI use cases carry the same risk, some firms are approaching this by separating use cases into risk categories, from low-impact work such as admin-related tasks to more sensitive matters involving client information that require higher levels of security.
At the heart of this approach is the fact that generative AI (GenAI) technology is different from previous iterations of AI and machine learning tech because it introduces a “non-deterministic” aspect to using the technology, says Mark Cullen, VP of product at Thomson Reuters.
This means that, unlike older technology, where a given input produced a known output, GenAI is unpredictable. “Because of the way GenAI models work by design, it doesn’t necessarily mean that for that same given input, you get the same output every time,” says Cullen. “This means outputs must be grounded in truth, grounded in fact and verifiable for it to be trusted, especially in industries where trust is sacrosanct.”
To get to that point, there are several foundational elements that firms need to have in place around data quality and governance to ensure they can build trust with clients. To start with, firms need to be confident that what their AI tools generate is drawn from specialist and trusted legal content rather than public AI systems where the outputs may be generated from unreliable sources.
Confidence in AI tools
As well as relying on third-party legal content for tasks such as research, firms also need AI tools that allow them to integrate their own content.
“Law firms need to be able to make use of their IP and their raft of experience so that AI outputs can also be grounded in their voice, their opinions, their playbooks and their clause libraries,” says Cullen.
This will require greater alignment between knowledge management teams and legal ops, which will become even more important as firms grow their AI capabilities and differentiation comes to hinge on the quality of legal advice they can offer.
“If everyone has access to the same GenAI tools, it’s through their IP they can differentiate, and ultimately that’s about knowledge management,” says Cullen. “There’s a whole new potential paradigm for knowledge managers to help create and curate prompt libraries, for example.”
By encouraging greater collaboration between knowledge managers and legal technologists, firms can use AI to make it easier than ever for lawyers and clients to access and benefit from firm knowledge.
“When you put those two disciplines together, that’s where the magic can really happen,” says Cullen.
Firms also need strong governance standards to ensure the data being used is consistent and can keep pace with any legal changes, preventing templates or precedent libraries from becoming outdated.
“The law is always shifting beneath one’s feet, so organisations need to ensure their content is up to date and reliable,” says Cullen.
For example, firms should carry out periodic AI audits to check for issues such as poor data quality or security threats, as well as maintaining other governance measures such as a central repository for firm data and standardised document libraries.
Part of building client trust is also ensuring a firm’s lawyers trust the technology too. This means adopting workflow tooling that brings embedded AI capabilities into platforms they are already using on a daily basis, such as drafting tools that sit inside Microsoft Word.
“Trust for lawyers comes with familiarity,” says Cullen. “We don’t necessarily want to use a brand new technology in a whole new paradigm, so if you can bring that new technology into a lawyer’s existing workflow, that familiarity and ease of use will support adoption and breed trust.”
Focus on human impact
Cullen says firms should place less focus on the technology itself and more on the human impact.
“At the moment, the industry is focused purely on the art of the possible and what the technology can do,” says Cullen. “But like defining technologies of other generations, the technology doesn’t alone drive long-term change in the way that the world operates.”
For that to happen, it has to be a combination of people, process and technology.
“If one focuses purely on the technology and just expects magic to come out the other end, that’s where there’s a danger of failure,” says Cullen. “Generally speaking, humans don’t like fundamental change that much. So if you don’t focus as much on the people and the process aspects of this, then you won’t drive that adoption, which ultimately leads to trust, which ultimately leads to return on investment.”
Another key element of building client trust is ensuring firms have a robust vendor partnership strategy, particularly if client data will be used in AI systems. This means evaluating privacy, data governance and other aspects of the technology, such as how the underlying AI models work.
“It all comes back to trust, so it’s about choosing a vendor who has a vested interest in the industry and a proven track record,” says Cullen. “When you’re talking about GenAI, there’s more than 100 GenAI assistants in legal to choose from. Clearly in one to two years, there’s not going to be 100, there’s going to be significantly less. So putting your trust in a horse you know that’s going to be in the race as this consolidation starts to happen is going to be important.”
By adopting a trust-first approach, firms can innovate with confidence and build legal AI systems that clients will value.
To learn more about AI you can trust, visit: www.thomsonreuters.com