
With AI tools becoming more common in the legal sector, legal professionals face the challenge of using them ethically while navigating an uncertain legislative backdrop.
“Lawmakers are keen to balance facilitating the innovation of legal services on the one hand, and the protection of clients and the justice system against harmful use of AI on the other hand,” says Felix Steffek, principal legal AI advisor at Thomson Reuters.
For example, in the UK, the government is planning to introduce a comprehensive AI bill in 2026, giving it time to assess the impacts of the technology rather than rushing into creating regulation that risks stifling innovation.
However, while the legislative backdrop for AI continues to evolve, existing laws and regulations may already capture the use of AI in the provision of legal services, particularly where a lawyer’s professional duties are concerned. The High Court of England and Wales recently set out rules on the use of AI in preparing court submissions and client advice, stating that lawyers have a professional obligation to verify the accuracy of any text produced by AI.
“Not keeping that standard, according to the court, may lead to lawyers being reported to regulators. It opens the possibility of proceedings for contempt of court, and it may also trigger adverse costs orders,” says Steffek.
To adopt AI ethically, then, legal leaders need to approach it in the same way they would any other area of their professional duties. For Steffek, this means ensuring people always come first, be it clients or the legal professionals at their firms.
“The focus in a world of AI or in a world without AI should always be the same – on solving a client’s problems or creating value for a client,” says Steffek. “That means AI is not a tool that is just to be used for its own purpose. Instead, it should always serve the goals that clients have. So understanding the goals and problems of clients always comes first.”
Evolving ethical concerns
An ethical AI framework also needs to distinguish between the different tasks that lawyers perform. For example, using AI to research or analyse information raises different ethical considerations than using AI to prepare court documents or make decisions.
“This means there’s not one size fits all – instead, you need to apply your values on the specific task,” says Steffek.
Firms also need to ensure their ethical AI policies go beyond vague, generic principles.
“When we reflect back on the first wave of AI policies, they all focused on common values such as fairness, transparency, autonomy and similar commonplaces,” says Steffek. “However, such general values are often not specific enough when it really comes to the concrete ethical decision-making that a professional needs to take.”
Therefore, legal leaders must develop more specific principles that strike a balance between values and interests. For Steffek, this means looking at the interests of those who are affected by justice systems, and developing their ethical framework in a way that best serves those interests.
“The good news is that we can very often refer to the values and principles that have been established before the emergence of generative AI, because these values and legal principles still hold today,” says Steffek. “The challenge is that we have to apply them in a new technical context.”
This means firms also need to ensure non-lawyers are covered by their AI policies, particularly those in technology-related roles that might affect how client services are delivered.
To that end, legal leaders must ensure their AI policies are well communicated so that employees in all areas of the firm understand what is expected of them, particularly given the fast-changing nature of the technology.
“Best practice is to ensure that the development, communication and application of the AI policy engages all levels of the firm, so ideally this is both a top down and a bottom up process,” says Steffek. “What is particular to AI is that it keeps developing, so the communication process needs to be continuous and not just a one-off event, because there’s constant change.”
Thinking outside of the box
Policies also need to cover the ‘grey use’ of AI – for example, employees turning to public tools such as ChatGPT that sit outside the firm’s approved technology. Firms are at greater risk of that happening – and of breaching their professional duty – if they don’t have a comprehensive AI policy or fail to communicate it meaningfully.
Firms must also ensure their ethical strategies are not just box-ticking exercises, but dynamic frameworks that guide how lawyers work.
“The key is to really understand that ethics is part of value creation and not a barrier to value creation,” says Steffek. “Clients want high-quality and cost-effective legal services, and if AI contributes to this, clients will tend to support it.”
One firm already putting this ethical AI approach into practice is UK-based Primas Law, which has adopted Thomson Reuters’ CoCounsel platform. For Adam Kerr, managing partner at Primas Law, good law firms should already be set up to adopt AI in an ethical way.
“When you have conversations around technology and AI in particular, people often frame the discussion as if it requires a completely new, revolutionary way of thinking in terms of how we use it,” says Kerr. “That’s not necessarily the case, because if law firms are run properly, they will have very strong governance frameworks and very strong risk management functions in place, so if you’re introducing AI or any kind of innovation, you’re always operating within that landscape of good governance.”
Another way firms can be certain they are using AI in an ethical way is to ensure they know why they are adopting AI in the first place, says Kerr.
“When you’re picking what system you’re going to use, you have to address why you’re using AI in the first place,” he says. “Some people think that because everyone else is getting it, we need to use it too. Often in those situations, when you scratch the surface of that discussion, they’re not quite sure what tools they’re getting and why. So the procurement part of the process is crucially important.”
Kerr says he selected Thomson Reuters as an AI provider because he wanted a tool that could be deployed across the entire firm, rather than multiple unconnected systems that each handle only discrete tasks. Ethical considerations, such as ensuring client data is kept safe, were also an essential component of the decision-making process.
“From an ethics perspective, it’s a closed system that is pulling information from and learning from the best suite of legal resources in the world,” says Kerr. “That limits the risk of bias and it limits the risk of hallucinations.”
Taking people on the journey
One of the challenges that legal leaders often face when it comes to integrating AI technology is getting buy-in from lawyers so that tech investments don’t end up being mothballed.
“We’ve been talking to our team about doing something along these lines for a number of years, so we brought them on the journey with us. By the time it came to actually giving them stuff to try, they were already bought in and we didn’t face a huge amount of resistance,” says Kerr. “People understood what we were doing and why we were doing it. So that transparency internally really helped us from an integration and an adoption perspective.”
Even so, adoption is an ongoing process and not something that will change overnight, given that many ways of working are culturally ingrained, particularly for more experienced lawyers.
“Getting people to change their daily habits is ongoing, not because people are resistant, but people have been doing things in a particular way for 10, 15, 20 years,” says Kerr. “So even if they want to use the tools that you’re giving them, it just takes a while to re-programme.”
As adoption grows, firms must ensure their AI policies have guardrails and feedback mechanisms in place that can keep ethics top of mind when legal professionals are using AI tools and exploring new use cases.
“You have to constantly look at the way in which technology is being used within the firm, on a matter-by-matter basis,” says Kerr. “Having an open culture is important too, so that people can talk about these things and express concerns and ask questions if they’ve got them. If they’ve got misgivings, let’s talk it through. They might have a really good point to make that we need to consider firm-wide.”
The balancing act
Given how fast the technology is evolving, firms must ensure they can strike an appropriate balance between innovation and responsible AI use.
“The way we’ve tried to educate people about this is almost to humanise the technology,” says Kerr. “Don’t think about it as a tool or a system. Think about it as a person you have got at your disposal – the world’s best paralegal or junior lawyer. But even if you had the world’s best one-year qualified lawyer, you would not send out a piece of their work without it being reviewed and supervised and going through your usual evaluation process. This is no different.”
By adopting a comprehensive, ethical AI strategy, firms can innovate safely and remain competitive while ensuring their lawyers continue to deliver legal services to the highest professional standards.
To learn more about AI you can trust, visit: www.thomsonreuters.com