
With law firms adopting AI at a rapid pace and transforming how they deliver legal services, corporate legal departments need to update their engagement letters and billing guidelines to reflect this new world of AI-enabled work. To ensure instructions to outside counsel are practical, enforceable and aligned with professional standards, this guide covers the essential elements in-house teams need to consider when setting rules around how and when their external providers can use AI on legal matters.
1. Disclosure guidelines
The first step is to lay out clear requirements around what firms are expected to disclose about their AI use.
“Not every tool is built the same, but from a customer trust standpoint, in-house departments should expect transparency, whether or not the output the legal department is getting is driven by a tool or a human,” says Anthony Rokis, head of customer trust at Thomson Reuters. “The law firm providing the service needs to have a human in the loop, but the base output can vary, so they should demand full transparency into the lineage of the output.”
This means that if a firm is using AI tools, it needs to disclose which tools are used and for which tasks, how client data is handled and what level of human review is involved. In-house teams should also clarify whether they expect disclosure for all AI use, no matter how trivial a firm may consider that usage.
“Some firms might deem low transactional work does not require disclosure, but if the legal department that they’re servicing asks the question, they should be willing to disclose if and when an AI tool is used,” says Rokis.
2. Confidentiality safeguards
If firms are using AI tools on live matters, in-house teams must have confidence that there are adequate security controls in place around confidentiality, user access and verification standards for AI outputs.
“Firms need to have an appropriate third-party risk management function to challenge and validate security, compliance and regulatory controls for the tools they use,” says Rokis. “Subsequently, a client should expect that a firm has done the proper due diligence of any tool they’re using that has an AI component.”

In-house teams should also demand that the firms they engage have clear AI training and acceptable-use policies to educate employees and ensure they have the necessary skills to use AI safely, says Rokis. In-house teams must also clearly indicate whether matters should only be handled by fiduciary-grade legal AI tools like CoCounsel Legal rather than public AI platforms such as Claude or ChatGPT, he says.
“That is up to the legal department’s policies and ethical standards, but depending on the types of cases and the types of information within the case, it may be a limit that the department sets, or maybe they just want the most efficient output, so they don’t mind what tools are used,” Rokis adds.
3. Audit policy
As AI tools are increasingly embedded into law firm workflows, in-house teams must be able to request a clear audit trail that documents prompt usage, reviewers and how AI outputs influenced decision making.
“In terms of reviewers, the law firm should hold themselves, and ultimately, the partners and associates accountable for performing that review,” says Rokis. “However firms present that audit documentation, they should include the author and/or reviewer so the legal department knows there’s a human in the loop.”
Prompt disclosure, however, may not be as straightforward. Given the proprietary expertise and expense that may have gone into prompt engineering or building agentic AI workflows, disclosing predefined prompts that are specific and tailored enough to the firm could raise IP issues and create commercial risk, Rokis notes.
“The evolution of prompt transparency will be quite interesting within the court systems,” Rokis says. “As of today, I don’t think they should be proactively provided; only at the request of their clients in terms of key decisions.”
4. Staffing requirements
In-house teams should also make clear in their engagement letters when it is appropriate for AI to handle work and when it should only ever be handled by a human. However, in-house teams should acknowledge the potential inefficiencies and timeline impacts for delivery if a human handles a matter when AI may have provided a quicker solution, says Rokis.
Given this could impact billing costs, in-house teams may decide to grant firms some autonomy in determining where they see AI optimisation opportunities, for example in areas such as legal research and document analysis.
“Clients may say they’re not ready for an AI agent to actually file a motion on their behalf, but they may be okay with AI drafting a memo and then a human review at the end,” Rokis adds.
5. Fee discussions
The final step is updating billing guidelines to incorporate AI usage. Given that AI use can significantly speed up completion times, corporate legal teams and their service providers need to agree appropriate pricing when AI is used on a legal matter. For example, in-house teams may wish to request flat fees, phased fees or other alternative fee arrangements depending on how much AI is used on a matter.
“We’re probably going to shift to more of an outcome-based, shared-value model,” says Rokis. “This means working out how much value you are getting as a client and then attaching a pricing structure based on the value creation that is being generated.”
For example, some in-house teams may place greater value on work being delivered faster, so pricing may not differ significantly. Others may expect a lower cost given that fewer billable hours were needed to complete the work.
As AI billing models mature, industry standard values attached to certain types of AI-enabled legal work may emerge.
“That’s where firms can start to have some standardisation to avoid the volatility in revenue,” says Rokis.