
AI is revolutionising the audit sector, helping overworked accountants to speed up or improve their processes. The tech has plenty of uses. It can scan millions of transactions to identify anomalies, summarise board minutes in a matter of seconds and extract information from complex contracts in a fraction of the time it would take a human.
But convenience comes at a cost, and too many firms are rushing to implement AI without first developing proper frameworks to evaluate the risks and benefits. According to a recent review by the Financial Reporting Council (FRC), none of the six biggest accountancy firms have any formal process to monitor or quantify the impact of AI on audit quality, despite widely adopting the tech.
This raises serious concerns about oversight in a profession that is built on accountability and honesty.
Audit quality and trust at risk
“Without clear metrics, it is difficult to know the extent to which AI is improving the quality of audits, increasing efficiency or just adding complexity,” says Akber Datoo, chief executive of D2 Legal Technology, a data and governance consultancy that works with financial institutions.
“Paradoxically, AI can turn the audit into a black box,” he adds. Humans are often unable to determine why AI systems produce the outcomes they do – why they flag a particular risk or approve a certain transaction. If auditors cannot explain an AI’s decisions, then they do not truly have oversight over the audit process. The quality and trust of the audit is therefore undermined.
A fool with a tool is still a fool – only faster and potentially more dangerous
Concerns about biases and other ethical pitfalls are growing. AI tools can reflect or even amplify biases present in training data. And, because tech users who rely heavily on AI often fail to sufficiently critique its outputs, such errors or biases can easily make their way into the final audit report.
“The adage ‘garbage in, garbage out’ still applies,” Datoo notes. “AI can process flawed inputs at speed, producing incorrect results with a veneer of legitimacy. Worse still, such errors can quietly cascade across multiple audits before anyone notices.”
The audit partner and the sign-off team are ultimately responsible for any mistakes made by the AI tools they use. Auditors must scrutinise not just the numbers but also the data and assumptions behind the AI. When professional scepticism gives way to a blind acceptance of AI outputs, audit quality will suffer. “A fool with a tool is still a fool – only faster and potentially more dangerous,” Datoo says.
How audit teams can use AI safely
To use AI safely and effectively in the audit process, accountants must understand how these systems arrive at their conclusions and verify their reliability. Auditors must test their tech in different situations; they must control the data it uses, review its assumptions and track any discrepancies.
“Too often, firms have implemented AI with scant training on how to evaluate its outputs,” says Dr Clare Walsh, director of education at the Institute of Analytics. Accountants, she explains, must identify key performance metrics and test whether new solutions developed by third parties will work with their firm’s data. “Rigorous governance training on human monitoring processes is needed, with clear feedback loops to report and act on any concerns.”
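As a rough illustration of the kind of performance metrics and feedback loops Walsh describes, the Python sketch below scores a third-party flagging tool against a small labelled sample of a firm's historical transactions and logs disagreements for human review. The data, the stand-in tool and its threshold are hypothetical assumptions for illustration, not any vendor's real product or API.

```python
# Minimal sketch: scoring a third-party anomaly-flagging tool against a
# labelled sample of a firm's own transactions. The tool, data and threshold
# are hypothetical placeholders, not any vendor's real system.

labelled_sample = [
    # (transaction_id, amount, known_anomaly) - labels come from prior audit work
    ("T001", 125_000.00, False),
    ("T002", 9_999.99, True),
    ("T003", 54_300.50, False),
    ("T004", 1_200_000.00, True),
]

def vendor_flags_transaction(amount: float, threshold: float = 100_000.0) -> bool:
    """Stand-in for the third-party tool's output; real tools are far more complex."""
    return amount >= threshold

true_pos = false_pos = false_neg = 0
discrepancy_log = []  # the feedback loop: disagreements go to a human reviewer

for tx_id, amount, known_anomaly in labelled_sample:
    flagged = vendor_flags_transaction(amount)
    if flagged and known_anomaly:
        true_pos += 1
    elif flagged and not known_anomaly:
        false_pos += 1
        discrepancy_log.append((tx_id, "flagged but not a known anomaly"))
    elif not flagged and known_anomaly:
        false_neg += 1
        discrepancy_log.append((tx_id, "missed a known anomaly"))

precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")
print("for human review:", discrepancy_log)
```

Even a basic exercise like this gives a firm numbers it can track over time, rather than taking a vendor's accuracy claims on trust.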
Simple safeguards
Here’s a list of basic checks auditors should perform when using AI tools in their work:
Check the accuracy of AI-generated meeting summaries (automated minutes).
Ensure that AI tools correctly identify key clauses in legal documents.
Confirm that document-scanning tools highlight the right information for auditors to review.
Understand the extent to which the scanning quality affects the final audit outcome, especially when looking for financial misconduct.
Review the ‘precision’ settings, which decide how sensitive the tool is when flagging something as suspicious.
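To show why that last check matters, here is a small sketch, with invented figures rather than any real tool's settings, of how a sensitivity threshold changes what gets flagged: set it too loose and reviewers drown in alerts; set it too tight and known issues slip through.

```python
# Minimal sketch of the sensitivity trade-off behind a 'precision' setting.
# All figures are invented; real tools use far richer signals than amount alone.

transactions = [
    ("T101", 4_500), ("T102", 48_000), ("T103", 95_000),
    ("T104", 110_000), ("T105", 260_000), ("T106", 890_000),
]
known_issues = {"T103", "T106"}  # issues already confirmed by prior audit work

for threshold in (50_000, 100_000, 250_000):
    flagged = {tx_id for tx_id, amount in transactions if amount >= threshold}
    missed = known_issues - flagged
    print(f"threshold {threshold:>7,}: {len(flagged)} flagged for review, "
          f"{len(missed)} known issue(s) missed")
```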
Because no two machine-learning algorithms work in exactly the same way, auditors must be clear about which type of tool they are using. Some tools, such as unsupervised AI, are relatively easy to understand but fairly unreliable for spotting unusual activity, Walsh says. Others, such as deep-learning systems, are great at identifying problems but present significant challenges for human explainability.
Often, these tools are built with parts from different teams or companies, which makes it difficult to judge their reliability. Although audit teams typically do not program these algorithms, they should have a basic theoretical knowledge of how the systems work and of the risk factors involved, Walsh says.
“It is unclear at the moment whether companies using AI audit tools could ever prove that an AI tool can detect all the areas that regulatory bodies expect to be picked up,” Walsh notes. “It’s a very demanding success metric.”
A shortage of technical skills
Most accounting teams do not have the technical expertise to evaluate machine-learning models. “Auditors are grappling with a clear skills gap,” says Datoo. Reviewing financial reports produced or influenced by AI requires working knowledge of data science and algorithms – areas that most auditors are not trained in. Similarly, many AI developers have little knowledge of accounting standards. The human-in-the-loop model is critical, Datoo adds, but not easily achieved without proper training.
“The challenge for the accounting industry is to build the relevant AI knowledge and expertise in trainee auditors,” says Phil Broadbery, head of technology at PKF Littlejohn, an accountancy and audit firm.
But, as AI automates more basic tasks, many accountancy firms are cutting junior roles. KPMG has slashed its graduate cohort from 1,399 in 2023 to just 942 in 2025. Broadbery argues that, by doing so, the industry is undermining its future.
Although AI can perform many tasks handled by junior employees, entry-level roles help young professionals build the foundational knowledge needed to progress to more complex tasks. “How can the industry expect to train the next generation of auditors if AI automates many of these foundational tasks, such as evidence collection and validation and basic analytics?” Broadbery asks. “It would seem a very short-term decision to cut hiring of future talent because AI is doing the work. The consequences of that will become very apparent in five to 10 years’ time.”
Instead, Broadbery emphasises the need for earlier exposure to judgement-based tasks among junior auditors, as well as training on how AI works, its limitations and how to interrogate its outputs. Soft skills and scepticism are equally important, he adds. “As AI handles the mechanical tasks, human auditors must double down on professional scepticism, communication and ethical judgment.”
Can AI be audited?
Algorithmic audits are emerging as a way to hold AI to account. Just as financial audits examine books and records, AI audits evaluate an algorithm’s data, design and decisions, checking for flaws, biases or compliance risks.
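As a loose illustration of one such check, and not a description of how any firm named here actually audits algorithms, the sketch below compares how often a tool flags transactions from two hypothetical customer segments; a large gap would prompt questions about the training data and design.

```python
# Minimal sketch of one algorithmic-audit check: comparing flag rates across
# two customer segments. Segments, decisions and the 20% tolerance are
# illustrative assumptions, not a regulatory standard.

decisions = [
    # (customer_segment, was_flagged)
    ("segment_a", True), ("segment_a", False), ("segment_a", False), ("segment_a", False),
    ("segment_b", True), ("segment_b", True), ("segment_b", False), ("segment_b", False),
]

def flag_rate(segment: str) -> float:
    subset = [flagged for seg, flagged in decisions if seg == segment]
    return sum(subset) / len(subset)

rate_a, rate_b = flag_rate("segment_a"), flag_rate("segment_b")
gap = abs(rate_a - rate_b)
print(f"flag rate A={rate_a:.0%}, B={rate_b:.0%}, gap={gap:.0%}")
if gap > 0.20:  # arbitrary tolerance for illustration
    print("Disparity exceeds tolerance: review training data and model design.")
```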
Some audit firms are experimenting with services that evaluate not only financial statements, but also the AI tools used to create them. Deloitte, EY and PwC are developing AI-assurance services to help assess whether AI systems perform as intended and to meet growing client demand for trustworthy tools. This could open a new revenue stream for auditors, just as the emergence of environmental, social and governance (ESG) metrics created a market for ESG-assurance services.
Efforts to make AI more transparent are underway. The FRC has emphasised the need for “proportionate and appropriate documentation” and contextual explainability in AI systems. And global standards bodies such as the International Organisation for Standardisation (ISO) and the Institute of Electrical and Electronics Engineers (IEEE) are developing technical standards for AI governance. However, clarity and wider consensus on these various standards are still needed.
“It’s something of a Wild West right now,” says Datoo, adding that mandatory audits for high-risk AI uses are “likely on the horizon”, especially as regulation such as the EU AI Act comes into effect. “Auditing AI is a new frontier and that comes with its own challenges,” he says. “Models evolve, complexity increases and new risks constantly emerge, making continuous monitoring and reassessment critical.”
If they are to be trusted in financial auditing, AI systems themselves must undergo rigorous audits as standard. Keeping the tech in check, however, will require more than governance and reporting frameworks. Audit professionals at all levels must be trained to interrogate AI and identify errors in its outputs. The industry must act quickly to adapt to the increased use of AI in financial reporting – its credibility is at stake.
