
The August 2026 deadline is fast approaching, and organisations need to be ready to meet the high regulatory bar set by the EU AI Act. Heads of IT should already be mapping the compliance layer of AI tools that stretches across APIs, model integrations, and legacy systems to ensure the entire business is auditable and meets regulatory demands.
At a technical level, most organisational exposure begins with APIs. Over the past two years, teams have rapidly integrated third-party AI services into internal and customer-facing systems, from support to analytics to fraud detection. Many of these connections are poorly documented, with limited visibility into what the AI is doing. This is set to become a much larger governance issue with the integration of AI agents that can operate independently of humans and collaborate with other agents.
Continuous monitoring systems are required under the EU AI Act
The first step for organisations is to build a complete API inventory, which includes every external AI endpoint. For each endpoint, there needs to be a log of the data transmitted, the purpose of that transmission, and whether the data is sensitive. Organisations should also build or enable a continuous monitoring system for these logs, as required under the EU AI Act.
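As a minimal sketch of what such an inventory entry might look like, the record below captures the three things the article calls for: the data transmitted, the purpose, and a sensitivity flag. All field names and the example endpoint are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AIEndpointRecord:
    """One entry in the inventory of external AI endpoints."""
    endpoint_url: str              # where requests are sent
    vendor: str                    # third-party provider
    data_fields: list[str]         # what data is transmitted
    purpose: str                   # why that data is transmitted
    contains_sensitive_data: bool  # drives logging and review requirements

inventory = [
    AIEndpointRecord(
        endpoint_url="https://api.example-vendor.com/v1/score",
        vendor="ExampleVendor",
        data_fields=["applicant_id", "employment_history"],
        purpose="candidate screening",
        contains_sensitive_data=True,
    ),
]

# Surface entries that need a closer compliance review.
needs_review = [r for r in inventory if r.contains_sensitive_data]
```

Keeping this as structured data rather than a spreadsheet makes it easy to query the inventory when a regulator or internal auditor asks which endpoints receive sensitive data.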
A monitoring system must capture all inputs, outputs, and relevant metadata to provide a transparent trail for internal reviews and regulatory requests. Organisations should also aim to minimise data outflow at the API layer, reducing the likelihood of sensitive data being exposed to third-party services.
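A simple way to combine both requirements, capturing a transparent trail while minimising outflow, is to wrap every outbound AI call in an auditing function that strips unneeded fields before sending. This is a sketch under assumed names (`REDACTED_FIELDS`, the in-memory `AUDIT_LOG`, and the lambda standing in for a real AI call are all illustrative); a production system would use an append-only store with retention controls.

```python
import time
import uuid

AUDIT_LOG = []  # in practice: an append-only store with retention controls

REDACTED_FIELDS = {"email", "national_id"}  # illustrative sensitive keys

def minimise(payload: dict) -> dict:
    """Strip fields the third-party service does not strictly need."""
    return {k: v for k, v in payload.items() if k not in REDACTED_FIELDS}

def call_with_audit(call_fn, payload: dict) -> dict:
    """Send a minimised payload and record inputs, outputs, and metadata."""
    outbound = minimise(payload)
    started = time.time()
    response = call_fn(outbound)
    AUDIT_LOG.append({
        "request_id": str(uuid.uuid4()),
        "sent": outbound,                              # input trail
        "received": response,                          # output trail
        "latency_s": round(time.time() - started, 3),  # metadata
    })
    return response

# Hypothetical AI call, stubbed for demonstration.
result = call_with_audit(lambda p: {"risk_score": 0.2},
                         {"email": "a@b.com", "amount": 120})
```

Because minimisation happens inside the wrapper, the audit trail records exactly what left the organisation, not what the caller intended to send.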
Once this is in place, APIs connected to the business should be categorised by risk. Low- and minimal-risk APIs may only require transparency controls, but anything touching employment decisions, financial profiling, behavioural analysis, or other real-world impacts must be labelled as high risk, with tracking and audit processes available in real time.
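Categorisation can start as a simple mapping from an endpoint's declared purpose to a risk tier. The set below is an illustrative subset drawn from the purposes named above, not the Act's official high-risk list, so a real implementation would be reviewed against the regulation itself.

```python
# Illustrative subset of high-risk purposes; not the official list
# from the EU AI Act.
HIGH_RISK_PURPOSES = {
    "employment decisions",
    "financial profiling",
    "behavioural analysis",
}

def risk_tier(purpose: str) -> str:
    """Map a declared purpose to a coarse risk tier."""
    return "high" if purpose in HIGH_RISK_PURPOSES else "limited/minimal"
```

Running each inventory entry's purpose through `risk_tier` gives a first-pass triage of which endpoints need real-time tracking and audit processes.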
High-risk systems, particularly in employment and finance, require a rigorous approach to data governance and documentation. This may go far beyond what was in place before deployment and could require systems to be redesigned with traceability and transparency at the forefront.
A checklist
For organisations with high-risk systems, a practical checklist for auditing legacy systems used in decision-making should include:
- Know where it’s used: list every point where the system influences high-risk decisions.
- Track the data: confirm what data goes in, where it comes from, and what changes along the way.
- Rebuild decisions: maintain a clear, step-by-step record of how each decision is made.
- Understand the logic: document how the system reaches decisions, whether through rules or models.
- Check for bias: review historical outputs to identify unfair or inconsistent patterns.
- Confirm human control: ensure humans can intervene or override decisions at any stage.
- Test failure handling: verify the ability to detect issues, roll back decisions, and maintain records.
Legacy systems present unique challenges in the context of AI and autonomous agents, as they often rely on historical models or opaque vendor tools. Under the EU AI Act, particularly Articles 9 to 15, these systems may need to be effectively reverse engineered to meet required levels of transparency.
The costs of the Act are already becoming apparent, with some vendors charging 20-30% more
Alongside internal audits, vendor management is another critical area that organisations need to tighten before the Act comes into force. Many suppliers are already marketing tools as “EU AI Act ready”, but as the recent Delve scandal has shown, organisations need to spend far more time testing and validating vendor claims rather than relying on marketing assurances.
The costs of the Act are already becoming apparent, with some vendors charging 20-30% more to reflect certification costs and engineering overhead. For European firms, this is often seen as a necessary trade-off in high-risk areas, where non-compliance can result in significant fines and reputational damage.
With only a few months remaining, IT leaders need to translate regulatory language into system-level controls. That means knowing every AI touchpoint in the organisation, understanding how it behaves and ensuring it can be controlled.