
The EU AI Act, the world’s first comprehensive, risk-based regulation of AI, entered into force in August 2024. The law will be implemented in stages, the first of which began in February 2025. With full enforcement planned from August 2026, the clock has started for UK companies selling AI systems or AI-enabled products in the EU.
The law applies to any organisation selling AI systems in the EU market, meaning UK firms have limited time to overhaul governance, documentation and model oversight, or risk penalties up to 7% of global turnover or €35m, whichever is higher.
The act is broad in scope and its expectations are detailed. UK leaders who have not begun preparations must do so or risk regulatory penalties and operational disruption.
Key compliance dates for the EU AI Act
Implementation of the regulation began in February 2025 and will continue until August 2026, although some high-risk systems have until August 2027 to complete the transition required by the legislation. Here are the key implementation dates of the AI Act.
- 2 February 2025: Ban begins for AI systems that present unacceptable risks, such as predictive policing based on protected characteristics and untargeted biometric scraping.
- 2 August 2025: Obligations for general-purpose AI models begin. Model developers must publish technical documentation, training-data summaries, model cards and systemic-risk-mitigation plans.
- 2 August 2026: Full compliance for high-risk AI systems becomes mandatory. Providers must have established quality-management systems, human-oversight structures and post-market monitoring, and must complete conformity assessments, standards certification, mandatory incident reporting and registration in the EU’s high-risk AI database.
- 2 August 2027: Transitional arrangements end for certain embedded high-risk AI systems, particularly those that function as safety components in regulated products and were already on the EU market before the act’s adoption.
The European Commission maintains a Q&A platform to clarify the expectations.
Which UK companies fall under the EU AI Act?
UK organisations may misjudge their exposure to the act’s legal and operational requirements. The law applies to any provider, deployer, importer or distributor placing AI systems on the EU market, which means it’s not only EU-based firms that fall within its scope.
Firms operating in these areas must be particularly aware of their obligations under the act:
- Deployments in high-risk sectors: These include AI systems for credit scoring, hiring, worker management, education, healthcare, critical infrastructure, law enforcement, transport safety or biometric ID. Any UK vendor supplying these systems to EU customers is within scope of the act.
- General-purpose AI: UK companies that build or significantly fine-tune foundation models must meet transparency, cybersecurity and systemic-risk obligations, even if the model isn’t marketed as ‘high risk’.
- AI embedded in products: Manufacturers and software firms selling AI-enabled devices or tools – from consumer tech to industrial automation and decision-support systems – are in scope if the AI meaningfully influences decisions or safety.
Moreover, providers and deployers located outside the EU can be covered by the act if their systems’ outputs are used inside the bloc, which is the scenario many UK businesses now face.
What documentation does the EU AI Act require?
Preparing for the act requires a level of documentation and transparency many UK organisations have never formalised. High-risk providers will face demanding record-keeping and governance standards as the EU increases its scrutiny of AI systems. These are the core requirements UK leaders must prepare for now.
At minimum, high-risk AI providers must produce:
- Technical documentation: Details of intended use, model specifications, performance metrics, risk-mitigation measures, training procedures and management records.
- Data-governance documentation: Evidence demonstrating data quality, representativeness and rights compliance, as well as processes for identifying and correcting bias.
- Human-oversight plans: Clear instructions, made accessible to technical and non-technical operators, specifying when human intervention is required and how it is applied.
- Cybersecurity and resilience evidence: Results of security testing, vulnerability-management procedures and safeguards designed to prevent model manipulation or misuse.
- Post-market monitoring systems: Structured processes for tracking real-world model performance, logging incidents and reporting serious risks to EU authorities in defined timelines.
Developers of general-purpose AI will be required to publish training-data summaries, assess systemic risks, prevent unauthorised uses and provide downstream providers with information needed for their own compliance.
Ensuring the accuracy of the documented information is essential. These records underpin conformity assessments and enforcement decisions, and gaps could invite regulatory scrutiny and lead to penalties.
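For engineering teams that prefer to track these artefacts in a structured, machine-readable form rather than in scattered documents, the sketch below shows one possible way to represent them. It is illustrative only: the field names and the gap check are editorial assumptions for this article, not terms or tests defined by the act.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are editorial assumptions,
# not terminology defined by the EU AI Act.
@dataclass
class HighRiskDocumentation:
    system_name: str
    intended_use: str                                              # technical documentation
    performance_metrics: dict = field(default_factory=dict)       # accuracy, error rates, etc.
    data_governance_evidence: list = field(default_factory=list)  # data-quality and bias audits
    human_oversight_plan: str = ""                                 # operator instructions
    security_test_results: list = field(default_factory=list)     # cybersecurity and resilience evidence
    incident_log: list = field(default_factory=list)               # post-market monitoring records

    def missing_items(self) -> list:
        """Return the documentation categories that are still empty."""
        gaps = []
        if not self.performance_metrics:
            gaps.append("performance metrics")
        if not self.data_governance_evidence:
            gaps.append("data-governance evidence")
        if not self.human_oversight_plan:
            gaps.append("human-oversight plan")
        if not self.security_test_results:
            gaps.append("security test results")
        return gaps
```

A record kept in this shape can be reviewed before each release and exported when a conformity assessment or a regulator’s request arrives.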
Where are UK firms most vulnerable?
Alongside hefty fines, UK companies may face operational disruption if they flout the regulation. High-risk systems that are not properly documented or tested may fail conformity assessments, leading to delayed launches, suspended deployments or barriers to selling in the EU. Companies that cannot demonstrate data quality, traceability or human oversight could face mandatory corrective actions from regulators, diverting staff and resources from core work.
There is also commercial risk. EU clients are starting to require evidence of readiness in procurement processes, which means slow movers risk losing contracts to competitors who are better prepared.
Firms with fragmented AI development or unclear ownership of model governance face the highest exposure, as they may struggle to produce consistent documentation across products. For organisations operating multiple AI systems, these risks can compound quickly.
Next steps for UK business leaders
UK executives must prioritise and sequence compliance work before the deadlines converge. The first step is mapping all AI systems connected to the EU market, including embedded or downstream uses, which are not always recognised internally as AI. Companies should classify each system against the act’s categories, confirm whether high-risk obligations apply and appoint a governance lead with authority to oversee compliance.
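One practical way to run that mapping exercise is to keep a structured inventory of systems with a provisional risk tier for each. The sketch below is a rough illustration: the tier labels follow the act’s broad categories, but the keyword-based triage is an editorial assumption and is no substitute for legal classification.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "transparency obligations"
    MINIMAL = "minimal risk"

# Hypothetical shortlist drawn from the high-risk sectors mentioned earlier in this article.
HIGH_RISK_USE_CASES = {
    "credit scoring", "hiring", "worker management", "education",
    "healthcare", "critical infrastructure", "law enforcement", "biometric id",
}

@dataclass
class AISystem:
    name: str
    use_case: str
    placed_on_eu_market: bool
    governance_lead: str = ""

def provisional_tier(system: AISystem) -> RiskTier:
    """First-pass triage only; formal classification needs legal review."""
    if system.use_case.lower() in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

inventory = [
    AISystem("cv-screening-model", "hiring", placed_on_eu_market=True),
    AISystem("internal-faq-chatbot", "employee support", placed_on_eu_market=False),
]

for system in inventory:
    if system.placed_on_eu_market:
        print(f"{system.name}: {provisional_tier(system).value}")
```

Even a simple inventory like this makes it easier to assign a governance lead per system and to see which products need full documentation first.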
Documentation work must begin early. Producing technical records, risk assessments and human-oversight plans can take months, and many firms will need to audit sources of training data, evaluate monitoring practices and update vendor and customer contracts. Cross-functional coordination is essential to avoid duplication and rework.
Organisations that begin this work now will be better positioned to meet deadlines without disrupting product development or customer commitments.
