
On 2 August, a key section of the EU AI Act, targeting GenAI providers, will come into force.
The EU AI Act is an enormous piece of legislation, the first of its kind in the world. Enacted into law in 2024, the act regulates transparency and accountability for AI systems and their developers, and defines levels of risk for AI uses based on societal, ethical and legal considerations.
EU member states will enforce the act, with penalties for the most egregious infringements reaching €35m or up to 7% of annual worldwide turnover, whichever is higher. Similar to GDPR, the regulations apply to any organisation operating in the EU, not just businesses that are located in the bloc.
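To make the penalty mechanics concrete, here is a minimal sketch of how that ceiling is calculated – a hypothetical illustration with an invented turnover figure, not legal advice:

```python
def max_penalty_eur(annual_worldwide_turnover_eur: float) -> float:
    """For the most egregious infringements, the cap is EUR 35m or 7% of
    annual worldwide turnover, whichever is higher."""
    return max(35_000_000, 0.07 * annual_worldwide_turnover_eur)

# A hypothetical firm with EUR 1bn in worldwide turnover:
# 7% of EUR 1bn is EUR 70m, which exceeds EUR 35m, so EUR 70m is the ceiling.
print(f"€{max_penalty_eur(1_000_000_000):,.0f}")  # €70,000,000
```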
Stage two of the act will come into force on 2 August. It targets general-purpose AI platforms such as ChatGPT and introduces a broad range of compliance requirements for the companies that build and provide large language models.
AI providers will be required to thoroughly assess the safety of their models and demonstrate that they have done so through risk assessments and testing. They’ll have to maintain technical documentation about their model architectures and make those records available to authorities. They will also be required to disclose details of their training data when the appropriate authorities request it.
Early in July, the EU released its General-Purpose AI Code of Practice, which outlines a compliance blueprint for GenAI providers. The code is divided into three chapters: transparency; copyright; and safety and security. Although following the code is voluntary, the document establishes what Brussels believes to be best practices for AI companies. Rifts have already erupted over the document, but many major providers, including Google, Microsoft and Anthropic, have signed up to the code this week. Meta, however, has declined.
Here, experts in the fields of tech, law, academia and policy reflect on what the EU AI Act has got right – and what could be improved.
The European business view
This is a very important topic. Humanity needs to address the growing power of AI. It must be transparent and controllable, not harmful.
European businesses are competing with startups from Silicon Valley, which face much less regulation, and that makes it very hard to compete. For instance, when European firms train AI models, they need to assess and comply with the AI act. That, at the end of the day, slows the speed of development.
The EU AI Act is critical to European competitiveness in AI, so we have to think about it together. But the speed of development of the AI industry in Europe should also be taken into account. We lack the intense collaboration – the brainstorming, visits and interviews between businesses and government – that would make the legislation more fruitful.
The EU should create exceptions, not based on employee headcount or revenue, but on [the substance of] AI initiatives. There are many tech products and services we’d like to develop inside huge corporations – they all want to create something new. It is important for them to be able to compete with American companies and to preserve development velocity.
And, if AI startups aren’t impacting a lot of people, we should leave them alone. Let them grow. When a startup is impactful enough that it might affect society, then let’s start thinking about regulation.
The ethical view
The EU AI Act is a step in the right direction. Every new technology must be accompanied by rules governing its use. The AI act is the best effort we have in the world thus far.
I worry that a risk management approach is at odds with one that puts fundamental rights at the centre of liberal democracies. But the devil is in the details and it will partly depend on how the law gets interpreted, implemented and enforced.
The challenge is to stand our ground as Europeans in the face of a complex geopolitical context, while also looking for international allies in other liberal democracies.
Everyone has a part to play in building the kind of society we want to live in. I want to live in a society that respects fundamental rights and freedoms. And, among many other roles, we need innovators who are creative enough to build competitive technology that respects rights. Better tech is possible and we deserve tools that are more respectful of our autonomy.
The cybersecurity view
One of the most significant anticipated successes of the act is that it creates a harmonised, EU-wide security baseline for AI. A key strength of the regulation is its emphasis on security by design, mandating a lifecycle approach that integrates security considerations from the outset and throughout an AI system’s operational life.
Despite the promising aspects, several limitations and caveats could hinder the effectiveness of this and other AI security regulations. A primary concern is the rapid evolution of threats in the AI landscape. New attack vectors may emerge faster than static rules can be updated, meaning regular revisions will be required.
The global nature of AI supply chains and cloud deployments presents jurisdictional challenges, complicating enforcement efforts. There’s also the risk of ‘compliance theatre’, where firms prioritise checkbox compliance over genuine and meaningful security enhancements. Resource and expertise gaps could also pose a challenge.
But the act is the first major law to call out protections against data poisoning, model poisoning, adversarial examples, confidentiality attacks and model flaws. The EU AI Act introduces technical protections, called ‘targeted resilience’, against AI-specific attack vectors.
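To illustrate one of those attack vectors, here is a toy sketch of an adversarial example against a simple linear classifier – the model, weights and input below are all invented for demonstration, and real attacks target far larger models:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # weights of a toy linear classifier (invented)
b = 0.1                  # bias term
x = rng.normal(size=8)   # a benign input the model scores

def predict(v):
    """Sigmoid score in (0, 1); above 0.5 counts as the positive class."""
    return 1 / (1 + np.exp(-(w @ v + b)))

# Fast-gradient-sign-style perturbation: nudge every feature a bounded
# amount in the direction that most increases the model's loss.
y_true = 1.0
grad = (predict(x) - y_true) * w   # gradient of log-loss w.r.t. the input
x_adv = x + 0.5 * np.sign(grad)    # small, structured change to each feature

print(f"score before: {predict(x):.3f}, after: {predict(x_adv):.3f}")
```

The point is that a small, structured change to an input can swing a model’s output, which is the class of attack the ‘targeted resilience’ protections are aimed at.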
The real compliance burden will be determined by technical specifications that don’t yet exist. These will define the practical meaning of ‘appropriate level of cybersecurity’ and may evolve rapidly as AI threats mature.
The act’s continuous-monitoring requirements represent a fundamental shift from traditional compliance models. Organisations will need dedicated AI security teams and automated monitoring infrastructure, creating significant ongoing operational costs – and opening up a whole new range of services from managed service providers to help SMEs.
The policy view
The EU AI Act is the first attempt to regulate AI comprehensively, and that should be recognised as a major achievement for the EU.
It sets the tone for global AI governance in the same way that GDPR set the tone for data protection. GDPR is by no means perfect, but many countries still follow it. Because the EU is leading the way, there’s potential that other countries will follow the AI act, too – it could become the blueprint.
GDPR remains the go-to reference for data protection – whether you agree or disagree, it is the starting point. Similarly, the European Commission would like the AI act to be the starting point for AI regulation.
No other government will try anything so ambitious, so the opportunity – or the risk, depending on your perspective – is that the EU might achieve its goal of making the AI act the baseline. The US does not want anything so comprehensive; that’s not its approach to regulation. But many governments are nervous about AI. They want to understand how to regulate it, and the AI act is the model they can turn to.
But, as with GDPR, the EU legislation is both a blueprint and a warning sign. Being the first act of its kind, it perhaps gets some of the practicalities wrong, particularly for businesses that work globally, for AI ecosystems that we haven’t considered yet and for SMEs.
What does it get right? The act is very clear on high-risk categories. Whether you agree or disagree with the list, its terms are easy to comprehend. It specifies employment algorithms, credit scoring and medical diagnostics, for instance, as high-risk categories. That makes things more predictable for investors, businesses and policymakers.
It also gets its risk-based approach right for AI models, specifying unacceptable risk, high risk, limited risk and minimal risk. This enables proportional obligations, based on real-world risk and complexities, to be placed on companies using and developing AI models.
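As a schematic sketch of how those tiers map to obligations – with the duties paraphrased for illustration rather than quoted from the act, and the high-risk examples drawn from this article:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "strict duties: risk assessment, documentation, human oversight"
    LIMITED = "transparency duties (e.g. disclosing AI-generated content)"
    MINIMAL = "no additional obligations"

# Example mappings; the high-risk entries echo the categories named above.
EXAMPLES = {
    "employment algorithms": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "medical diagnostics": RiskTier.HIGH,
    "spam filtering": RiskTier.MINIMAL,  # a commonly cited minimal-risk case
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} – {tier.value}")
```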
From a policy-making perspective, it was a collaborative process, particularly the code of practice, which is a follow-on from the law itself. More than 1,000 stakeholders provided input on the code over a period of months. The result isn’t perfect, but the drafters have shown a willingness to listen to a broad range of stakeholders.
The compliance obligations are quite heavy, however, particularly for startups and SMEs that are building foundation models or sector-specific AI solutions. The reporting requirements are complex and, at times, ambiguous, which could clash with the European ambition to create global AI champions. They could also overlap with, or duplicate, the reporting obligations in other pieces of regulation, increasing costs for businesses.
The legal view
The EU AI Act gets one crucial thing right: it starts to provide clarity in a space where legal obligations have often been ambiguous. By establishing risk-based tiers, particularly for high-risk systems, it provides developers with a framework in which to build their systems.
Additionally, while the law technically gives open-source AI a pass, the most widely used systems will still fall under the regulation. This could help to boost public trust, since popular open tools will still need to meet safety standards – but it also creates additional costs for developers who now need to make sure their tools comply with the regulation.
Ideally, we’d see regulation that’s both clear and agile – rules that protect users and build trust, without creating a compliance burden so heavy it stifles innovation. The real test will be whether the EU can strike that balance and whether others, including the UK, choose to align or go their own way. Many key provisions won’t take effect until 2026 and so we’ll continue to watch this space.
