Brussels answers the call for more coherent AI regulation

The EU is set to become the first major market to create a legislative framework specifically covering AI tech. There are hopes that others will follow its lead
A view of an empty EU parliament chamber

When Microsoft unleashed Tay, its AI-powered chatbot, on Twitter on 23 March 2016, the software giant’s hope was that it would “engage and entertain people… through casual and playful conversation”. 

An acronym for ‘thinking about you’, Tay was designed to mimic the language patterns of a 19-year-old American girl and learn by interacting with human users on the social network. 

Within hours, things had gone badly wrong. Trolls tweeted politically incorrect phrases at the bot in a bid to manipulate its behaviour. Sure enough, Tay started spewing out racist, sexist and other inflammatory messages to its following of more than 100,000 users. Microsoft was forced to lock the @TayandYou account indefinitely less than a day later, but not before its creation had tweeted more than 96,000 times.

AI systems are, of course, adaptive, learning from cues in their environment and changing their behaviour autonomously. This means that, once they are deployed, unforeseen ramifications may ensue – and humans can all too easily disclaim responsibility for them. Such systems can also act as an invisible hand, discreetly influencing our choices to an extent that exceeds our understanding or our wishes.

As these risks proliferate, legislators have stressed the need for clearer and more consistent regulatory frameworks to deal with them. The world’s major economies have yet to establish such measures, relying instead on a suboptimal patchwork of old laws and standards to police the industry.

But that will change when the European Commission’s proposed Artificial Intelligence Act becomes law. Heralded as the world’s first legal framework designed specifically to cover AI, this will seek to identify and regulate higher-risk forms of the technology – biometric identification, for instance. It will impose far-reaching obligations on developers, covering standards of governance, design, transparency and data security. Those that fail to comply may be subject to hefty fines. 

The act still needs to work its way through the European Parliament for adoption – and, once passed, it will be at least a couple more years before it becomes enforceable.

Other key markets have yet to design an AI-specific regulatory regime, although China issued guiding principles for AI regulation in 2017 and, this March, brought in the Internet Information Service Algorithm Recommendation Management Provisions, a set of rules governing recommendation algorithms.

The UK is awaiting the publication of a government white paper on AI regulation, while the White House started preliminary discussions on the need for what it called “a bill of rights for an AI-powered world” in November 2021. 

Dr Mariarosaria Taddeo is an associate professor at the Oxford Internet Institute and a faculty fellow at the Alan Turing Institute. She believes that a clear “Brussels effect” is setting the pace for other administrations to follow. 

“Starting with the General Data Protection Regulation and continuing with the Digital Services Act, the Digital Markets Act and now the AI Act, the EU has created a framework for the coherent regulation of digital technologies. Any tech provider seeking access to the single market will have to abide by this,” she says. “I suspect that we’ll be moving on to a ‘transatlantic effect’, because the EU and the US have strengthened ties over the past year and sought shared points to align their regulations. Other markets are likely to follow.”

Part of the problem with the current mishmash of regulations is the inconsistency it creates. AI-based businesses would be much better served by a set of shared international standards if they are to build a thriving global industry.

Dr Cosmina Dorobantu, co-director of the Alan Turing Institute’s public policy research programme, notes that AI is, in practical terms, a “general-purpose technology” that touches all sectors. This means that some issues require a common approach, yet they will often be policed by separate regulators, each of which may handle them differently.

“We know that bias is an issue that crops up in numerous machine learning systems,” Dorobantu says. “We will see it in credit scoring algorithms used by mortgage lenders, facial recognition technologies used by police forces and automated triage systems used by hospitals, to name but a few examples.” 
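
By way of illustration, one simple check a developer or auditor might run on a credit-scoring model is a demographic parity test, comparing approval rates across groups. The sketch below is purely hypothetical: the group labels, decisions and four-fifths threshold are invented for the example, not drawn from any real lender’s system.

    # Illustrative demographic parity check for a credit-scoring model.
    # Group labels, decisions and the 80% threshold are invented for
    # this example; they do not come from any real system.
    from collections import defaultdict

    def approval_rates(decisions):
        # decisions: iterable of (group, approved) pairs
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            approved[group] += int(ok)
        return {g: approved[g] / totals[g] for g in totals}

    decisions = [("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = approval_rates(decisions)

    # "Four-fifths rule" heuristic: flag the model if the worst-treated
    # group's approval rate is below 80% of the best-treated group's.
    ratio = min(rates.values()) / max(rates.values())
    print(rates, "disparate impact ratio:", round(ratio, 2), "flag:", ratio < 0.8)

Real-world auditing is far harder – proxies for protected attributes and competing definitions of fairness complicate the picture – but even a crude ratio like this illustrates the kind of measurement a common regulatory standard could mandate.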

In the UK, these applications fall within the remit of different watchdogs, which increases the risk of regulatory arbitrage, whereby users seek the path of least resistance, she warns. Also, important problems could fall through the cracks if one regulator were to assume – wrongly – that another authority is handling them.

The EU’s AI Act is not without its weaknesses. The European Commission’s consultation on the draft legislation attracted more than 300 feedback submissions – a far stronger response than its other tech bills have generated. Criticisms have focused on the act’s overly broad definition of AI and on which uses of the technology it classes as high-risk and therefore subject to more stringent controls. Human rights campaigners, meanwhile, are concerned that the act is not strict enough in controlling the use of AI in law-enforcement applications such as predicting criminal behaviour and conducting mass surveillance using facial-recognition systems.

As the global AI market develops, we may see flashpoints when different jurisdictions’ approaches to regulation clash, Taddeo predicts. The EU’s approach to digital governance has historically been grounded in values such as human dignity, while in the US the focus has been much more on preserving freedoms – of speech and of markets. But both uphold democratic values and basic human rights, even if they may interpret these differently. China’s attitude to such matters will be harder to reconcile. The EU has already vowed to ban AI systems such as Beijing’s so-called social credit system, which enables organisations and individuals to be tracked and evaluated for trustworthiness.

Despite their differing approaches, democratic countries are likely to pursue regulatory alignment wherever possible as they seek to unlock the opportunities created by AI, according to Taddeo.

Even if they can’t achieve full alignment, she says, some measure of convergence will at least “create a playground that allows different actors to collaborate”.