How businesses can stay on the right side of AI law

Although advances in artificial intelligence offer firms ever more profit potential, they must also prepare for the day of regulatory reckoning. How can business leaders get the balance right?


While the rapid development of artificial intelligence offers considerable value for businesses in a range of industries, it also presents a key concern for their tech chiefs. Should they seize every new opportunity offered by AI and risk falling foul of any future regulation, or should they wait for a clear compliance framework to be constructed and so risk falling behind their early-adopting rivals? 

Not for the first time, legislators have fallen behind the technological curve. Firms seeking to implement AI in a measured, responsible way have no way of knowing for certain whether they will comply with the rules that are eventually enacted. The sheer speed at which new applications for this powerful technology are emerging makes it impossible to anticipate, let alone mitigate, all the associated risks.

Three considerations for AI compliance

For businesses to extract the most value from AI now while staying mindful of the future compliance risks, their IT chiefs must focus on three qualities: flexibility, scalability and effective governance. So says Caroline Carruthers, co-founder and CEO of Carruthers and Jackson, a consultancy specialising in data management. 

The flexibility element, she explains, means being able both to “take advantage of new opportunities afforded by advances in AI” and to adjust quickly to any new regulatory requirement. 

It’s crucial to train technology teams not only in implementing AI but also in maintaining ethical and secure systems

When it comes to scalability, Carruthers notes that “some tech teams can build fantastic new tools but aren’t able to scale these up, meaning that only a small part of the business benefits from them. AI innovation will be no different. It’s important for any AI-based transformation to strike a balance between doing something fast and ensuring that it is scalable.” 

Underpinning both of these elements should be good governance, she stresses. “We don’t know what AI regulation will look like yet, so understanding the risks of this technology and building a framework that recognises them is critical to getting ahead of potential new laws.”

None of this is to downplay the risk of non-compliance with the legislation that’s already in place. Indeed, data protection authorities in the EU have been investigating several complaints about OpenAI. The creator of ChatGPT has been accused of numerous breaches of the General Data Protection Regulation (GDPR) since launching its popular chatbot in Europe late last year. 

Given such concerns, leaders should prioritise minimising liability, argues Stephen Lester, CTO at business services provider Paragon. 

“In a complex and changing regulatory landscape, the key thing to remember is also the simplest: don’t tell AI anything you wouldn’t talk about openly,” he stresses.

Phased implementation can help mitigate risk

For technology chiefs, a phased approach to implementing new AI applications may offer the appropriate balance between obtaining value from them and mitigating the compliance risks. That’s the view of Bob Strudwick, CTO at Sonovate, a fintech specialising in invoice financing.

“Starting with small pilot projects allows for both risk assessment and fine-tuning without committing extensive resources,” he says. “To achieve agile innovation combined with thorough due diligence, it’s crucial to train technology teams not only in implementing AI but also in maintaining ethical and secure systems.” 

Training covering key topics such as “data protection, information security and algorithmic bias can equip teams with the skills they require to handle the challenges posed by AI”, Strudwick suggests.

How to ensure lasting compliance

While the GDPR contains robust provisions on data privacy and automated decision-making that apply to AI, there isn’t yet a regulatory framework for making an AI application ‘forget’ information it has learnt. Arguably the most significant forthcoming legislation for firms trading internationally will be the EU’s Artificial Intelligence Act, which is due to take effect in 2025. This legislation will place particular emphasis on the transparent use of AI and the protection of private data.

Bernd Greifeneder is the founder and CTO of Dynatrace, a software firm specialising in application performance optimisation. He believes that one of the biggest non-compliance risks is that large language models (LLMs) – the AI systems, trained on vast volumes of text, that underpin tools such as ChatGPT – could learn from what people have pasted into their prompts and accidentally expose intellectual property. DevOps teams could also inadvertently violate privacy laws such as the GDPR. 

The key thing to remember is also the simplest: don’t tell AI anything you wouldn’t talk about openly

With regulation set to evolve and proliferate, users could be exposing themselves to serious legal and reputational problems in this area. The only way to manage this risk effectively, Greifeneder argues, is to use LLMs that have been purpose-built to comply with data security and privacy standards, and to recognise that the stakes go beyond mere regulatory compliance. This is a matter of maintaining the trust of users and customers.

“At Dynatrace, we use domain-specific methods that let us filter and mask any sensitive data as it’s ingested into our platform, while retaining the context that our AI uses to give our customers the insights they need,” he explains. “This minimises risk while maximising the value we create.”
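
The principle Greifeneder describes, scrubbing identifiable details on ingestion while keeping enough context for analysis, can be sketched in outline. The snippet below is a hypothetical illustration rather than Dynatrace’s method: it masks obvious identifiers such as email addresses and card numbers with placeholders before any text reaches an LLM, keeping a mapping so the original values never leave the business’s own systems.

import re

# Hypothetical sketch: mask obvious identifiers before text is sent to an LLM.
# The placeholders preserve context for analysis; the mapping stays in-house.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d -]{8,}\d\b"),
}

def mask_sensitive(text):
    """Replace matches with numbered placeholders and return the mapping."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        def substitute(match, label=label):
            placeholder = "<%s_%d>" % (label, len(mapping) + 1)
            mapping[placeholder] = match.group(0)
            return placeholder
        text = pattern.sub(substitute, text)
    return text, mapping

masked, mapping = mask_sensitive(
    "Customer jane.doe@example.com reported a failed payment on card 4111 1111 1111 1111."
)
print(masked)  # identifiers are replaced with placeholders before any LLM call

In practice, firms might rely on dedicated PII-detection tooling or their AI vendor’s own redaction features, but the underlying pattern, mask first and prompt second, is the same one Greifeneder outlines.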

It’s also important for firms to use the AI solutions best suited to their needs. Thinking carefully about which technologies to adopt can help them to meet basic standards of explainability for any insights they may gain from their chosen tools.

“This is a key challenge, given the recent advances we’ve seen with technologies such as ChatGPT. It hasn’t been designed to distinguish fact from fiction,” Greifeneder observes. “We’ve integrated a purpose-specific type of AI into our platform, building it around a hyper-modal framework that encompasses predictive, causal and generative AI methods, each one serving a distinct purpose and ensuring the reliability of data.”

CTOs should audit their AI usage periodically as a matter of good practice and compliance hygiene. And, whenever new tools are introduced, existing tech is modified or the use of AI becomes a high-risk activity, they would be wise to repeat those audits and the accompanying risk assessments, advises Adam Penman, an associate at global law firm McGuireWoods.

“Demonstrating and documenting prudent steps to mitigate risk and harm will shield a business, at least in part, from punitive regulation,” he says.

For now, at least, that’s the best that businesses can do to manage their potential liabilities in this highly complex area.