
Governments everywhere are debating how best to regulate generative AI – or, indeed, whether it should be heavily regulated at all. But, in some jurisdictions, AI systems are not merely the targets of policies and regulations; they are the drafters and interpreters of laws.
Such developments raise questions not only about the automation of jobs in the fields of law and government but also about the allocation of power and legal decision-making between humans and machines.
The first AI laws
In October 2023, Porto Alegre, the capital of Brazil’s southernmost state, introduced a bill exempting citizens from paying to replace stolen water meters. It passed the city council unanimously. Six days later, its chief sponsor, Ramiro Rosário, revealed that the bill had been drafted entirely by ChatGPT, sparking a heated national debate about AI’s role in society.
Is technology democratising justice or exacerbating inequality? There’s never one answer for that
Speaking to the Washington Post, Rosário explained that the bill was created in a matter of seconds rather than days, as is standard. It was partly intended to make a point – that citizens must ready themselves for an AI-powered future. Other legislators have taken similarly symbolic steps. For instance, Costa Rica’s congress used ChatGPT to draft a law on regulating ChatGPT, and Ted Lieu, a US congressman, introduced a congressional resolution drafted by AI to highlight his concerns about the tech.
But some countries are more ambitious about the use of AI in legal processes. In April this year, the United Arab Emirates unveiled plans to implement an AI-driven legislative system, overseen by a ‘regulatory intelligence office’. The system will use AI tooling to draft new laws and to review and amend existing ones. It will also analyse court rulings and the application of laws, and even propose amendments in real time.
AI lawmaking: perks and pitfalls
AI has been causing a stir in the world of law for some time, says Charlie Bromley-Griffiths, senior legal counsel for Conga, a compliance automation company. In 2023, two AIs successfully negotiated a legally binding non-disclosure agreement without any human involvement. “It was completed in a matter of minutes and the only human requirement was a signature at the end,” Bromley-Griffiths says. And interest in the tech among businesses and governments has increased markedly since then.
There are “many, many, many benefits” to an approach such as the UAE’s, adds Greg Francis, the CEO of Access, a global tech-policy consultancy. “There would certainly be lower legal fees, you could get legislation done much faster and, as the UAE intends to do, you could use AI to update laws much more quickly.”
Legal systems reflect cultural values, political structures and risk tolerance. Harmonising that isn’t easy
He adds that AI could also be used to grab more granular data from law enforcement. For instance, an AI system could spot whether the wrong people were being picked up in wide-reaching law enforcement dragnets. Then, the law could be tweaked “just enough” to “make sure it was applied to real offenders versus those that transgressed unknowingly and were punished unnecessarily”.
When Francis first settled in the UK, Gordon Brown’s government had recently sought to extend the detention of terror suspects without charge to 42 days. A machine-based system, he says, might have helped to prevent such illiberal legislation.
In the fog of war or a state of panic, legislators can easily overreach, Francis notes. Even if it is not used to draft laws directly, AI may be able to steady the “direction of travel for laws, regulation and policies”. AI systems could help governments to “course-correct”, via parliament or an executive body, when ordinary legislative processes fail or become vulnerable in the face of extraordinary circumstances.
Fine-tuning human laws
Like legal systems, AI systems are imperfect. Some critics argue that because they are susceptible to errors, hallucinations and biases, AI systems should not be trusted to draft laws or regulations. Bruce Schneier is a public-interest technologist and the author of an upcoming book, Rewiring Democracy, which investigates how technology and civics interact. He argues that AI could help policymakers, many of whom are chronically under-resourced, to create smarter and more effective laws. Crucially, it could help to lessen the influence of lobbyists in the legislative process.
Laws are typically written by a paid intern or based on model legislation provided by lobbyists aiming to influence policies in their favour, Schneier explains. Both of these are flawed processes. “This is just another flawed process,” he says. “It has pluses and minuses. But the lobbyist feedback loop is so disgusting, it’s hard to do worse.”
He adds: “The downside of [using AI to draft policy] is that it could make a mistake that no one notices – but that’s the downside of the fully paid intern as well.”
Different applications for AI: common law and civil law
Francis says that AI can be used in adjudication, as well as legislation, but applications might differ depending on the legal system in use. Judges in common law jurisdictions, such as the UK and the US, rely on previous court decisions and legal precedents to interpret or evolve the law. In civil law jurisdictions, such as the UAE, judges systematically apply written statutes to the facts of a case, with little emphasis placed on legal precedent.
AI systems might be most effective in civil law jurisdictions, says Francis. Algorithms can easily be trained to apply rigid, codified rules to a legal case. But adjudication in a common law system requires nuanced legal interpretation that balances statutes with practice and precedent. Here, AI systems might struggle.
Yet it is possible, and, in Francis’s view, desirable, for AI to be used in both types of legal systems. In common law countries, especially those with “sclerotic or backed-up caseloads”, AI could speed up judgments. A machine-based approach could sharpen needlessly long and process-heavy adjudication, where the potential biases of judges might obstruct the fair application of law. “There would still be biases with AI,” he notes. “But you could eliminate bribing, favours, electioneering and not-so-subtle personal biases.”
AI and lawmaking challenges
Despite the techno-positive sentiment, the use of AI in legal processes could create significant challenges in the complex, global commercial landscape. Inconsistent approaches across countries could cause headaches for lawmakers and lawyers, for instance.
The lobbyist feedback loop is so disgusting, it’s hard to do worse
“Imagine an AI used for loan approvals,” says Bromley-Griffiths. Thanks to biases, an AI system might reject applications from certain demographics. And while some countries might have regulations to prevent such discrimination, others will not.
Common legal standards are needed to ensure fairness and ethical treatment across borders, he adds. But creating such standards would be complicated. Consistency around the world would be desirable, but law is inherently local. What one country views as fairness, another might see as overreach, he explains.
“Legal systems reflect cultural values, political structures and risk tolerance – and harmonising all of that isn’t easy,” Bromley-Griffiths says. “But we can and should strive for the baseline principles of transparency, accountability and non-discrimination. We won’t get perfect uniformity, but we can build a shared foundation that respects local nuance while protecting human dignity everywhere AI is used.”
The future of legal tech
Many legal startups are already using AI tools to write briefs, research jurors, test court arguments, conduct mock trials and provide feedback on delivery, says Schneier. When the technology is used well, it can be hugely beneficial. But, if it is applied badly, then “the tech becomes the one in charge”.
“Will AI make the average attorney better? Does it increase access to justice? If so, that’s fantastic,” Schneier says. “Or, does it make the best attorneys better, which further divides rich and poor? It comes back to power – is technology democratising justice or exacerbating inequality? There’s never one answer for that.”
As Rosário’s Porto Alegre experiment suggests, AI might already be used in lawmaking more than anyone realises. What matters now are the guardrails erected around it to ensure that AI is increasing justice, rather than eroding it.
