
Filing workplace grievances was once a tricky task. But employees are increasingly turning to generative AI tools such as ChatGPT to send lengthy grievances to their managers – without giving much thought to the consequences.
Although AI-assisted complaints appear polished and professional, they often contain inconsistencies, irrelevant arguments and, at worst, fundamental inaccuracies. Yet employers must not prejudge any of the points raised: these claims must be reviewed carefully and responded to, whether or not they are drafted with AI. And this is creating a huge time and cost burden for HR teams and their lawyers.
The problem is that many individuals rely on these tools without verifying the accuracy or relevance of the output. They often treat AI-generated content as authoritative without applying even their own judgment, let alone seeking legal advice.
Some employees are even pursuing litigation with the help of AI tools. As a result, they may submit obscure claims and arguments that have almost no chance of succeeding. In one recent employment tribunal case, a claimant used GenAI to write a letter alleging corporate manslaughter by their employer. The catch? No one had died.
Nonetheless, employers must respond to such arguments to defend themselves properly. This is driving up legal costs for both parties and adding complexity to cases that would have been straightforward had AI not been involved.
GenAI claims are creating untenable pressure
Employment tribunals are already under extraordinary pressure, with some cases taking more than a year to reach a final hearing. But AI-generated grievances do much more than delay hearings. They also increase the volume, pace and persistence of correspondence from claimants.
In some instances, claimants emboldened by automated drafting will bombard representatives and the tribunal with multiple emails a day and even attempt to communicate directly with judges. These messages often raise new issues, challenges and complaints.
This overwhelming correspondence risks clogging tribunals even further, making it harder for judges and lawyers alike to focus on the substantive issues in a case. At times, the only option is to stop responding to avoid further jamming the tribunal’s workflow.
Chatbots are creating biased claimant echo chambers
Consumer AI apps are designed to validate the user. So when claimants feed biased cues into AI, telling only their side of the story, the application is unlikely to challenge their account. Instead, these tools often convince the claimant of the unassailable strength of their case, regardless of its factual or legal merit.
This undermines the efficiency of the system, as well as the interests of claimants, who can become anchored to unrealistic positions, making dispute resolution more challenging. Sometimes, with no one reining them in, claimants send aggressive communications, make unfounded accusations or threaten to report their employer to regulatory bodies.
Using AI in this way also threatens privacy, as popular consumer AI platforms can easily leak sensitive information about employers, business processes or third parties.
How should HR and legal teams respond?
There is no perfect solution to these challenges. It is neither realistic nor fair to bar employees from using AI. Many employers, HR teams, and legal professionals, as well as employees themselves, use these tools in legitimate ways.
Instead, companies must focus on practical guardrails. For example, HR policies and procedures should be updated to clarify responsible AI use and, crucially, to require employees to avoid inputting confidential or commercially sensitive information into public AI platforms.
People teams must also prepare for the increased volume and complexity of grievances and train staff to recognise and handle AI-generated complaints. Proactive education and clear communication can go some way to mitigating the impact of the new challenges businesses face.
While AI may be reshaping the face of employment disputes, this is perhaps just another area where businesses, courts and tribunals must keep pace with change by adapting their approach.
Ailie Murray is a partner in Travers Smith’s employment team.
