
The fear of being replaced by AI is weighing heavily on employees’ minds. Research by KPMG shows that 52% of workers believe AI will harm their job security, fuelling anxiety. This narrative is being further pushed by senior executives: Anthropic’s CEO, Dario Amodei, warned that AI could wipe out half of all entry-level white-collar jobs, and Microsoft’s AI CEO Mustafa Suleyman said that AI could automate “most, if not all” white-collar tasks within 18 months.
Experts are now highlighting the potentially severe mental health impact posed by the growing threat of AI replacing jobs. In a recent academic paper, two researchers argue the phenomenon is significant enough to merit its own clinical label: AI replacement dysfunction, or AIRD. According to the authors, the constant fear of job loss could be driving symptoms ranging from anxiety and insomnia to paranoia and loss of identity, even among otherwise healthy individuals.
So far, much of the debate around AI and mental health has focused on the personal risks of using the technology itself, including reports of chatbots fuelling delusions or encouraging harmful behaviour. But the broader emotional impact of simply living and working under the shadow of AI has been largely overlooked. As headlines about AI-driven layoffs mount, AIRD may demand far greater attention from employers.
AI can erode work identity
Even without layoffs, rapid technological change can spark anxiety among staff. Dr Brittany Straton, senior lecturer in cyberpsychology at Arden University, calls it a “chronic stressor” that erodes wellbeing, psychological safety, and motivation. “Our jobs give many of us a sense of purpose and identity,” she explains. “When tasks are automated, workers can experience a loss of professional identity or purpose.”
The irony is that AI doesn’t need to replace people to cause strain. The mere perception of risk can influence behaviour, Straton explains. Workers may withdraw, resist new technology, or even hide knowledge to protect their perceived value. “These are not signs of unwillingness to modernise – they’re stress responses rooted in identity protection,” she says.
Some workers feel they must compete with machine efficiency or constantly upskill to survive
Performance pressures compound the strain, according to Tracey Paxton, clinical director at Perkbox, an employee benefits group, and chair of The Royal College of Psychiatrists APPTS Advisory Board. “Some workers feel they must compete with machine efficiency or constantly upskill to survive, creating a sense of never being secure or good enough. This is a known driver of stress and presenteeism,” she says.
The challenge for employers is that AI can empower some employees while provoking anxiety in others. Indeed, some studies suggest that reducing low-control, high-repetition tasks can improve both wellbeing and engagement. “While some employees feel genuine relief, others experience a subtler, corrosive stress,” Paxton explains. “Uncertainty is one of the strongest drivers of workplace stress, and AI can intensify it, raising questions about role stability, skills relevance and long-term employability.”
In Paxton’s view, when changes are introduced quickly or with limited staff involvement, trust in leadership can decline, with employees feeling replaceable or overly monitored. “This combination – reduced control, weakened role identity and lower trust – is exactly what organisational change research predicts when technological transformation is experienced as imposed rather than collaborative.”
How employers can build resilience
The good news is employers are increasingly aware of these psychological effects. “I’m seeing more transparent communication about how AI will be used,” Paxton says. “Where organisations involve staff early, evaluate progress and emphasise augmentation rather than replacement, employee fear reduces significantly.”
Dr Aaron Taylor, head of human resource management at Arden University, is seeing more emphasis on training and reskilling as core elements of AI strategy. “Rather than assuming employees will adapt, employers are investing in structured upskilling programmes, coaching and digital literacy support,” he says. “These initiatives build capability while reinforcing a sense of future employability, which is vital for morale. When workers can see a pathway through the change, their confidence and engagement increase.”
Other firms are pairing technological changes with wellbeing checks and manager training on how to talk about AI. “Ultimately, the future of work will depend on employers embedding AI in ways that uphold fairness, transparency, and human value,” Taylor says. “Workers are looking for the truth about what AI will change, reassurance about where humans remain essential, and support to grow into new forms of work.”
Future AI lawsuits
By failing to address the psychological impact of AI on their workforce, business leaders may be exposing themselves to legal risk. “Workplace AI anxiety can become a liability once an employer is, or reasonably should be, aware of a genuine risk of harm and fails to take steps to mitigate it,” says Hannah Mahon, a partner in Eversheds Sutherland’s Employment, Labor and Pensions group.
While litigation directly linked to AI adoption is still emerging, existing legal duties around psychiatric harm, workplace stress and job security already apply in many cases, Mahon explains.
When workers can see a pathway through the change, their confidence and engagement increase
HR and legal teams therefore play a critical role. Documenting employee concerns, responding promptly, and maintaining open communication channels not only builds trust but also demonstrates that employers have taken reasonable steps to safeguard staff.
“Sharing information early, explaining AI objectives in clear, accessible language, and involving employees in decision-making helps people feel included rather than sidelined – reducing stress and the likelihood of grievances,” Mahon says.
Although UK law does not yet explicitly address AI-related psychological harm, employers remain bound by established duty-of-care obligations to manage foreseeable risks arising from workplace technologies. According to Mahon, existing frameworks covering fairness, discrimination and employee wellbeing are broad enough to encompass AI-related challenges, though more targeted guidance may emerge as the technology evolves.
In practice, minimising both harm and liability comes down to proactive engagement, transparent communication and positioning AI as a tool for augmentation rather than replacement – approaches that protect employees while helping organisations navigate the transition responsibly.