
Generative AI is fundamentally changing nearly every job role, or is expected to do so in the next few years. But one task in particular may be especially vulnerable to disruption by AI: coding.
Unsurprisingly, software engineers and developers are wary of their employers’ efforts to adopt AI solutions. Right now, only low-level, simple coding tasks are at risk of automation. But as the technology improves, GenAI could force even adept coders into redundancy. Reports of layoffs at AI leaders such as Microsoft do not help to allay these anxieties.
“Employee resistance and fears about new AI tools are not much different to the fears that have accompanied any business change or new technology,” says Jaco Vermeulen, CTO at BML, a consultancy.
“However, AI suffers from the extensive PR that highlights replacing people. Media headlines of companies using AI to make dramatic workforce layoffs are not helpful. It is not surprising then that employees may be resistant and fearful of adopting AI tools,” he explains.
But given the efficiency gains that can be achieved with the technology, business leaders, keen to fine-tune their firms’ operations and broaden their profit margins, will no doubt plough ahead with their adoption plans.
So how should tech leaders respond when developers, out of scepticism and fear, resist the deployment of these shiny new tools? Linh Lam, CIO at Jamf, a mobile-device management platform, was recently forced to consider this question.
Employee resistance to AI
When Jamf began adopting GenAI tools, engineers in Lam’s team responded with apprehension and discomfort over AI making recommendations or changes to their code. Rather than feeling empowered by AI, some believed that the technology merely disrupted their workflows. Lam had to find a way to bring them on board.
Unlike some businesses that went all-in on AI straight away, Jamf decided early on that it wouldn’t recklessly implement AI solutions. So Lam, with the support of the company’s leadership, established an internal AI council two years ago to oversee any deployments of the tech, measure progress towards the goals of its implementation and assess the efficacy of the tooling.
If you’re not talking about AI, then you might as well go home
“At any get-together that I have with other CIOs or CISOs or CTOs, if you’re not talking about AI, then you might as well go home,” says Lam. However, many of the companies that adopted the tech without proper planning or safeguards are now struggling to get any value out of their deployments, she says.
Jamf’s AI council established norms for internal cross-functional governance to ensure legal, privacy and security standards were upheld and that any deployment was effective and focused. But even with this level of preparation, some staff, including Jamf’s engineers, were unconvinced of AI’s benefits.
These are “really highly technical, extremely smart people”, says Lam. “But we found as soon as we gave them AI capabilities in the development lifecycle, there was scepticism, I would say resistance to it.”
Ironically, the team was building AI into their products, but felt that using AI to build those products would encroach on their autonomy. They immediately questioned what the new tooling would mean for their day-to-day job activities, not to mention for the inviolability of projects they were so proud of.
To soothe their fears, Lam increased communication and training on AI, so staff could better understand how the technology can augment their work. She set out to clarify how AI use could change operations and job functions with the aim of increasing throughput, not reducing headcount. “We’re not trying to take the existing 20 people on our team down to 10,” she says. “We’re trying instead to increase our delivery by 30% or 40% with the same team.”
Change management crucial for effective AI implementations
As organisations begin adopting AI systems, the departments using those tools should be encouraged to define their own success metrics, measure their progress and hold themselves accountable for any outcomes. Such encouragement can help ensure that AI deployments deliver tangible value and also give teams ownership of AI projects, thus reducing their scepticism of AI initiatives.
“Just because you build and release it, doesn’t mean people will adopt it,” says Lam. Educating and supporting users is therefore key to successful AI adoption; that means considering any process changes, communicating the benefits of using the technology and offering staff ongoing help.
Put simply, businesses need to plan better when it comes to AI
Backing from leadership is crucial, Vermeulen says. Without communication, training or other support, employees may fear that, by engaging with GenAI platforms, they are effectively training machines to do their jobs, thereby precipitating their own redundancy.
Vermeulen says that, just as with any other process change or tech deployment, organisations should establish, communicate and strictly adhere to principles that determine how operating models will be affected and that encourage organisation-wide collaboration on the initiatives.
He agrees that poor implementations can make colleagues feel that the changes are done to them, rather than with them. “Employees’ jobs get new dimensions, additional tasks and expectations of output that aren’t based on objective operational assessments,” he says. “This results in resistance due to errors or potential operational disruption, even increasing workload.”
How to manage staff resistance
According to Vermeulen, several common pitfalls may frustrate firms’ efforts to implement AI smoothly and create resistance among employees. Above all, businesses must avoid any “haphazard, decontextualised, unstructured” implementations, he says.
“GenAI tools are particularly prone to this approach, where businesses are looking for potential use cases and then forcing AI without fitting it into the wider tech landscape, process or ways of working,” he says. The result can feel “disjointed” from existing operating models, processes, data and systems.
“Put simply,” he says, “businesses need to plan better when it comes to AI”. That means, in part, carefully balancing efficiency and quality, Vermeulen adds. Efficiency at all costs can “sacrifice the quality of customer products and experience and thereby employee satisfaction and perceived worth”.
Responsible businesses, he says, will “reinforce authenticity and unique skills, positioning any AI tool as enabling more capacity, or a better employee experience, in the face of increased workload”. Such an approach can help to safeguard employees’ roles while boosting uptake and maximising the return on investment for AI tools.
As Lam found, careful planning can help prevent employees from feeling alienated as firms adopt AI tools. By guaranteeing job security and communicating the many benefits of the technology, leaders can help employees come to see AI as an enabler – one they might even grow to like.