
If you’re a recruiter, chances are you spend much of your working day reading cover letters and CVs written using AI. This, at least, is the conclusion suggested by recent surveys on the subject, including one published by CharityJobs in February, which revealed that 64% of applicants in the UK charity sector admitted to using AI for some part of the recruitment process in 2025, up from 52% in 2024. That figure is closely matched by a January survey from EdTech company Kahoot!, which found that 65% of UK-based Gen Z workers have used LLMs to write cover letters and other application material.
Such figures are the tip of a very large iceberg, since cybersecurity experts and HR professionals testify to AI use being widespread in the context of job applications. Not only that, but much of this use is now problematic or fraudulent, with some bad actors using AI to provide themselves with real-time interview assistance, create fake identities, or even generate live deepfake streams. These activities have come to the fore most infamously in the context of North Korean hackers finagling jobs with European firms, a problem which resurfaced in the US in March.
While there’s no single tool or measure that can help HR departments unfailingly detect AI usage, there are a variety of steps they can take during the application process to drastically lower the probability of deception by a fraudulent candidate. And when taken together as a package of anti-AI deception measures, they could reduce organisational risk to minimal levels.
The spectrum of AI usage in job applications
The use of AI in job applications is now so frequent that businesses may have a hard time differentiating legitimate from illegitimate usage. “AI use in applications runs on a spectrum, and the boundary between assistance and fraud is genuinely blurry,” says Shahak Shalev, Global Head of Scam and AI Research at Malwarebytes.
Shalev tells Raconteur that, at one end of the spectrum, “AI-generated application spam” has become prevalent: formulaic resumes and cover letters, often sent by automation tools that deliver “hundreds” of applications per day. This alone may be questionable enough for most prospective employers, but recruiters then encounter more dishonest uses, such as real-time interview assistance, in which a candidate surreptitiously runs an LLM that feeds them answers during the interview.
“AI use in applications runs on a spectrum, and the boundary between assistance and fraud is genuinely blurry.”
Shahak Shalev, Global Head of Scam and AI Research at Malwarebytes
At the other end of the spectrum, however, recruiters are witnessing outright fraud, which should concern them less from an employee-quality standpoint and more from a security perspective. “That means synthetic identities with AI-generated headshots, fabricated LinkedIn histories, and live deepfake video in the interview itself,” says Shalev, adding that the toolkit employed by North Korean ‘IT workers’ is now “being used by regular criminal operators” who take advantage of low prices and relative user-friendliness.
According to Bart Lautenbach, the SVP and GM of Talent Solutions at Equifax, problematic and fraudulent uses of AI are “becoming increasingly common” in the job application process. As one example, he cites how over 100 American companies have had issues with North Korean scammers and hackers, and he tells Raconteur that Equifax’s clients are also seeing bad actors use “synthetic identities” to “get through” the application process.
“This problem extends from the initial hiring phase through to the onboarding process, where bad actors are increasingly utilizing AI-enhanced falsified documents to prove their eligibility to work in the U.S.,” he says. “Beyond just securing employment, these synthetic identities could allow them to infiltrate corporate systems for more dangerous purposes, including large-scale data theft and the compromise of sensitive internal information.”
Such risks aren’t hypothetical: according to the U.S. Department of Justice, one North Korean IT worker scheme gained access to data covered by the International Traffic in Arms Regulations (ITAR) at a California-based defence contractor. The restricted technical data was downloaded by an overseas conspirator, while the scheme as a whole generated $5 million in revenue for the Democratic People’s Republic of Korea (DPRK).
Getting worse before it gets better?
While the DPRK is often singled out as the primary source of fraudulent job applications, security firm Pindrop tells Raconteur that some of these operations’ IP addresses also trace back to Russia. Either way, many fake-worker schemes are disarmingly sophisticated, with Pindrop’s Chief People Officer, Christine Kaszubski Aldrich, reporting that the company is “increasingly” encountering candidates who use deepfake technology to manipulate their faces and voices in interviews.
“Oftentimes these candidates appear credible, they have relevant experience and strong LinkedIn profiles, and they speak naturally in conversations,” she says. “But in several interviews, Pindrop’s platform flagged what initially appeared to be real candidates as AI-generated.”
“Oftentimes these candidates appear credible, they have relevant experience and strong LinkedIn profiles, and they speak naturally in conversations”
Christine Kaszubski Aldrich, Chief People Officer, Pindrop
Kaszubski Aldrich adds that such applicants were using synthetic voices layered over a live feed in order to deceive recruiters, while Pindrop has also seen candidates swap places between different stages of the application process. She explains, “This type of proxy interviewing creates a breakdown in identity consistency across the hiring process, making it difficult for teams to verify they are evaluating the same candidate end-to-end.”
Most worryingly of all, Pindrop’s internal recruiting data indicates that 16.8% of job applicants now show signs of digital manipulation and, in some cases, possible fraud. Kaszubski Aldrich therefore predicts that, as AI continues to improve, the hiring process will be exposed to ever greater security and identity risks.
Equifax’s Lautenbach agrees, noting that Gartner has predicted that 25% of job applications will be fake by 2028. He also reveals that a recent Equifax survey found that 71% of HR professionals have encountered misleading or false candidate information, a figure that underlines just how widespread AI-generated applications, and in effect AI-generated candidates, are becoming.
What organisations can do to filter out fake applicants
Not only do AI-enabled misleading and fake job applications pose a severe security risk to organisations, but Lautenbach argues that they also threaten a significant loss of productivity for HR departments, which are being “forced to manually screen possibly thousands of fraudulent applications.” In the face of this issue, he advises businesses to conduct thorough background checks to confirm that a candidate’s self-submitted profile matches independently verifiable information. He also advocates testing interviewees on situational knowledge, asking them technical and context-specific questions about prior projects and roles.
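As a minimal illustration of the kind of cross-referencing Lautenbach describes, consider the Python sketch below. The field names, records and review logic are hypothetical; a real background-check pipeline would draw on verified data sources rather than hard-coded dictionaries.

```python
# Illustrative only: compare a candidate's self-submitted profile against
# independently verified records and flag mismatches for human review.
# Field names and the verified-record source are hypothetical.
VERIFIED_FIELDS = ["legal_name", "employer", "employment_start", "employment_end"]

def flag_discrepancies(submitted: dict, verified: dict) -> list[str]:
    """Return the fields where the application disagrees with verified data."""
    flags = []
    for field in VERIFIED_FIELDS:
        claimed = str(submitted.get(field, "")).strip().lower()
        confirmed = str(verified.get(field, "")).strip().lower()
        if claimed and confirmed and claimed != confirmed:
            flags.append(f"{field}: application says {claimed!r}, records say {confirmed!r}")
    return flags

candidate = {"legal_name": "j. smith", "employer": "acme corp", "employment_start": "2019-03"}
records = {"legal_name": "j. smith", "employer": "acme corp", "employment_start": "2021-06"}

for flag in flag_discrepancies(candidate, records):
    print("REVIEW:", flag)  # e.g. route an employment_start mismatch to a recruiter
```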
For Malwarebytes’ Shahak Shalev, the problem is exacerbated by the fact that there’s “no single reliable detector” of AI usage in job applications. He says, “AI detection tools have well-documented false positive problems, and they also can create disparate impact risks under employment law.” So instead of a nonexistent one-size-fits-all AI detector, Shalev recommends “layered friction”. This could involve adding at least one real-time, unplanned interaction that would be hard to fake during an interview, ranging from something simple like asking a candidate to wave a hand in front of their face (an occlusion that deepfake models struggle to render convincingly) to shifting the conversation onto something tangential or unexpected.
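As a rough illustration of why the hand-wave prompt works, the Python sketch below uses OpenCV face detection to measure how often the face disappears from the frame during the prompt. This is a heuristic under stated assumptions, not a production deepfake detector: a genuine feed should lose the face intermittently as the hand passes over it, whereas some face-swap pipelines keep rendering an unoccluded face, and the threshold here is guesswork.

```python
# Heuristic sketch: during a hand-wave prompt, a genuine webcam feed should
# intermittently lose the detected face, while some face-swap pipelines keep
# rendering an unoccluded face throughout. Not a production deepfake detector.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def occlusion_ratio(video_source=0, n_frames=150) -> float:
    """Fraction of sampled frames with no detectable face during the prompt."""
    cap = cv2.VideoCapture(video_source)
    occluded = sampled = 0
    while sampled < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        occluded += int(len(faces) == 0)
        sampled += 1
    cap.release()
    return occluded / max(sampled, 1)

# The 5% cutoff is illustrative: a feed whose face is never occluded
# during the hand-wave prompt may warrant manual review.
if occlusion_ratio() < 0.05:
    print("Face never occluded during hand-wave prompt: flag for manual review")
```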
“For any role with sensitive access, require at least one in-person or verified video stage before the final offer,” Shalev also advises. This should come in addition to cross-checking ID documents, phone numbers and addresses (particularly when company devices need to be shipped out), and to requiring HR to work directly with IT and security when onboarding new hires.
In a similar vein, Christine Kaszubski Aldrich says the most significant change Pindrop has made is to separate ID verification from candidate evaluation. The company is also introducing system-level checks throughout its hiring process, including a tool that verifies whether the same voice is heard at each interview stage.
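At its core, a cross-stage voice-consistency check of the kind Kaszubski Aldrich describes can be sketched as comparing speaker embeddings between interview recordings. The Python below assumes a separate speaker-embedding model (for example, an ECAPA-style verifier) has already turned each recording into a fixed-length vector; the similarity threshold is illustrative, and Pindrop’s actual pipeline is proprietary.

```python
# Sketch of a cross-stage voice-consistency check. Assumes a separate
# speaker-embedding model has turned each interview recording into a
# fixed-length vector; the threshold below is illustrative only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker_across_stages(embeddings: list[np.ndarray],
                               threshold: float = 0.75) -> bool:
    """Compare every later stage's voiceprint against the first stage's."""
    reference = embeddings[0]
    return all(cosine_similarity(reference, e) >= threshold
               for e in embeddings[1:])

# Hypothetical usage, assuming embed() wraps the speaker-embedding model:
# stage_embeddings = [embed(recording) for recording in interview_recordings]
# if not same_speaker_across_stages(stage_embeddings):
#     flag_candidate("voice changed between interview stages")
```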
“The goal isn’t to turn interviews into a security checkpoint,” she concludes. “It’s to move identity verification into the background as much as possible.”