With up to 59% of jobseekers already using AI to write their CVs, companies are facing a wave of fraudulent remote workers that poses a serious cybersecurity risk for organisations, says Equifax’s SVP of Talent Solutions.
AI may be supercharging it, but the tendency to embellish or even lie on job applications is neither new nor rare. In fact, a 2017 US-based study found that 72% of applicants embellish their CVs, while 31% have included outright fabrications. Such findings have been replicated by other studies in the years since, yet the more recent addition of generative AI is creating an increasingly difficult situation for recruiters and HR departments.
According to a survey published in April by Resume Genius, 59% of jobseekers already use AI to write their CVs, while 22% have used AI to answer interview questions, and 19% have used it to complete skills assessments. This is already alarming enough on its own, but companies have also been facing a wave of entirely fraudulent remote workers, often based in North Korea, who have used AI to create fake identities. Such candidates have funnelled salaries to the (sanctioned) North Korean government, while some have even accessed and downloaded sensitive data, creating a serious cybersecurity risk for organisations.
Companies are already waking up to such risks, yet some are more alert than others. One of these is Equifax Workforce Solutions, whose SVP and GM of Talent Solutions Bart Lautenbach gave an interview to Raconteur, addressing the emergence of AI-enabled remote worker fraud. While this threat is expected to continue growing in the near future, Lautenbach affirms that there are several key measures organisations can adopt to reduce their potential exposure. That way, they can devote more time to legitimate job applicants.
From ‘padded resumes’ to ‘synthetic identities’, HR teams face a spectrum of AI-enabled challenges
Lautenbach acknowledges that fraudulent uses of AI are becoming increasingly common among job applicants, with incidents involving remote North Korean workers being the most notorious. Yet he also explains that the problem is something Equifax Workforce Solutions, a subsidiary of Equifax, experiences at first hand.
“Clients tell us they’re seeing bad actors get through application processes using synthetic identities,” he says. “This problem extends from the initial hiring phase through to the onboarding process, where bad actors are increasingly utilising AI-enhanced falsified documents to prove their eligibility to work in the U.S.”
The first examples of this kind of activity were reported in August 2022, while Lautenbach notes that the problem is likely to become more widespread in the coming years. “By 2028, Gartner predicts 1 in 4 job seekers will be fake,” he says.
And what makes the issue more complicated for recruiters and HR teams is that there’s now a full spectrum of AI use in job applications, ranging from “padded resumes” to entirely fictitious identities. “The problem of fabricated or misleading information on applications is extremely prevalent, and it’s worsened by the reality that AI-generated resumes make it more difficult to detect this information,” he explains.
“HR teams are being forced to manually screen possibly thousands of fraudulent applications”
Equifax has conducted its own survey of HR professionals regarding this problem, finding that 71% have received “fabricated or misleading” information from candidates. Yet according to Lautenbach, “only 20% of this group said they were ‘very confident’ in detecting fabricated or misleading information on resumes.”
This inability to confidently detect false info is a serious problem for companies, and not only for cybersecurity-related reasons. As Lautenbach explains, it creates “a two-fold crisis for employers by presenting a severe security risk to sensitive internal systems, and by causing a potentially massive loss of productivity as HR teams are forced to manually screen possibly thousands of fraudulent applications.”
Mapping self-reported claims to situational knowledge
The question therefore turns to what businesses can do to not only weed out fraudulent candidates, but to weed them out efficiently. And for Lautenbach, HR departments should seek to actively verify candidates via a mix of “data-driven screening” and situational knowledge.
“Because research finds that 93% of job seekers have embellished or lied during the hiring process, verifying [that] an individual is who they say they are is a critical guardrail,” he explains. “Completing a background check that verifies an identity, education, and employment — then comparing that data against a candidate’s resume — can help confirm trust in a new hire.”
What this means in practice is that, in parallel with the traditional application and interview process, systems should be run — perhaps by IT and/or security departments — to verify and cross-check candidate information wherever public or third-party info on a candidate is available. “By transitioning the hiring framework from merely validating claims to establishing a verified history, employers can help mitigate business risks while helping ensure that authentic, qualified candidates are prioritised,” Lautenbach adds.
“HR departments should seek to actively verify candidates via a mix of ‘data-driven screening’ and situational knowledge”
In addition to a data-first verification approach, Lautenbach recommends that HR teams move beyond superficial resume reviews, mapping a candidate’s self-reported history to the situational knowledge they should be able to demonstrate during interviews. He says, “Asking technical, context-specific questions about the how and why of a previous project can help recruiters identify when a candidate might lack the deep experience that their polished, and potentially fraudulent, resume suggests they have.”
If adopted together, such strategies can significantly reduce potential exposure to misleading or fraudulent job applicants. They may not represent a silver bullet, but introducing them as part of a routine screening system is likely to be hugely effective. That said, companies should always remain conscious of how quickly AI tools are evolving, since today’s deceptive practices could become obsolete tomorrow.