Do businesses really need to worry about deepfakes?

The rapid advance and increasing availability of AI-based technology are making it ever easier for criminals to defraud unsuspecting companies by impersonating senior executives


When thieves made off with $35m (£28m) from an unnamed US company in early 2020, all it took was one telephone conversation and a few emails. 

To execute the heist, the criminals used artificial intelligence to clone the voice of a director at that firm, convincing a manager that the call was a genuine one coming from HQ, according to court documents. Posing as the director, they instructed their victim to transfer the money as part of an acquisition the company was supposedly making. 

The emails, designed to look like they were coming from a corporate lawyer, backed up the deception.

In another case, reported by the Wall Street Journal, the CEO of a British energy company was tricked into thinking he’d been phoned by the boss of the firm’s German parent company. When asked to send €220,000 (£190,000) to the bank account of what he thought was a Hungarian supplier, he duly complied.

Both are examples of so-called deepfake fraud, a scam that uses artificial intelligence to impersonate another person on a phone call or even a video conference. While documented cases remain relatively rare, fraud experts report that the threat is increasing as advanced AI tech becomes more accessible.

“We’re on the cusp of seeing these situations more and more,” says David Fairman, chief information officer at cybersecurity firm Netskope. “With the rise of generative AI over the past year, it has become much, much easier to gain access to these capabilities. These services are more widely accessible to the masses – you don’t need to be a data scientist or have a strong technical background to start using them for malicious purposes.” 

The risk of deepfake fraud is becoming more real

Security experts are also seeing examples of deepfake tech being used in extortion attempts. Fairman has heard of cases in which criminals created deepfake images portraying senior executives in compromising situations to blackmail their victims into giving them access to their firms’ resources.

This technology is not only being used as a vehicle for stealing money, as Mike LaCorte, CEO of private detective agency Conflict International, notes.

“It could be used for competition research, industrial espionage or even deliberate efforts to spread disinformation or damage a rival’s reputation,” he says. 

As the two documented cases of stolen funds highlight, attackers will often impersonate someone in a key position because their subordinates are less likely to resist their requests.

“When employees think they’re dealing with someone in the C-suite, it applies an element of pressure and urgency that can almost force the situation,” Fairman notes.

Deepfake tools can also make it easier for criminals to mount so-called social engineering attacks, typically targeting new starters or lower-level employees and building their trust over time, gradually creating opportunities to commit fraud.

“Each time you interact with someone remotely, you could be at risk of thinking you’re dealing with a real person but it’s actually a deepfake,” says Sabrina Gross, regional director at digital ID platform Veridas.

Although deepfakers may be attracted by the high potential rewards of targeting a large business, smaller firms are just as vulnerable to attack, if not more so, given they’re less likely to have robust governance processes in place, Fairman warns.

How can firms defend against deepfakes?

While the risks are clear, it’s hard to gauge the true scale of the problem, partly because organisations are unlikely to broadcast that they have been hoodwinked. Depending on the type of attack, companies may not even realise that they have become victims.

“There aren’t lots of stats on this, unfortunately, because no one really wants to share where they’re super-vulnerable,” Gross says. “Much of the time, a firm wouldn’t necessarily know that it has been hit unless someone in the organisation were actively seeking a security breach.”

Boardrooms are becoming increasingly concerned about the broader risks associated with the rapid advance of AI. Research by cybersecurity firm Kaspersky indicates that 59% of C-suite members are worried about the potential security threat presented by generative AI. Despite this, only 22% have discussed establishing safeguards in leadership meetings.

“It’s quite concerning that they recognise the potential problem, but haven’t got the capability to meet the challenge,” observes David Emm, principal security researcher at Kaspersky.

But there are some basic cyber hygiene techniques that any firm can adopt to raise awareness of the deepfake risk across the organisation. For instance, while fake audio can be hard to detect, there are other non-technical warning signs that people should be alert to, as Emm explains.

“With deepfakes, it makes more sense to consider the behavioural context,” he says. “It’s less of a matter of asking yourself: ‘Is this speech a bit jittery or is this a shaky image?’ It’s more a question of: ‘Was I expecting this person to get in touch and are they pressuring me into doing something out of the ordinary?’”

In such instances, firms could establish a call-back procedure so that the authenticity of a request can be verified. They would also be well advised to cover the threat of deepfake attacks in their broader cybersecurity training.

“All organisations have a responsibility to ensure that they have a strong control framework and control processes in place,” Fairman stresses. 

As sophisticated deepfake tools become ever more accessible, firms must ensure that all staff understand that the caller on the other end of the line, however genuine they may sound, might not be the person they claim to be.