Deepfaking it: the new cybersecurity frontier

From impersonating a top executive to opening money-laundering bank accounts, deepfake fraud is a growing problem that poses new challenges

A few weeks ago, on a routine company video call, the tech team decided to prank the boss, and five of them turned up looking like him. 

“It was very spooky. They used a publicity still of me and the person in the publicity still was blinking, moving his head, smiling, talking, saying things I don’t say, but it was me,” Andrew Bud, chief executive of biometric authentication provider iProov, recalls. 

Over the past couple of years, deepfakes – manipulated videos or audio recordings that appear to show individuals doing or saying things they never did or said – have started to emerge. Most feature celebrities or political figures, with some created purely for amusement value and others serving as vehicles for misinformation. 

Deepfake threat to businesses

However, new types of deepfake have now entered the frame with the aim of committing fraud. Indeed, the use of deepfake video and audio technologies could become a major cyberthreat to businesses within the next couple of years, cyber-risk analytics firm CyberCube warns in a recent report.

“Imagine a scenario in which a video of Elon Musk giving insider trading tips goes viral, only it’s not the real Elon Musk. Or a politician announces a new policy in a video clip, but once again it’s not real,” says Darren Thomson, head of cybersecurity strategy at CyberCube. 

“We’ve already seen these deepfake videos used in political campaigns; it’s only a matter of time before criminals apply the same technique to businesses and wealthy private individuals. It could be as simple as a faked voicemail from a senior manager instructing staff to make a fraudulent payment or move funds to an account set up by a hacker.”

In fact, such attacks are already starting to occur. In one high-profile example in 2019, fraudsters used voice-generating artificial intelligence software to fake a call from the chief executive of a German firm to his opposite number at a UK subsidiary. Fooled, the UK chief executive duly authorised a payment of $243,000 to the scammers. 

“What we’re seeing is these kinds of attacks being used more and more. They’re not overly sophisticated, but the amount of money they’re trying to swindle is quite high,” says Bharat Mistry, technical director, UK and Ireland, at Trend Micro.

“I was with a customer in the UK and he was telling me he’d received a voicemail, and it was the chief information officer asking him to do something. Yet he knew the CIO of the organisation was on holiday and would never have phoned. There was no distinguishing factor, so you can see how clever it is.”

Attacks such as this follow the same pattern as traditional business email compromise scams, but with vastly more sophistication. 

“We’ve seen all these cloud technologies, things like analytics, machine-learning and artificial intelligence, and deepfakes are just an extension of that technology, using the tech in an abusive manner,” says Mistry.

Fraudulent bank accounts

Another emerging type of deepfake fraud is the fraudulent creation of accounts, whether they are bank accounts, foreign exchange dealing accounts or share dealing accounts. These can be used by organised crime for the purposes of money laundering. And with the advent of the coronavirus pandemic, what was previously a gradual shift to remote account creation has now been massively accelerated, along with the potential for fraud.

Setting up an account remotely generally involves a two-step process: first, providing a scan of an identity document and then presenting a selfie. The selfie is often generated by asking the applicant to record a video in which they recite words or numbers, or perhaps through a short video interview with an agent. 

“It’s obviously been a good way of protecting against fraud up until now, but now the fraudsters can deepfake themselves to look like the innocent victim,” says Bud. 

“They may have stolen or copied the documents of an innocent victim from some source, and then all they need to do is deepfake the victim’s face onto their face and conduct the interview with the agent, and the agent will be none the wiser.”

In a report late last year, identity verification firm Jumio found selfie-based fraud rates were five times higher than ID-based fraud rates, and that such fraud was particularly prevalent where users are able to upload their own ID images. This means fraudsters can manipulate a legitimate ID or use an image of an ID found on the dark web or from a Google Images search.

Financial institutions are awakening to the risk. In a survey for iProov, three quarters of cybersecurity experts in the financial sector said they were concerned about deepfake fraud and nearly two-thirds said they expected the threat to get worse.

“Banks like ING, Rabobank in the Netherlands, Standard Bank in South Africa and the government of Singapore, which is supplying the financial services industry, these are all aware of the threat of deepfakes and are taking proactive measures,” says Bud.

However, only 28 per cent of survey respondents said they’d put plans in place to protect against deepfakes, with 41 per cent planning to do so in the next two years. 

And with another poll of banking customers revealing most were unconcerned about deepfake fraud, introducing extra security measures can be problematic.

“There’s a big difference between how much cybersecurity experts think people care and how much they do care, and that turns into a problem as soon as they try to implement intrusive measures,” says Bud.

“There is a risk that if they protect against deepfakes in ways that impact the customer experience, it will be immediately resisted.”

How organisations are fighting back

The first line of defence against impersonation attacks, says Mistry, is to make sure all standard security procedures are implemented and to build in automatic checks.

“If they’re asking for a money transfer or to change something or to amend something on a document, then it should be verified through another channel,” he says.
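As a rough illustration of the out-of-band check Mistry describes, the sketch below holds a payment request until a one-time code sent over a separate channel – for instance a call or text to a number already on file, which the attacker does not control – is confirmed. All class and parameter names are hypothetical, not a reference to any real product.

```python
import secrets
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    beneficiary_account: str

class OutOfBandVerifier:
    """Holds pending requests until a code sent via a second channel is confirmed."""

    def __init__(self, send_via_second_channel):
        # send_via_second_channel might text or phone the requester's
        # registered number -- a channel independent of the original request.
        self._send = send_via_second_channel
        self._pending = {}

    def challenge(self, request: PaymentRequest) -> str:
        code = secrets.token_hex(3)        # short one-time confirmation code
        request_id = secrets.token_hex(8)
        self._pending[request_id] = (request, code)
        self._send(request.requester,
                   f"Confirm payment of {request.amount} to "
                   f"{request.beneficiary_account} with code {code}")
        return request_id

    def confirm(self, request_id: str, code: str) -> bool:
        request, expected = self._pending.pop(request_id, (None, None))
        # The transfer proceeds only if the code from the independent channel matches.
        return request is not None and secrets.compare_digest(code, expected)
```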

Financial institutions, meanwhile, are turning to more sophisticated methods of detecting deepfakes.

Passive liveness detection uses algorithms that examine textures, edges and other image properties for signs that a face is not a genuine live capture.
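The toy sketch below illustrates the general idea of inspecting textures and edges; real products rely on trained models and many more cues, and the threshold and file name here are assumptions made purely for the example.

```python
import cv2
import numpy as np

def naive_liveness_score(face_bgr: np.ndarray) -> float:
    """Crude single-frame score: more natural edge and texture detail scores higher."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)

    # Edge energy: variance of the Laplacian is a common sharpness measure.
    edge_energy = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Texture: high-frequency residual left after blurring the image.
    residual = gray.astype(np.float64) - cv2.GaussianBlur(gray, (9, 9), 0)
    texture = residual.std()

    return float(edge_energy + texture)

frame = cv2.imread("selfie_frame.jpg")     # hypothetical input frame
if naive_liveness_score(frame) < 50.0:     # illustrative threshold, not a real figure
    print("Low texture/edge detail - escalate for further checks")
```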

Increasingly, though, active detection is being used, introducing information the deepfaker can’t predict and can’t effectively spoof. 

“What we do is illuminate the subject; we use the screen of the person’s device to illuminate them with a rapidly changing sequence of colours,” says Bud.

“Then we stream the video of their face back to our servers and analyse the way the light reflects from their face and the sequence of colours reflected on their face, and this is an unpredictable element that it’s difficult for a deepfake to replicate effectively.”
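What Bud outlines is essentially a challenge-response protocol. The toy sketch below is not iProov’s actual method – every detail is an assumption – but it shows the principle: the server generates an unpredictable colour sequence, then checks whether the dominant tint reflected in the returned frames follows that sequence.

```python
import random
import numpy as np

# Illustrative colour challenge; frames are assumed to be HxWx3 RGB arrays.
COLOURS = {"red": (1.0, 0.0, 0.0), "green": (0.0, 1.0, 0.0), "blue": (0.0, 0.0, 1.0)}

def make_challenge(length: int = 8) -> list[str]:
    # Server-side: an unpredictable sequence the fraudster cannot pre-render.
    return [random.choice(list(COLOURS)) for _ in range(length)]

def dominant_tint(face_frame: np.ndarray) -> str:
    # Crude proxy for the reflected illumination: which colour channel is strongest.
    means = face_frame.reshape(-1, 3).mean(axis=0)
    return ["red", "green", "blue"][int(np.argmax(means))]

def verify(challenge: list[str], frames: list[np.ndarray], min_match: float = 0.75) -> bool:
    # One frame per challenge colour; a genuine live capture should track the sequence.
    matches = sum(dominant_tint(f) == c for c, f in zip(challenge, frames))
    return matches / len(challenge) >= min_match
```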

What’s clear is that the use of deepfakes for fraud is an escalating risk, and over the coming years the arms race between fraudsters and security professionals will only intensify.

“At the moment it’s in its infancy; a lot of cybercriminals are still using ransomware or business email compromise, but as these channels start to dry up and people cotton on, I think they’re going to move on to deepfakes more and more. At the moment, the only limiting factor is the technology,” Mistry concludes.