Fighting (fake) fire with fire: can deepfakes catch financial scams?

While deepfake technology is often associated with fraud and manipulation, American Express wants to turn it against the scammers

Most of us have seen a deepfake video at one time or another, be it Donald Trump appearing on Better Call Saul or Tom Cruise performing magic tricks on TikTok. The media coverage is often negative: from sexploitation to corporate blackmail, we’re constantly told that deepfakes will enable fraud and deception on a massive scale.

At American Express, however, the technology behind deepfakes is serving the opposite end: fighting fraud. By using hyper-realistic synthetic data to help train its detection systems, the company’s researchers believe they can warn customers more accurately and minimise the number of unnecessary card stoppages.

It’s certainly a bold strategy, not to mention a timely one. Global payment card fraud losses reached $28.65 billion in 2019, according to the Nilson Report. It’s almost certain this figure has risen during the Covid-19 pandemic; various financial agencies have reported an uptick in fraud during recent months, driven by a rise in online shopping.

The Machine Learning advantage

We’ve witnessed a dramatic evolution in fraud tactics over recent years, driven by digital technology. From phishing scams to botnets that run card testing schemes on an industrial scale (where a fraudster “tests” credit card numbers that may have been bought on the dark web, generated at random, or harvested via phishing or spyware), the fraudster has never had more weapons in their arsenal. The advent of deepfakes opens up a potential new front, enabling con artists to dupe victims into handing over their details by simulating the voices of relatives or company bosses.

In their attempts to stem the rising tide, many card companies are now using machine learning (ML), a form of artificial intelligence in which computer systems improve automatically by adapting to the data they receive. Engineers feed reams of transaction data into the ML algorithm; using this data, the algorithm identifies patterns around fraudulent transactions - their size, their location, the time of day they take place - and flags them for the fraud prevention team.
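To make the idea concrete, here is a deliberately simplified sketch of that kind of pattern-learning, using synthetic data and a basic logistic-regression classifier. The feature choices, numbers and thresholds are invented for illustration; they bear no relation to American Express’s actual systems.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic transactions described by two features: [amount, hour of day].
# Legitimate purchases cluster around modest daytime spending;
# the invented fraud pattern is large amounts in the early hours.
legit = np.column_stack([rng.normal(60, 20, 500), rng.normal(14, 3, 500)])
fraud = np.column_stack([rng.normal(900, 150, 500), rng.normal(3, 1.5, 500)])

X = np.vstack([legit, fraud])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = fraud

# Standardise features so gradient descent behaves well.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted fraud probability
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * (p - y).mean()

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the shape of the approach, not the model: labelled transactions go in, and a decision boundary over factors such as amount and time of day comes out, rather than a hand-written rule.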

ML models offer three distinct advantages over conventional, rules-based prevention strategies. First, they can incorporate a multitude of factors. Second, they can adapt to changing behaviour patterns. And finally, they create fewer false positives, reducing the unnecessary card stoppages that cause customers so much frustration. However, there is one crucial caveat: they rely on realistic, high-quality data inputs to identify patterns accurately.

This is where deepfake technology comes in. The technology is itself a form of ML, which relies on a pair of algorithms known as generative adversarial networks (GANs). The two algorithms are, essentially, trying to outsmart one another: one algorithm, the generator, creates the content, while its rival, the discriminator, looks for flaws. Accuracy and rigour are baked into the system.
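That adversarial structure can be sketched in miniature. The toy below is entirely illustrative and far simpler than any production GAN: a one-dimensional linear generator tries to produce samples resembling a single made-up “transaction amount” feature, while a logistic-regression discriminator tries to tell real from fake, each update nudging one network against the other.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

# "Real" data: a one-dimensional stand-in for a transaction feature.
real_mean, real_std = 5.0, 1.0

# Generator G(z) = a*z + b maps random noise to fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) estimates P(x is real).
w, c = 0.1, 0.0
lr = 0.01

for _ in range(2000):
    z = rng.normal(0, 1, 64)
    real = rng.normal(real_mean, real_std, 64)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * ((d_real - 1) * real + d_fake * fake).mean()
    c -= lr * ((d_real - 1) + d_fake).mean()

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    grad_fake = -(1 - d_fake) * w   # gradient of -log D(fake) w.r.t. fake
    a -= lr * (grad_fake * z).mean()
    b -= lr * grad_fake.mean()

fake_samples = a * rng.normal(0, 1, 1000) + b
print(f"fake mean: {fake_samples.mean():.2f} (real mean {real_mean})")
```

Even in this toy form the dynamic is visible: the generator’s output drifts toward the real distribution precisely because the discriminator keeps finding flaws in it.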

Attackers are constantly working to find new exploits, with defenders often playing catch up

According to Dmitry Efimov, American Express’s VP of ML research, GAN technology enables the company’s data scientists to react rapidly to new types of fraud. By simulating spending patterns from genuine transactional data, the data science team can create vast amounts of records for their ML models without relying on real-life information.

“GAN is useful when we’re not able to train a model because of a lack of data. An immediate use case is our fraud model, because fraud patterns can change rapidly. Early detection is the key to prevention.

“If we detect a new pattern of fraud we’ve never seen before, we want to be able to protect our customers from it, so we want to train our model to detect these new patterns. But to train the model you need lots of data, and we may have only seen this new fraud pattern a couple of times. 

“That’s why we started to explore whether GAN could help us solve that problem, by allowing us to use simulated data of that fraud pattern in order to train the models and improve model performance.”
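A heavily simplified version of the augmentation Efimov describes: given only a handful of examples of a new fraud pattern, fit a simple distribution to them and draw extra synthetic records to enlarge the training set. (A real GAN learns a far richer generator than the Gaussian used here, and every number below is invented.)

```python
import numpy as np

rng = np.random.default_rng(7)

# Suppose only five examples of a new fraud pattern have been seen,
# each described by two features: [amount, hour of day].
observed_fraud = np.array([
    [880.0, 2.5],
    [910.0, 3.1],
    [905.0, 2.9],
    [870.0, 3.4],
    [930.0, 2.7],
])

# Fit a multivariate Gaussian to the few real examples ...
mu = observed_fraud.mean(axis=0)
cov = np.cov(observed_fraud, rowvar=False)

# ... and sample many synthetic records from it to augment the
# training set for the fraud model.
synthetic_fraud = rng.multivariate_normal(mu, cov, size=500)

augmented = np.vstack([observed_fraud, synthetic_fraud])
print(augmented.shape)  # (505, 2)
```

The synthetic records share the statistical fingerprint of the observed pattern, giving the downstream model many more examples to learn from than the handful actually seen.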

For now, however, the project remains at the research and experimentation stage. While the GAN data has proved extremely useful when the researchers haven’t had vast swathes of historical spend data to work with (as is the case when dealing with new customers), Efimov admits that the experiments carried out so far show that “GAN-simulated data did not always improve the final [ML] models.”

It remains to be seen whether GAN-based data will become a standard tool for fraud detection across the finance industry. Some commentators are sceptical, suggesting there is a limit to the accuracy that these simulated records can provide.

Cautious optimism

“These arms race dynamics are very challenging,” says Henry Ajder, a freelance advisor on deepfakes and emerging technologies. “Whether it’s detecting deepfakes or suspicious bank transactions, attackers are constantly working to find new exploits, with defenders often playing catch up.”

Generating synthetic data to train detection systems might give them an edge in the short term, Ajder adds, but there’s no guarantee how long that advantage will last. Still, he thinks there could be benefits.

“Think of anti-virus software. No company claims its software will catch every virus, but it does raise the barrier to entry by catching the majority of examples that aren’t on the cutting edge.”

Shifting patterns

For the GANs to be truly useful, the American Express researchers will need to consider a full range of fraud scenarios in their data inputs. The sheer variety of situations that could lead to fraud can be extremely difficult to replicate. As well as conventional card theft, for example, the inputs must include cases in which the target has been tricked into making the transaction themselves. 

“Humans are unpredictable,” says cybersecurity consultant Dr Edewede Oriwoh, who previously worked at the University of Bedfordshire. “Fixed, repeated patterns of behaviour, even when it comes to spending money, may not always appear. Patterns may change drastically depending on an individual’s mood or recent events.”

American Express will need highly varied methods “to ensure their algorithmic model does not flag too many false positives”, Oriwoh adds.

The scale of the challenge, then, is considerable. However, some are optimistic about the project, viewing deepfakes as a genuine solution to fraud in the long term.

“If you’re talking about recreating human faces or voices, there are still some telltale signs with deepfakes,” says Leroy Terrelonge, a senior cyber risk analyst at Moody’s Investors Service. “But when you’re dealing with a document that’s essentially just numbers and text, I don’t see what the barrier is.”

American Express has data that potentially goes back all the way to the start of the company, Terrelonge notes, and ML systems are powerful because they can recognise patterns far more quickly than humans can.

“This seems like a very feasible use case for deepfakes.”