How financial services operators are dialling up conversational AI to catch out fraudsters

Organisations are using new technology to analyse the voices of those posing as customers in real time while reducing false positives

Great Britain is the fraud capital of the world, according to a Daily Mail investigation published in June. The study calculated that 40 million adults have been targeted by scammers this year. In April, a reported £700m was lost to fraud, compared with an average of £200m per month in 2021. As well as deploying convincing ruses, scammers are becoming increasingly sophisticated cybercriminals.

If the UK does go into recession, as predicted, the level of attacks is likely to increase even further. “Any economic and supply-chain pressure has always had an impact and motivated more fraud,” says Jon Holden, head of security at digital-first bank Atom. He suggests that the “classic fraud triangle” of pressure, opportunity and rationalisation comes into play.

Financial services operators are responding by investing in nascent fraud-prevention technologies such as conversational AI and other biometric solutions. “Conversational AI is being used across the industry to recognise patterns in conversations, with agents or via chatbots, that may indicate social engineering-type conversations, to shut them down in real time,” continues Holden. “Any later than real time and the impact of such AI can be deadened as the action comes too late. Linking this to segmentation models that identify the most vulnerable customers can help get action to those that need it fastest and help with targeted prevention activity too.”
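To make the idea concrete, here is a minimal sketch of the kind of real-time monitoring Holden describes. It is purely illustrative: production systems typically run trained language models over streamed speech-to-text, whereas this example simply scores each utterance against a hand-written list of social-engineering indicators. The phrases, weights, threshold and the ConversationMonitor class are assumptions for the sketch, not any bank's or vendor's actual implementation.

```python
# Hypothetical sketch of real-time social-engineering detection on a live
# conversation transcript. Indicator phrases, weights and the alert
# threshold are illustrative assumptions, not a vendor's model.
from dataclasses import dataclass, field

# Phrases commonly associated with social-engineering scripts, each with an
# assumed weight reflecting how strongly it suggests a scam in progress.
INDICATORS = {
    "move your money to a safe account": 0.9,
    "do not tell your bank": 0.8,
    "read me the one-time passcode": 0.8,
    "install this remote access app": 0.7,
    "act now or your account will be closed": 0.6,
}

ALERT_THRESHOLD = 1.0  # illustrative cut-off for escalating the session


@dataclass
class ConversationMonitor:
    score: float = 0.0
    matched: list[str] = field(default_factory=list)

    def ingest(self, utterance: str) -> bool:
        """Score each utterance as it arrives; return True once the
        accumulated evidence warrants shutting the conversation down."""
        text = utterance.lower()
        for phrase, weight in INDICATORS.items():
            if phrase in text and phrase not in self.matched:
                self.matched.append(phrase)
                self.score += weight
        return self.score >= ALERT_THRESHOLD


# Example: feeding a (made-up) transcript through the monitor line by line.
monitor = ConversationMonitor()
for line in [
    "Hello, I'm calling from your bank's fraud team.",
    "You need to move your money to a safe account today.",
    "Whatever you do, do not tell your bank about this call.",
]:
    if monitor.ingest(line):
        print(f"Escalate: {monitor.matched} (score {monitor.score:.1f})")
        break
```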