Fighting fraud in times of crisis

Cybercrime is always distressing for those affected, but when the resultant losses come from the public purse, it must be taken even more seriously

Coronavirus has coursed through every facet of our lives, and society and business have already paid a colossal price to restrict its flow. We will be counting the cost for years, if not decades. And while people have become almost anaesthetised to the enormous, unprecedented sums of support money administered by the government, it was still painful to learn, in October, that taxpayers could face losing up to £26 billion on COVID-19 loans, according to an alarming National Audit Office report.

Given the likely scale of abuse, how should authorities go about eliminating public sector fraud? Could artificial intelligence (AI) fraud detection be the answer?

Admittedly, the rapid deployment of financial-aid schemes, at a time when the public sector was also dealing with a fundamental shift in service delivery, created opportunities for abuse and raised the risk of systemic error. Fraudsters have taken advantage of the coronavirus chaos. But their nefariousness is not limited to the public sector.

Ryan Olson, vice president of threat intelligence at American multinational cybersecurity organisation Palo Alto Networks, says COVID-19 triggered “the cybercrime gold rush of 2020”.

Indeed, the latest crime figures published at the end of October by the Office for National Statistics show that, in the 12 months to June, there were approximately 11.5 million offences in England and Wales. Fraud (4.3 million incidents) and cybercrime (1.6 million) together accounted for some 51 per cent of them, year-on-year jumps of 65 per cent and 12 per cent respectively.

Cybercrime gold rush – counting the cost

Jim Gee, national head of forensic services at Crowe UK, a leading audit, tax, advisory and risk firm, says: “Even more worryingly, while the figures are for a 12-month period, a comparison with the previous quarterly figures shows this increase has occurred in the April-to-June period of 2020, the three months after the COVID-19 health and economic crisis hit. The size of the increase needed in a single quarter to result in a 65 per cent increase over the whole 12 months could mean actual increases of up to four times this percentage.”

In terms of eliminating public sector fraud, Mike Hampson, managing director at consultancy Bishopsgate Financial, fears an expensive game of catch-up. “Examples of misuse have increased over the last few months,” he says. “These include fraudulent support-loan claims and creative scams such as criminals taking out bounce-back loans in the name of car dealerships, in an attempt to buy high-end sports cars.”

AI fraud detection and machine-learning algorithms should be put in the driving seat to pump the brakes on iniquitous activity, he argues. “AI can certainly assist in carrying out basic checks and flagging the most likely fraud cases for a human to review,” Hampson adds.
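The triage Hampson describes, with software carrying out basic checks and routing only the most suspicious cases to a human, can be sketched in a few lines. This is a hypothetical illustration, not any real scheme's logic: the field names, rules and threshold are all invented for the example.

```python
# Hypothetical sketch of machine-assisted triage: score each loan application
# with a few crude risk rules, then flag the riskiest for a human reviewer.
# All field names and thresholds are illustrative assumptions.

def risk_score(app):
    """Return a crude fraud-risk score for one application dict."""
    score = 0
    if app["company_age_days"] < 90:                      # very new company
        score += 2
    if app["amount"] > app["reported_turnover"] * 0.25:   # loan large vs turnover
        score += 2
    if app["bank_account_age_days"] < 30:                 # freshly opened account
        score += 1
    return score

def flag_for_review(applications, threshold=3):
    """Split applications into those needing human review and the rest."""
    review = [a for a in applications if risk_score(a) >= threshold]
    auto = [a for a in applications if risk_score(a) < threshold]
    return review, auto

apps = [
    {"id": 1, "company_age_days": 20, "amount": 50000,
     "reported_turnover": 60000, "bank_account_age_days": 10},
    {"id": 2, "company_age_days": 2000, "amount": 10000,
     "reported_turnover": 400000, "bank_account_age_days": 1500},
]
review, auto = flag_for_review(apps)
print([a["id"] for a in review])  # → [1]
```

In production systems the hand-written rules would typically be replaced by a trained model, but the shape is the same: the machine narrows the field, a human makes the final call.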

John Whittingdale, media and data minister, concedes that the government “needs to adapt and respond better”, but says AI and machine-learning are now deemed critical to eliminating public sector fraud. “As technology advances, it can be used for ill, but at the same time we can adapt new technology to meet that threat,” he says. “AI has a very important part to play.”

Teaming up with technology leaders

Technology is already vital in eliminating public sector fraud at the highest level. In March, the Cabinet Office rolled out Spotlight, the government grants automated due-diligence tool built on a Salesforce platform. Ivana Gordon, head of the government grants management function COVID-19 response at the Cabinet Office, says Spotlight “speeds up initial checks by processing thousands of applications in minutes, replacing manual analysis that, typically, can take at least two hours per application”. The tool draws on open datasets from Companies House, the Charity Commission and 360Giving, plus government databases that are not available to the public.
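The kind of automated due-diligence check Gordon describes amounts to cross-referencing each applicant against authoritative registries. The sketch below is illustrative only: the registry data, company numbers and rules are invented, and a real tool like Spotlight draws on far richer datasets.

```python
# Illustrative sketch of automated due diligence: cross-check a grant
# applicant against a Companies House-style registry. The registry contents
# and the red-flag rules are invented assumptions for this example.

REGISTRY = {  # stand-in for an open company-registry dataset
    "12345678": {"name": "Acme Trading Ltd", "status": "active"},
    "87654321": {"name": "Ghost Ventures Ltd", "status": "dissolved"},
}

def check_applicant(company_number, claimed_name):
    """Return a list of red flags for one grant application."""
    flags = []
    record = REGISTRY.get(company_number)
    if record is None:
        flags.append("company number not found in registry")
        return flags
    if record["status"] != "active":
        flags.append(f"company status is '{record['status']}'")
    if record["name"].lower() != claimed_name.lower():
        flags.append("claimed name does not match registry")
    return flags

print(check_applicant("87654321", "Ghost Ventures Ltd"))
```

Running thousands of such lookups in minutes, rather than spending hours of manual analysis per application, is where the speed-up Gordon cites comes from.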

“Spotlight has proven robust and reliable,” says Gordon, “supporting hundreds of local authorities and departments to administer COVID-19 funds quickly and efficiently. To date Spotlight has identified irregularities in around 2 per cent of payments, enabling grant awards to be investigated and payments halted to those who are not eligible.”

We need to watch how the technology fits into the whole process. AI doesn’t get things right 100 per cent of the time

She adds that Spotlight is one of a suite of countermeasure tools, including AI fraud detection, developed with technology companies, and trialled and implemented across the public sector to help detect and prevent abuse and error.

In any case, critics shouldn’t be too hard on the public sector, argues David Shrier, adviser to the European Parliament in the Centre for AI, because it was “understandably dealing with higher priorities, like human life, which may have distracted somewhat from cybercrime prevention”. He believes that were it not for the continued investment in the National Cyber Security Centre (NCSC), the cost of fraudulent activity would have been significantly higher.

Work to be done to prevent fraud

Greg Day, vice president and chief security officer, Europe, Middle East and Africa, at Palo Alto Networks, who sits on Europol’s cybersecurity advisory board, agrees. Day points to the success of the government’s Cyber Essentials digital toolkit. He thinks, however, that the NCSC must “further specialise, tailor its support and advice, and strengthen its role as a bridge into information from both the government and trusted third parties, because cyber is such an evolving space”.

The public sector has much more to do on three fronts in combating cybercrime and preventing fraud, says Peter Yapp, who was deputy director of incident management at the NCSC until last November. It must encourage more reporting, make life difficult for criminals by upping investment in AI fraud detection, and reallocate investigative resources from physical to online crime, he says.

Yapp, who now leads law firm Schillings’ cyber and information security team, says a good example of an initiative that has reduced opportunity for UK public sector fraud is the NCSC’s Mail Check, which monitors 11,417 domains classed as public sector. “This is used to set up and maintain good domain-based message authentication, reporting and conformance (DMARC), making email spoofing much harder,” he says. “Organisations that deploy DMARC can ensure criminals do not successfully use their email addresses as part of their campaigns.”
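For readers unfamiliar with DMARC, a domain publishes its policy as a DNS TXT record at the `_dmarc` subdomain, telling receiving mail servers what to do with messages that fail authentication checks. The record below is a generic illustration using a placeholder domain, not an actual public sector configuration:

```
; Illustrative DMARC policy record (placeholder domain and address)
_dmarc.example.gov.uk.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.gov.uk"
```

Here `p=reject` instructs receivers to refuse mail that fails SPF/DKIM alignment, and `rua` nominates an address to receive aggregate reports, which is how a service like Mail Check can monitor spoofing attempts across thousands of domains.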

While such guidance is welcome, there are potential problems with embracing tech to solve the challenge of eliminating public sector fraud, warns Dr Jeni Tennison, vice president and chief strategy adviser at the Open Data Institute. If unchecked, AI fraud detection could be blocking people and businesses that are applying for loans in good faith, or worse, she says.

“We need to watch how the technology and AI fit into the whole process,” says Tennison. “As we have seen this year, with the Ofqual exam farrago, AI doesn’t get things right 100 per cent of the time. If you assume it is perfect, then when it doesn’t work, it will have a very negative impact on the people who are wrongly accused or badly affected to the extent they, and others, are fearful of using public sector services.”

There are certainly risks with blindly following any technology, concurs Nick McQuire, senior vice president and head of enterprise research at CCS Insight. But the public sector simply must arm itself with AI or the cost to the taxpayer will be, ultimately, even more significant. “Given the scale of the security challenge, particularly for cash-strapped public sector organisations that lack the resources and skills to keep up with the current threat environment, AI, warts and all, is going to become a crucial tool in driving automation into this environment to help their security teams cope.”