Snooping on the police: can AI clean up the Met?

London’s police service is to use AI to root out misconduct in its ranks, after a string of blows to its reputation
Posters highlighting some major police failures hang on the railings outside the Met’s headquarters in London. Mark Kerrison via Getty Images

Shamed and appalled by the brutal murder of Sarah Everard at the hands of a serving officer, the British public demanded a swift response from the Metropolitan Police Service. 

A subsequent review into the conduct of officers based at Charing Cross in London unearthed a toxic environment where colleagues bonded over jokes about rape, killing black children and beating their wives.

Heads had to roll, starting with that of the then Met commissioner, Dame Cressida Dick. The poor handling of the Everard case did little to dispel its own watchdog’s conclusion that the Met is “systematically and institutionally corrupt”.

Inspector of Constabulary Matt Parr said that the Met had “sometimes behaved in ways that make it appear arrogant, secretive and lethargic” in response to investigations into dirty cops, and that it did “not have the capability to proactively monitor” communications with any effect, “despite repeated warnings from the inspectorate”.

One regime change and tens of millions of pounds later, the Met is now the owner of shiny new AI software that analyses emails, mobile chat apps and other data, such as printer logs and overtime records, to sniff out wrongdoing.

Acting commissioner Sir Stephen House told parliament’s Home Affairs Committee that the technology, which applies programmable algorithms to an individual’s datasets, can learn behaviours over time and warn when something is amiss. 

“This would sit above our systems and look at internal emails and Metropolitan Police mobile phones issued to our officers to check for alarming keywords, and at the amount of overtime worked,” Sir Stephen said. 

Various communication and personal data points would be harvested to spot officers going off the rails, in the hope that the system would become more accurate as it learnt from the data.
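The Met has not said how its system works, but what Sir Stephen describes, keyword scanning plus a per-officer baseline that flags deviations, resembles textbook anomaly detection. A minimal sketch in Python, in which every name, keyword and threshold is a hypothetical illustration rather than a detail of the Met’s software:

```python
from collections import deque
from statistics import mean, stdev

# Hypothetical alarming-keyword list; the Met has not disclosed its own.
ALARM_KEYWORDS = {"bribe", "tip-off", "delete the log"}

class OfficerMonitor:
    """Illustrative per-officer baseline: flags configured keywords in
    messages, and overtime that deviates sharply from the officer's own
    recent history (the "learning over time" is just a rolling window)."""

    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.overtime_history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def check_message(self, text: str) -> list[str]:
        # Return any configured keyword appearing in the message.
        return [kw for kw in ALARM_KEYWORDS if kw in text.lower()]

    def check_overtime(self, hours: float) -> bool:
        history = list(self.overtime_history)
        self.overtime_history.append(hours)  # update the rolling baseline
        if len(history) < 10:
            return False  # too little history to judge
        mu, sigma = mean(history), stdev(history)
        return sigma > 0 and (hours - mu) / sigma > self.z_threshold

monitor = OfficerMonitor()
for hours in [4, 5, 6, 5, 4, 5, 6, 4, 5, 5, 21]:
    if monitor.check_overtime(hours):
        print(f"overtime anomaly: {hours}h")  # fires on the 21-hour week
```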

“Today’s society is so reliant on electronic devices that it is a logical step to widen surveillance to those used in public office to be held accountable – especially those who have wide legislative powers,” says Dr David Lowe, senior research fellow in policing and security at Leeds Beckett University Law School and a former police officer. “The issue is how the data obtained will be stored and used, as well as how the term ‘alarming keywords’ is defined,” he says.

Met officials are tight-lipped on details, other than stating the aim is to have the surveillance tools in place “next year”. “To further advance our counter-corruption capability, we are preparing to invest a multimillion-pound sum in technology to monitor the use of devices by more than 40,000 officers and staff,” a spokesperson says. 

Unions are unimpressed at the prospect of Big Brother looking over their members’ shoulders. “The Public and Commercial Services union believes there needs to be a culture change in the Met, but we question whether snooping on employees is the answer,” PCS general secretary Mark Serwotka says. “While the assistant commissioner has announced plans for new software to monitor employees’ phones and computers, he hasn’t consulted us on the issue.”

Algorithms trained to predict police misconduct have been in use for decades, with varying degrees of success. In the 1990s, the Chicago Police Department built a neural-network tool to generate alerts when an officer’s behaviour showed red flags. The software contained models to predict which officers would be sacked for misconduct, connecting complaints of bad behaviour, logged by colleagues or the public, to personal stressors such as divorce or debt.
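Chicago’s models were never published in detail, but the basic pattern, scoring dismissal risk from complaint history and personal stressors, can be sketched with standard tools. In this sketch a logistic regression stands in for the original neural network, and every feature and figure is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented per-officer features: [public complaints, peer complaints,
# recent divorce (0/1), serious debt (0/1)].
X = np.array([
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [2, 1, 1, 0],
    [5, 2, 1, 1],
    [3, 1, 0, 1],
    [0, 1, 0, 0],
    [6, 3, 1, 1],
    [1, 0, 1, 0],
])
# Invented labels: 1 = later dismissed for misconduct.
y = np.array([0, 0, 1, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a new officer: four public complaints, one peer complaint,
# a recent divorce, no debt flag.
risk = model.predict_proba([[4, 1, 1, 0]])[0, 1]
print(f"estimated dismissal risk: {risk:.0%}")
```

An early-intervention system like Chicago’s would feed scores of this kind into supervisor alerts; the contested questions are who sets the threshold and what happens to the officers it flags.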

In 2015, police and academics in Charlotte, North Carolina, took the baton from Chicago’s early-intervention system and created a more advanced behavioural analytics platform able to process wider data points.


They found officers are as likely as anyone to underperform at work when experiencing personal issues, but they are also exposed to a wholly different level of stress from the people they serve and protect. Those involved in suicide and domestic-abuse calls earlier in their shifts were much more likely to become involved in adverse interactions later in the day, the researchers found. Stressful calls emerged as a leading indicator of later wrongdoing, but forces have little control over which officers are dispatched to which crimes during their shifts.
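In data terms, that headline finding reduces to a feature-engineering step: counting an officer’s high-stress dispatches in the hours before each incident. A hypothetical illustration, with invented call types and times:

```python
from datetime import datetime

# Invented dispatch log for one officer's shift: (time, call type).
HIGH_STRESS = {"suicide", "domestic abuse"}
calls = [
    ("09:10", "traffic stop"),
    ("10:45", "suicide"),
    ("12:30", "domestic abuse"),
    ("15:20", "noise complaint"),
]

def stress_count_before(calls, incident_time: str) -> int:
    """Count high-stress calls dispatched earlier in the shift."""
    cutoff = datetime.strptime(incident_time, "%H:%M")
    return sum(
        kind in HIGH_STRESS and datetime.strptime(t, "%H:%M") < cutoff
        for t, kind in calls
    )

# An adverse interaction logged at 16:05 follows two high-stress calls.
print(stress_count_before(calls, "16:05"))  # -> 2
```

A feature like this could feed an early-warning score, though, as the researchers found, forces have little say over who gets dispatched where.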

Employee surveillance trends in the corporate world are certainly favouring predictive analytics about employee welfare, especially as work moves online. Companies want a better view of staff they cannot manage or measure in person, while old-fashioned supervisory tasks are being automated and outsourced to robots. 

Regulated businesses are often required to have clear policies on private devices and the use of encrypted communication apps such as WhatsApp, Signal and Telegram, but the Met refused to say whether such private channels could or would be monitored. 

The lack of transparency has concerned experts, who say it goes to the heart of the problem, and that the focus should remain on improving the force’s culture and setting the tone from the top.

“There is good reason to be sceptical that the Met will successfully rebuild public trust and confidence through an increased reliance on emerging technologies,” says Dr Gabrielle Watson, University of Oxford fellow and award-winning author of Respect and Criminal Justice.

Misconduct and collusion often occur outside official channels, and heavy-handed surveillance could push the problem elsewhere, Watson says. Algorithms scanning for keywords need wider context, she adds, raising questions about which behaviours will be monitored and who retains ultimate power over the system and its results.

“Without careful justification and implementation, the algorithmic tools proposed could embed corruption yet further by prompting officers to take their discussions offline and so continue undetected,” Watson says. “The reputational damage incurred by the force in recent years is simply too great to be dispelled through excessive spending on technology alone.”

Chicago’s attempt at solving endemic corruption in its force with technology failed. The neural-network tool worked too well in identifying rogues, and without a framework to improve the culture of the organisation in tandem, it was torpedoed before it could cause further embarrassment.

It lasted two years, by the end of which all of its reports, recommendations and predictions had gone missing. Union figures blocked the system from being used again, arguing that such intrusive monitoring hampered police in doing their jobs and that officers were being punished for crimes they hadn’t yet committed. The Met has its work cut out overcoming similar resistance, but given its current standing in the public’s eyes, it has no other choice.