If people are their businesses’ greatest asset, they’re also their greatest liability. The cost of poor data management is colossal: human error in this respect knocks $3.1tn (£2.3tn) each year off the US economy alone, IBM has estimated.
Fat fingers aside, malicious acts by workers also cost businesses hundreds of billions of pounds a year. Exploiting the increasing dependence of firms on online operations since the pandemic started, cybercriminals have been “developing and boosting their attacks at an alarming pace”, according to Interpol. In a significant proportion of cases, rogue employees at the organisations being targeted are committing or abetting acts such as fraud, intellectual property theft and corporate espionage.
The smart thing for a company to do when faced with such a potentially costly risk is to analyse the data. But when that data is, in essence, people and the decisions they make, the issue becomes thornier still. Sales of AI-based surveillance systems have boomed during the pandemic, but employee monitoring still carries a stigma that gathering information about customers, say, does not.
“Many technologies are immensely useful for collecting employee data, aggregating it and providing a handy overview of team performance,” says Ashish Gupta, senior lecturer in HR management at the University of Law Business School. “But their application by organisations is more likely to create anxiety and conflict.”
Corporate interest in surveillance tech was strong even before the Covid crisis. For instance, when Accenture polled 1,400 business leaders in 13 industries in 2018, 62% of respondents said that their firms were using new technologies to collect data on staff and their activities. But the pandemic has weakened companies’ efforts to combat employee malfeasance, according to Fahreen Kurji, chief customer intelligence officer at risk and compliance specialist Behavox.
“Businesses that were quick to adopt remote working without fully comprehending the risks have sleepwalked into trouble,” she reports. “We have seen a huge rise in the number of bad actors simply because HQs have become digital and there is less oversight.”
Teams in thousands of UK companies have become dispersed as a result of the government’s work-at-home guidance, often without a risk assessment of which controls still apply to them. Old-fashioned supervisory methods such as in-person observations and impromptu chats are becoming things of the past in this new era of video meetings. Monitoring employees’ digital footprints can therefore seem the obvious alternative.
Banks, which by law must record the communications of certain employees, have been among the earliest adopters of AI-based surveillance, given its usefulness as a compliance tool. The benchmark-rigging scandals of the late noughties revealed the limitations of the previous monitoring tech, which simply scanned emails for telltale words and phrases. It is being replaced by analytics platforms that draw on a far broader range of data.
Machine learning algorithms can be trained on GPS data, swipe-card usage and various chat and video formats, for instance, to build a detailed picture of any employee’s behaviour. Someone who’s working excessive hours, say, or sending emails with attachments or even passwords outside the business may be showing signs of going off the rails.
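The behavioural baselining described here can be sketched in miniature: score each employee's activity against the team's norm and flag the outliers. Everything in the sketch below – the feature names, the figures and the idea of using a simple z-score rather than a trained model – is an illustrative assumption, not any vendor's actual system.

```python
# A toy behavioural-anomaly score: for each employee, compare activity
# features (hypothetical ones here: weekly hours worked and emails sent
# outside the business with attachments) against the team average, and
# report the largest deviation in standard-deviation units.
from statistics import mean, stdev

def anomaly_scores(records):
    """records: {employee: (hours_worked, external_attachments_sent)}.
    Returns each employee's largest absolute z-score across features."""
    names = list(records)
    columns = list(zip(*(records[n] for n in names)))
    stats = [(mean(col), stdev(col)) for col in columns]
    scores = {}
    for name in names:
        zs = [abs((value - m) / s) if s else 0.0
              for value, (m, s) in zip(records[name], stats)]
        scores[name] = max(zs)  # flag the most unusual feature
    return scores

team = {
    "emp_a": (40, 1), "emp_b": (41, 0), "emp_c": (39, 2), "emp_d": (40, 1),
    "emp_e": (75, 14),  # excessive hours and outbound attachments
}
scores = anomaly_scores(team)
flagged = [n for n, s in scores.items() if s > 1.5]
```

In this toy run, only `emp_e` crosses the (arbitrary) threshold. Real platforms train on far richer signals – GPS traces, swipe-card logs, chat and video metadata – but the underlying idea of comparing an individual against a learned baseline is the same.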
“Insider dangers, such as intellectual property theft and corporate espionage, are the kinds of problems where it pays to be ahead of the curve rather than running an investigation after the fact, as we’ve seen historically with various insider-trading scandals,” Kurji says.
Late last year, New York hedge fund the Jordan Company installed AI-powered software that can monitor individuals’ emails, videoconferences and other data. It will build a far more nuanced picture of how each person operates than was possible when using older surveillance methods.
The company’s general counsel, Ugo Ude, reports that the new system is continually learning and “gets better and more accurate every time I log in”. Previously, Ude would simply pick random emails or phone calls to monitor before judging whether any wrongdoing might have occurred. Now that this task has been automated, he can devote his time to more value-adding work.
The use of AI-based employee surveillance systems will only increase as firms face greater regulatory pressures and the potentially ruinous threat of reputational damage. So says Paul Hodge, co-founder of risk and control specialist 1LoD and a former head of first-line supervision for Barclays in EMEA.
These systems take an “employee-focused approach to identifying potential market abuse, poor cultural indicators and examples of conduct risk, linking an individual’s communications across multiple channels with their sales or trading activities”, he says. They can also monitor “personal employee data such as credit scores and time in and out of the office. In some cases, they will even analyse communication metadata, which can bring to light hidden relationships in a department.”
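The "hidden relationships" point rests on a simple idea: even without reading message content, the metadata of who contacts whom can reveal unexpected ties, such as a frequent channel between people in departments that should rarely interact. The sketch below illustrates that idea with invented records; the names, departments and frequency threshold are assumptions for the example, not a description of any real product.

```python
# A toy metadata analysis: count message pairs and surface frequent
# contacts that cross department lines, using only sender/recipient
# metadata (no message content).
from collections import Counter

def cross_department_pairs(messages, departments, min_count=3):
    """messages: iterable of (sender, recipient) tuples.
    departments: {person: department}.
    Returns frequent pairs whose members sit in different departments."""
    pairs = Counter(
        frozenset(msg) for msg in messages
        if departments[msg[0]] != departments[msg[1]]
    )
    return {tuple(sorted(pair)): count
            for pair, count in pairs.items() if count >= min_count}

departments = {"alice": "trading", "bob": "trading", "carol": "research"}
messages = ([("alice", "carol")] * 4      # frequent cross-department contact
            + [("alice", "bob")] * 5      # same department: ignored
            + [("bob", "carol")])         # below the frequency threshold
hidden = cross_department_pairs(messages, departments)
```

Here `hidden` contains only the alice–carol channel. A compliance team might review such a flag rather than act on it automatically, since frequent cross-department contact is often entirely legitimate.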
AI can also root out negative subcultures in ways that human reviewers cannot, Hodge notes. “This behavioural lens is one of the many ways in which the larger, more sophisticated compliance functions are attempting to improve the efficiency of their surveillance functions while at the same time improving their ability to detect risk.”
Employee-generated data represents a complex issue for some multinational companies, he adds, particularly where some jurisdictions have laws against recording and analysing staff conversations. Despite this, ever more ingenious ways of obtaining employee data are emerging – and the uptake of powerful AI-based surveillance tools seems unlikely to level off any time soon.
For businesses that do want more workforce insights but expect resistance to broader monitoring, the best policy is to be honest about their intentions, says Rachel Tozer, employment partner at Keystone Law.
“Employers should communicate clearly to their staff how they are being monitored, why it’s justified and how the information that’s gathered will be stored and used,” she advises. “Employees should then be mindful that any data gathered from lawful monitoring may be used in disciplinary or capability procedures.”