Could ESG concerns threaten national security?

For many it conjures up images of machines replacing human beings in the workplace, but artificial intelligence (AI) could have an even darker side. What if AI were to be misused on the battlefield? What if the militaries of tomorrow were to field robot armies that could decide, through the narrow lens of AI, who lives and dies in the theatre of war? What if the fighter jets of the future no longer needed pilots, relying instead on AI-driven autonomous weapons systems?
Of course, the picture this paints has more chance of playing out in a Hollywood science-fiction film than in near-term reality, but it nevertheless raises important questions about how we should respond to AI in the future.

Take, for example, the so-called tech titans, many of whom must answer to their shareholders. All are deeply wedded to environmental, social and governance principles, or ESG for short. Whether those in charge actually believe in ESG is not the point. The point is that it makes them money and helps them attract the brightest staff. After all, who wouldn’t want to be paid a lucrative salary to work for an ethically minded company?

Big tech rethinking national security projects

But there may be a downside to ESG. Could it be, for example, that a set of standards that openly encourages businesses to champion social good is actually putting global security at risk from rogue states? Take Microsoft. In February, the company’s employees urged its chief executive not to honour a $479-million contract to make 100,000 augmented reality headsets for the US military.

Elsewhere, Google bowed to pressure after more than 3,000 of its staff expressed dissatisfaction at the company’s decision to work with the Pentagon on Project Maven. Google employees feared Maven, which uses AI to analyse imagery from the battlefield, could one day be used to improve the accuracy of drone strikes. Last June, Google announced it would not renew the contract.

But there’s the rub. Many tech giants not only operate an ESG business model, but also appear to have a firm stranglehold on research and development. Ewan Lawson, senior research fellow at the Royal United Services Institute (RUSI) and an expert on cyberwarfare capabilities, says: “It’s impossible to avoid the fact that the big tech companies have a critical role to play in considering future national security challenges, including AI, given their role in developing the sort of technologies which can and will be exploited.

“The companies and their employees may have ethical concerns and governments need to ensure, therefore, that they engage openly about the purpose of any programmes that involve AI for national security purposes. That said, there are plenty of small tech startups that could be encouraged in this space if the larger companies, or their employees, decided it wasn’t a part of business they were interested in being involved in.”

Is big tech leaving a gap in the market for rogue players?

But in an age of data, some believe the tech titans’ grip on that data makes them indispensable. Could their reluctance to develop AI systems for the military therefore give rogue states, whose state-sponsored technology companies have no such qualms, a clear advantage over Nato armies on the battlefield?

Professor Alan Woodward, computer security expert at the University of Surrey, does not think so, at least not yet. He explains: “There’s no doubt that this presents a problem for armies in the West, especially when you consider that many of them, including the US and UK militaries, rely on commercial contractors to supply them with technology and high-tech weaponry.

“But Google not wanting to be in ‘the business of war’ does not put our national security at risk in the long term, providing Western governments using the commercial off-the-shelf [COTS] system take urgent steps to rectify the situation in the next two years. If they do nothing, however, then it could give China, a country that wants to be the world leader in AI by 2030, and Russia the upper hand.”

Kevin Curran, professor of cybersecurity at Ulster University, agrees that a short interregnum will not endanger global security. On the contrary, he says, it could actually make us all safer.

Professor Curran explains: “Google pulling out of a military contract may cause a financial and logistical headache for the Pentagon in the short term, but the US, UK and Nato countries should see it as an opportunity to get rid of the COTS system, which is expensive, and develop their own AI software. Furthermore, operating systems that work well in civvy street aren’t always transferable to the battlefield.”

Government needs to be more transparent with tech partners

Take the Trident missile system installed on Royal Navy Vanguard-class submarines, for instance. It runs on a bespoke version of Windows XP, which Professor Curran says “makes it much more susceptible to malware than a military-built operating system”.

It is a view shared by Professor Woodward, who adds: “When private civilian contractors build software and hardware for the military, the systems have to be ‘hardened’ before they can be used on the ground. This adds an extra step to the process and, at a time when armies are being cut, removing it could free up valuable funds to be spent elsewhere.”

But is it realistic to think the armies of tomorrow can become self-reliant and build AI systems in-house? They may be able to construct their own operating systems and software, but with “data being the new oil” and the tech giants holding a monopoly on it, what if these leviathans choose to restrict licences to civilian use only?

Professor Woodward says: “I think they would be naive to do this. If they did, rogue states would simply find a way of circumventing the restrictions. What’s more, the companies responsible for the curbs would probably never know and, even if they did find out, there would be very little they could do. So I don’t see the data giants imposing controls on who can and cannot access data.”

Instead, if the tech behemoths and the military want to continue collaborating, RUSI’s Mr Lawson says that “the onus is on governments to make the tech giants part of the conversation”.

“Traditionally, the UK and US have been very secretive regarding their cyber-capabilities. That needs to change. No one’s asking them to reveal what they do. Instead, they just need to enter into a more open dialogue with large private sector data companies and involve them in deciding what future safeguards we need to put in place and why.”