Tech giants are called to account on extremist material

The relationship between the public and private sectors must be fine-tuned to counter extremist material online

The Conservative Party Conference, at the start of the month, will go down as the most disastrous by a ruling party in modern British history, thanks to prime minister Theresa May’s calamitous speech.

While the chronic coughing, falling slogan letters and a P45-wielding comedian combined to besmirch Mrs May’s keynote address, hogging headlines and sparking plots to oust the Tory leader, a number of important statements from other cabinet ministers were drowned out by the torrent of ridicule. Not least home secretary Amber Rudd’s announcement that those who repeatedly view terrorist-related content online, on social media channels and elsewhere, could face up to 15 years in jail.

“I want to make sure those who view despicable terrorist content online, including jihadi websites, far-right propaganda and bomb-making instructions, face the full force of the law,” said Ms Rudd, the day before the prime minister’s nightmare speech. The move, designed to tighten the law tackling radicalisation, is part of a review of the government’s counter-terrorism strategy triggered by an alarming increase in terrorist attacks on British soil that have pockmarked 2017.

In mid-June, Mrs May and her French counterpart Emmanuel Macron vowed to penalise tech behemoths, such as Facebook and Google, if they fail to stymie online radicalisation. This warning arrived days after the London Bridge attack, the third of a trio of terrorist incidents within a horrific month-and-a-half period in which 32 innocents were killed and hundreds of others suffered life-altering injuries. Islamic State claimed responsibility for all three attacks and in every case the murderers left a digital paper trail hinting at their diabolical intentions.

Forensic investigators at the scene of the London Bridge attack: the government has reviewed its counter-terrorism strategy following an alarming increase in terrorist attacks on UK soil

While the prime minister blamed tech giants for allowing terrorist ideology “the safe space it needs to breed” online, alongside Mr Macron she said: “We are already working with social media companies to halt the spread of extremist material and poisonous propaganda that is warping young minds. We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning.”

In August, Ms Rudd was dispatched to Silicon Valley, where the biggest tech giants roam, and attended the inaugural meeting of the Global Internet Forum to Counter Terrorism in San Francisco. “Extremists have sought to misuse your platforms to spread their hateful messages,” she said to delegates. “The responsibility for tackling this threat… lies with both governments and with the industry. We have a shared interest – we want to protect our citizens and keep the free and open internet we all love.”

At the forum, the home secretary highlighted that the UK’s Counter-Terrorism Internet Referral Unit had been responsible for taking down 280,000 pieces of terrorist content since 2010 and had also deactivated millions of accounts. But is that enough, and does the government really have a meaningful strategy to limit and harness the power and data generated by the tech giants?

Not by a long chalk, according to Luke Vile of 2-sec, a London-headquartered cybersecurity consultancy. He should know: his organisation works with government departments and National Health Service trusts as well as private equity firms and smaller businesses.

There is no way a government department is going to be able to understand either the terminology or the sheer scale of the operations

“The fact is, and without apportioning any blame, there is no way a government department is going to be able to understand either the terminology or the sheer scale of the operations,” the cybersecurity operations director says. “They simply don’t have the resources or tech-savvy experts required to comprehend what it is they are looking at regulating. The government is massively behind the curve on this.

“There is no internal or external plan for what is an acceptable limit of data and the difficulty is these boundaries are being pushed back all the time. Because of that there is a frontier mentality; it is like the Wild West when it comes to data. The top tech companies are far too powerful. They are dictating how they are managed to the regulators, the public sector and governments.”

So how can the government improve this situation? “Firstly, they need to educate the public better and explain the downsides of metadata collection,” says Mr Vile. “The key to this lies in earlier collaboration with tech companies, though that is really difficult – the government is not finding out about potential problems until the products are off the shelf.

“Imagine if a car manufacturer designed, built and then sold a car that the government didn’t get its hands on until six months later to work out whether or not it is safe to drive. It would never happen. Data is an inert topic that doesn’t physically hurt anyone. As a knock-on effect, the job of the public sector is made much harder because, to continue the metaphor, by the time people are driving around in their cars they are reluctant to hand back the keys.”

Nothing happens locally in the tech world, so any government action would need to be supranational

Marco Rimini, chief development officer of Mindshare Worldwide, agrees that earlier collaboration is critical. He says: “The technology giants control the data that fuels the 21st-century business model. The use of this data should be informed by citizens through their governments rather than just by shareholders.

“Governments need to direct the tech giants on behalf of their citizens or the tech giants will direct the governments. This ‘directing’ can be done as trench warfare or as a negotiation. A negotiation may result in a win-win; a war definitely will not.”

Mr Rimini believes it is a “human right to control our own digital identity” and adds: “In theory you could ask Apple, Facebook or Twitter to sign your terms and conditions. It’s important, though, to note that nothing happens locally in the tech world, so any government action would need to be supranational.”

The tech companies insist they are doing their best to halt radicalisation. In September, the 11th biannual Twitter Transparency Report underlined that the micro-blogging site’s “continued commitment to eliminate such activity from our platform has resulted in an 80 per cent reduction in accounts reported by governments compared to the previous reporting period of July 1, 2016 through December 31, 2016”.

It continued: “Notably, government requests accounted for less than 1 per cent of account suspensions for the promotion of terrorism during the first half of this year. Instead, 95 per cent of these account suspensions were the result of our internal efforts to combat this content with proprietary tools. We have suspended a total of 935,897 accounts for the promotion of terrorism in the period of August 1, 2015 through June 30, 2017.”

Facebook, too, is using “behind-the-scenes” artificial intelligence to thwart terrorist communications and recently hired 3,000 extra staff to moderate posts that break the law or community guidelines. “We agree with those who say that social media should not be a place where terrorists have a voice,” according to a recent Facebook blog post.

However, the acknowledgement, deeper in the blog, that “it is an enormous challenge to keep people safe on a platform used by nearly two billion every month, posting and commenting in more than 80 languages in every corner of the globe” is telling. It illustrates the mammoth task both the tech giants and governments face to eradicate poisonous propaganda and worse.