Theresa May wants tech firms to remove 'extremist content' faster… but it's not quite that simple


UK prime minister Theresa May has called on the likes of Microsoft, Google, Twitter and Facebook to act faster to remove terrorism-related and extremist content. At the moment, it takes an average of 36 hours to remove content shared by the likes of Isis, and May wants this slashed to just two hours.

But even this is not enough for the government. It wants technologies to be developed -- or refined -- that will identify this sort of content and prevent it from getting online in the first place. Facebook agrees -- its love of AI is well-known -- but the solution to online extremism is not as simple as saying "technology firms need to do more."

May's demand seems to forget one thing: the internet is bloody massive. It's not as though there are a handful of points of entry -- there are endless social media sites, endless video hosts, countless ISPs, numerous web hosting services, message boards, chat tools, the list goes on and on. Then, of course, there is the dark web.

Google recognizes the enormity of the task being asked of technology firms. General counsel Kent Walker said:

The larger problem is you can't necessarily catch everything on the entirety of the internet. The challenge is, while machine learning is a powerful tool, it's still relatively early in its evolution.

There's also the question of what should be defined as "extremist content." In many -- possibly most -- cases it's fairly obvious, but it won't always be clear-cut. Who makes that final decision? Will there be a panel judging content? Will there be a right of reply? As soon as content starts to get categorized as "good" and "bad," the potential for mis-categorization creeps in. There's also the potential for mistakes by automated tools, abuse by governments, human error and bias.

In addition to the practical concerns expressed by technology firms, security and privacy experts voice concern about the powers the government seeks, particularly when it comes to the matter of encryption. Mark James from ESET says:

This subject is one of those which should be easy to resolve; advanced automated intelligence should be able to determine if content is suitable and quarantine it if necessary. The systems seem fairly good at identifying certain content already, but more needs to be done. Currently, anyone, anywhere is able to watch and learn about anything, and that includes terrorism. This content needs to be managed and removed as soon as it's posted -- if it's not possible to stop it from being posted in the first place.

Bill Evans from One Identity is among those concerned about just how the powers sought by the government could be abused, and about the scope for mistakes to be made:

Prime Minister Theresa May's continued call for the private sector of high tech to voluntarily support government's desire to fight terrorists using the internet is a great idea... until you actually think about it. It is so fraught with peril that it boggles the mind.

Let's say the internet giants like WhatsApp, Google and Apple concede and decide to offer what is, in essence, a back door for government to review encrypted communications. And then that capability leaks. How long before the tax bureau starts using it to detect how I spend my money? Or it falls into the hands of hackers who are now hacking my communications?

Then there's the case of false positives. These internet companies will undoubtedly use some form of AI to track what is terrorist communications and what is not. And the next thing I know, the armed forces are breaking down my front door in a case of mistaken communications.

While it's fair to say that there is a certain amount of extremist content slipping onto mainstream platforms, the very fact that governments are already so interested in policing them has pushed extremists to tools and parts of the internet that are less easily touched.

Charlie Winter, senior research fellow at the International Centre for the Study of Radicalisation at King's College London, says:

It is crazy to have this conversation without placing Telegram front and center, because if you strip away the rhetoric, the reality is this: Islamic State supporters don't use Twitter or YouTube like they used to -- Telegram is their new center of gravity.

Google is realistic about the Herculean task being demanded of it:

There is no silver bullet when it comes to finding and removing this content, but we're getting much better. Of course finding problematic material in the first place often requires not just thousands of human hours but, more fundamentally, continuing advances in engineering and computer science research. The haystacks are unimaginably large and the needles are both very small and constantly changing.

Image credit: Scanrail1 / Shutterstock

