European Eye on Radicalization
In parliamentary testimony on 24 October, the top British counter-terrorism police officer Neil Basu painted a gloomy and alarming picture. Terror investigations had reached a record high of around 700, he said. Jihadis remain the biggest concern, making up around 80% of cases, but the far-right threat is increasing and the two sides feed off each other. Basu fears the UK counter-terrorism machine does not match the danger it faces, even though it is running “red hot”.
There was one significant ray of light, though, in an area where many might not have expected to find one. Asked about social media and online extremism, Basu called it “the greatest threat”. But he went on to sound an optimistic note about technology companies in the aftermath of a string of terrorist attacks in the UK in 2017 that claimed 36 lives:
I am glad you’ve been specific about social media providers, because as I’ve said, that’s the greatest threat. So, if I’d been sitting here a year ago, or certainly 18 months ago, prior to the first attack [in Westminster in March 2017], I’d have probably been calling them out. The reality is that, you know, 2017 has been a shock and a watershed year for a lot of people, and I think that includes big tech.
The thinking and approach of the companies have changed, Basu continued:
I think they’ve recognized that it is no longer good enough to say “we’re platforms” when “we’re actually publishers”, and they realize they have got to do something about it.
I am not sure they are as match fit to do something about it as we all thought they might have been. Were they putting enough resources, effort, people into this problem? Not sure they were. I think they’re really stepping up.
We obviously have an ongoing relationship, the government has an ongoing relationship with them. When I say us, I mean counter-terrorism policing and MI5. They are coming much more to the table and doing something about it.
…
None of the big players want any kind of criminal content on their platforms. They’ve been very clear about that. They’re trying to do more about it.
Nonetheless, the companies could try even harder:
My big plea to them is when you’re taking stuff down — and they are now starting to use automated artificial intelligence techniques to do that — it’s OK spotting it and taking it down. But if you don’t read it and you don’t understand what it is and you don’t understand what the underlying threat is and you don’t report it to us, that’s a real gap in intelligence.
Basu cited his predecessor Mark Rowley to illustrate the gravity of this issue:
I think Mark reported to this committee a case in which bomb-making instructions had quite rightly been spotted, flagged and taken down. But they hadn’t reported it. And it took several months for that to come to us, through a post-incident investigation.
Basu went on to describe what he called “utopia” — a situation where terrorist content is never uploaded in the first place:
Effectively anyone trying to upload won’t be able to upload because it hits lots of particular flags. But creating an algorithm for that is incredibly difficult.
Better profiling systems are needed, he added:
What they do need — and this is something we need as well — is a body of research, in the same way that there is for serial murderers or pedophiles. So you’re looking at creating an algorithm that tells you what terrorist behavior is. We need better understanding of what the behavioral triggers would be, to create an algorithm for a company to be able to a) spot when somebody is reengaging or b) spot when stuff is going on.
And I’m not talking about the very obvious. There’s a lot of very obvious material that goes on that is very easy to spot and their robotics can take down. There’s a lot of other stuff that doesn’t quite get to that threshold. And I would like to see them taking extremism that doesn’t cross the criminal threshold seriously.
Smaller platforms are another problem:
Obviously we have lots of problems with much smaller platforms that are very difficult to engage with. But the big companies who have the biggest influence in this area, they have been cooperating.
Basu also revealed the scale and depth of the work of the counter-terrorist Internet Referral Unit:
[It has] engaged with 300 companies since it was founded back in 2010. 308,000 pieces of terrorist ideology removed from the internet. All of that has been done through voluntary co-operation with those 300 companies, of which the vast majority sit on the big platforms. So those big companies, effectively, they have listened when we’ve flagged stuff to them and they have taken stuff down.
Some in the committee room seemed slightly taken aback by Basu’s positive views, even though they were accompanied by important qualifications.
Indeed, one should keep in mind that he has an objective in his work with technology companies:
And actually continually calling them out in public, I’m not sure that’s the best way to get that cooperation long term.
Nonetheless, his positive views on the “watershed” do appear to be based on real progress. In a country facing serious and multiplying threats, this is reassuring.