The European Commission has proposed a new regulation aimed at strengthening the fight against terrorist content on the internet. Its main innovation is a catalogue of very specific obligations for companies providing services, imposed under the threat of heavy penalties. With this move, the EU is abandoning its previous approach, which was centred on self-regulation. The change of position reflects the declared frustration of European governments with the persistent availability of terrorist content on the main internet platforms.
While I share the concerns behind this initiative, I not only doubt that it will achieve its stated objective, but also fear that it could end up damaging innovation in the digital sector.
Terrorists have shown enormous adaptability in cyberspace. They know how to exploit the potential of new information technologies, and they have withstood sustained pressure on their online presence from states, companies and internet users.
Nor is this a new phenomenon. The average citizen's perception is that the eruption of terrorist content on the internet occurred in parallel with the rise of large social networks such as Facebook or Twitter and video services such as YouTube. However, hardly anyone remembers that only fifteen years ago internet chat rooms, such as Paltalk, were among the main spaces for online radicalization. Nor do people remember the hegemony exercised by a handful of internet forums as platforms for the diffusion of propaganda and as spaces in which a global community of thousands of radicals interacted.
Today, there are no jihadists in the chat rooms simply because nobody uses them. These services soon became obsolete and were absorbed by other applications that offered the same functionality, along with a new catalogue of options related to connectivity and multimedia content. Jihadist forums have been languishing for years with hardly any activity as a consequence of the exodus of their users towards the most popular internet spaces.
One of the main lessons of this process is that the force behind this change in terrorist strategy was neither the direct pressure exerted by those who struggled for years to take down these websites and accounts with radical content, nor the implementation of an international regulatory framework involving companies in the effort (no such framework existed). The main driver was instead the terrorists' need to be present wherever their audience is.
In fact, jihadism moved onto social networks despite the risk of opening new spaces on platforms where maintaining anonymity was far more complicated. Terrorists have always been clear that, when forced to balance their security against publicity, the priority is publicity. The opposite choice would have turned them into a marginal community, cornered in a few internet forums that nobody visits.
This also explains why forecasts of a massive eruption of terrorist content on the Deep Web proved mistaken. A priori, this zone of the internet met all the conditions for becoming a new digital sanctuary for terrorism: a space deliberately created to guarantee the anonymity of its users, where no state is capable of exercising control. However, the most important factor was overlooked: the terrorist presence on the internet is eminently propagandistic in character, and propaganda needs an audience. Today, the jihadist presence on the Deep Web is tiny, for the simple reason that terrorists do not want to waste time preaching in a digital desert.
That said, the main shortcoming of this EU proposal is not its foreseeable ineffectiveness, but the harmful effects it will have on the internet service providers sector.
The proposed regulation offers only ambiguous definitions, both of the actors subject to these requirements and of what counts as terrorist content.
It also lays down highly problematic obligations, such as requiring providers to remove or disable access to terrorist content within one hour of receiving a removal order. This threatens the economic viability of small and medium-sized enterprises, which account for 90% of all companies in the sector. They must bear the cost of maintaining supervision teams large enough to meet these requirements 24 hours a day, 7 days a week. This financial barrier is a dangerous inhibitor of innovation in a sector where a very significant proportion of the most successful products began life as start-ups with limited staff, focused of necessity on service maintenance and revenue generation.
The problem extends even to the internet giants. They do have the capacity to absorb the economic costs, but the immense volumes of data they handle make human review of every item flagged as terrorist content impossible. The only way to meet the deadline and avoid the heavy penalties for failing to comply quickly enough is to automate the entire process, which will generate a large number of errors affecting a right as sensitive as freedom of expression.
The most difficult aspect of this interventionist approach to understand is the EU's timing. Never before have terrorists faced such a hostile environment for spreading their message on the internet. The bulk of their messages survive on the net for only a few hours before being deleted; some are blocked even before they are published. This new scenario has come about not only because of the greater involvement of the large platforms, which have ramped up the infrastructure used to monitor content, but also because of the growing effectiveness of artificial-intelligence algorithms that make it possible to cover a volume of data that would be impossible to control with human supervision alone.
Ultimately, the new regulation prescribes in excessive detail how to intervene in an ecosystem that will no longer exist in a few years' time. Companies will offer services we cannot yet imagine and will be organized internally in very different ways. The terrorists, for their part, will have adapted to the new rules of the game, and the challenges they then pose will have little to do with this regulation. The real risk is not only that the EU will achieve nothing it intended, but that heavy institutional inertia will keep the regulation in force for decades, turning it into a costly obstacle to innovation.
(1) European Commission, "Proposal for a regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online" (12.9.2018). http://ec.europa.eu/transparency/regdoc/rep/1/2018/EN/COM-2018-640-F1-EN-MAIN-PART-1.PDF
(2) European Commission, "State of the Union 2018: Commission proposes new rules to get terrorist content off the web", press release (12.9.2018). http://europa.eu/rapid/press-release_IP-18-5561_en.htm
(3) Torres-Soriano, Manuel, "The Dynamics of the Creation, Evolution, and Disappearance of Terrorist Internet Forums", International Journal of Conflict and Violence, 7:1 (2013), 164-178. http://www.ijcv.org/earlyview/270.pdf
(4) Weimann, Gabriel, "Going Dark: Terrorism on the Dark Web", Studies in Conflict & Terrorism, 39:3 (2016), 195-206.
(5) European Commission, "Impact assessment accompanying the document Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online" (12.9.2018). https://ec.europa.eu/commission/sites/beta-political/files/soteu2018-preventing-terrorist-content-online-swd-408_en.pdf
(6) Jourová, Věra, "Code of Conduct on countering illegal hate speech online. Fourth evaluation confirms self-regulation works", EU Directorate-General for Justice and Consumers (February 2019). https://ec.europa.eu/info/sites/info/files/code_of_conduct_factsheet_5_web.pdf
European Eye on Radicalization aims to publish a diversity of perspectives and as such does not endorse the opinions expressed by contributors. The views expressed in this article represent the author alone.