
Anne Craanen, Research Manager at Tech Against Terrorism
On May 14, 2022, an attacker walked into a supermarket in Buffalo, New York, killing 10 people and injuring three. The perpetrator, an 18-year-old white male, livestreamed his attack on the streaming platform Twitch and, before the shooting, posted a link to a manifesto hosted on Google Docs detailing his motivations. After the attack, the perpetrator's online diary was discovered on the gaming platform Discord. In the immediate aftermath, the content produced by the perpetrator was downloaded, saved, and disseminated by other online users across a range of platforms, including several smaller ones. Most of the tech companies to which the material was uploaded responded quickly by removing it. Furthermore, tech initiatives such as the Global Internet Forum to Counter Terrorism (GIFCT) activated their Content Incident Protocol (CIP), a mechanism to facilitate cross-platform sharing and removal of material during crisis situations.
Whilst we should be cautious in drawing conclusions from his manifesto alone, it seems clear from that document, and from other facts about the perpetrator's life, that the attack was racially motivated. The attack was declared an act of domestic terrorism, the term often used in the US to describe what is elsewhere called far-right or white supremacist terrorism. Commenting on the attack, US President Joe Biden said that “[h]ate must have no safe harbor. We must do everything in our power to end hate-fueled domestic terrorism.” However, in this piece I will argue that the US and other states are neglecting a crucial tool for facilitating action against terrorist use of the internet, including by racially and ethnically motivated violent extremists: designation.
Removing Terrorist Content Online
Terrorists’ effective use of the internet has placed the battle against online terrorist content at the forefront of several global and national counterterrorism initiatives. Tech companies, whether large or small, are overwhelmingly willing to counter terrorist use of their platforms. Tech Against Terrorism, a UN-backed initiative, supports these companies in countering terrorist use of their services whilst respecting human rights. In our experience, tech platforms are significantly more likely to remove terrorist content if it is produced by a designated terrorist group, since designation removes one level of uncertainty in platforms’ decision-making and provides a clear legal mandate for removal. For example, content we have reported to tech companies via our Terrorist Content Analytics Platform (TCAP), which flags content produced by designated terrorist entities, has a removal rate of 94%. We assess that this is because the TCAP establishes a clear link between designation and the illegality of the content produced by designated entities online.
In addition to being a mechanism with tangible positive impact in the struggle against terrorist content online, this is also a question of principle. It is essential that counterterrorism measures, whether online or offline, are grounded in the rule of law. As such, it should be the responsibility of governments to adjudicate, with adequate human rights safeguards in place, what constitutes illegal terrorist content. Currently, democratic governments are struggling with this, which means that tech platforms are left to make such decisions on their own. Whilst the larger tech platforms, which are able to outsource such work to counterterrorism expert teams, might be able to make these decisions, smaller platforms will struggle. Designation is therefore a crucial instrument at governments’ disposal for facilitating improved action against terrorist use of the internet in a way that upholds the rule of law.
Over the last year, Tech Against Terrorism has analyzed 13 global designation systems to investigate which terrorist designation processes are employed, what implications the designation of a terrorist entity has for online content, what human rights safeguards exist, and how global designation processes can be improved to provide clearer guidance on the implications for online content and, as a result, strengthen online counterterrorism efforts.
Lack of Consensus
First, we found that there is often little to no consensus across countries’ designation lists, which creates further confusion regarding illegality across jurisdictions. Designation, banning, dissolution, proscription, and political proscription are all terms used for government listing of terrorist organizations, each carrying different implications. This poses a challenge to tech companies, especially smaller ones that might not have the expertise to interpret different jurisdictions’ legal systems. We therefore recommend a separate listing process for the designation of terrorist groups, one that does not conflate these listings with groups that are anti-constitutional, political, or of any other status. This would ensure that counterterrorism measures, whether online or offline, are not applied to groups that are not terrorist.
Different Legal Designations
Second, we found that countries currently differ in the extent to which online content produced by, or in support of, a designated terrorist group is illegal. The UK’s Interim Regime, for example, suggests that content produced by a designated terrorist entity that leads to a terrorist offence should be considered illegal; however, this guidance is currently only advisory. Opportunities are therefore being missed to use designation as a tool to prevent terrorist use of the internet, given the demonstrated positive impact it can have in guiding online counterterrorism. This also creates a significant grey area in which tech companies must decide for themselves what content should be classified as terrorist.
In addition, this creates a high risk that individuals and groups may be subject to unjust infringements on free speech, while those engaged in terrorism may still be able to spread their messages online. Therefore, one of our key conclusions is that much could be gained from democratic nation states and supranational institutions clearly stipulating that official content produced by a designated terrorist entity (whether a group or an individual) that leads to a domestic terrorist offence (in nation states’ own jurisdictions) should be classified as illegal. This would ground online content moderation in the rule of law and give tech companies clarity on what constitutes terrorist content, making it easier for them to moderate such material and significantly disrupting terrorist use of the internet.
Skewed Designation Lists
Third, most of the examined countries’ designation lists are heavily skewed towards Islamist terrorist groups, with few or no far-right terrorist groups listed. Canada and the UK have, to date, designated the most far-right groups as terrorist, with 9 and 5 (plus 4 aliases) respectively. Worryingly, the US lacks the legal mechanisms required to designate terrorist entities or lone actors under the country’s definition of domestic terrorism, as there is no legislation in place to do so. The only way for a domestic terrorist, such as the Buffalo attack perpetrator, to be designated would be through affiliation with a Foreign Terrorist Organization (FTO) on the US State Department’s list of terrorist organizations. Because so few far-right terrorist groups are designated, the violent extremist far-right is able to operate more freely than many designated Islamist groups. It is therefore imperative that nation states designate more far-right terrorist groups, to accurately reflect and respond to the threat stemming from national and transnational far-right terrorism. Only then can designation be used to counter the online content produced by far-right terrorist entities.
Outdated Lists
Fourth, we found that lists are often inaccurate, with disbanded or renamed organizations still listed under their previous titles. This is likely due to a lack of regular review processes, and it hinders counterterrorism efforts both offline and online. We recommend that governments establish regular review processes to ensure that lists are accurate in scope and kept up to date, and that these reviews take place with the input of civil society representatives, counterterrorism specialists, and human rights lawyers.
Opaque Appeal Mechanisms
Fifth, opaque appeal mechanisms present considerable risks to human rights. When individuals cannot appeal their inclusion on a designation list, they may be subject to stringent restrictions on their rights. There are known cases of individuals being erroneously included on designation lists, which makes this especially problematic. Governments should introduce accessible appeal mechanisms so that groups and individuals can contest their inclusion and erroneous additions can be corrected.
Conclusion
It is essential that online content produced by actors such as the Buffalo attack perpetrator is countered in a way that respects human rights, and a clear designation policy offers opportunities to do so. As it stands, however, designation systems are arguably outdated: they fail to clarify what consequences the designation of an entity has for the online content produced by that entity, which in turn limits the positive impact designation can have in countering terrorist content online. I therefore recommend that policymakers bring criminal justice, and particularly designation, into the 21st century to counter terrorist use of the internet more effectively and in an accountable, human rights compliant manner. To echo President Biden’s words, we should do everything we can to end hate-fueled domestic terrorism, or any form of terrorism for that matter, and I argue that designation should be one of the priorities in achieving this.
European Eye on Radicalization aims to publish a diversity of perspectives and as such does not endorse the opinions expressed by contributors. The views expressed in this article represent the author alone.