
Mariana Diaz Garcia works with UNICRI under the framework of the Knowledge Center Security through Research, Technology and Innovation (SIRIO).
Extremist groups have used a variety of technologies to recruit members, spread their ideology, and plan and execute attacks. The internet has long been used by terrorists and other violent extremists as a communication and propaganda tool. These actors now operate across a variety of platforms and occupy distinct online ecosystems. Terrorist and violent extremist content continues to circulate on well-known sites like Facebook, Twitter, and Instagram despite ongoing content moderation efforts.
This is because the popularity and size of mainstream platforms let terrorists and violent extremists reach a broad audience, increasing the impact and reach of their messaging and enhancing their recruitment prospects. In recent years, however, content moderation has pushed extremist actors onto smaller, more niche and less regulated alternatives. Research in the United Kingdom has concluded that online influences, as opposed to offline ones, are playing a progressively larger role in radicalization; more importantly, radicalization now largely takes place online rather than through hybrid or mixed pathways.
Technology should be analyzed as a double-edged sword: it simplifies tasks and improves processes for positive outcomes, while at the same time lending itself to malicious use. In this regard, the development and evolution of technological tools has been quickly exploited by extremist groups, increasing the need to understand the threat in order to develop effective strategies to prevent and counter radicalization leading to violent extremism or terrorism.
The spread of extremist viewpoints has been significantly impacted by the development of the Internet and its capacity to link individuals while facilitating the exchange of ideas and information, particularly during and after the pandemic.
Social media, messaging apps, alt-tech and video-sharing platforms, as well as gaming and streaming platforms, have made the analysis of extremist group members' interactions and techniques more complex, since these tools provide better anonymity, connection mechanisms among members, and alternative forms of communication to avoid detection and enhance the sense of community. Furthermore, these tools constantly evolve alongside other emerging technologies that can be integrated to add features, such as the use of artificial intelligence (AI) to generate media content and text.
Understanding ever-changing technology can be complicated, even when examining something as seemingly simple as social media. However, there are some relevant aspects to consider when analyzing the dynamics of extremist actors online.
First, it is important to recognize the role of algorithms, which can be defined as sets of rules that control how users view data on the network. Social media algorithms rank content (such as posts or videos) by its relevance to the user rather than by when it was published. In other words, regardless of when the information was posted, the algorithms prioritize the content a user sees based on the likelihood that this user will engage with it. On platforms such as Facebook or Instagram, for instance, the posts recommended to users as they scroll through their feed are chosen by algorithms.
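This ranking logic can be sketched in a few lines of Python. The example is a toy illustration only: the `predicted_engagement` scores stand in for the output of a platform's proprietary ranking model, and all author names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # hypothetical posting time (higher = newer)
    predicted_engagement: float  # stand-in for a model's estimated interaction probability

def chronological_feed(posts):
    """Baseline feed: newest posts first, regardless of relevance."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def algorithmic_feed(posts):
    """Engagement-ranked feed: content the user is most likely to
    interact with is shown first, regardless of when it was posted."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("news_outlet", timestamp=3, predicted_engagement=0.10),
    Post("close_friend", timestamp=1, predicted_engagement=0.80),
    Post("fringe_page", timestamp=2, predicted_engagement=0.65),
]

# An older but highly engaging post outranks newer, less engaging ones.
print([p.author for p in algorithmic_feed(posts)])
# → ['close_friend', 'fringe_page', 'news_outlet']
```

The point of the sketch is the swap of the sort key: replacing recency with predicted engagement is what allows emotionally charged or polarizing content to surface ahead of newer, more neutral material.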
Algorithms are relevant when analyzing the spread of extremist narratives, which are often based on disinformation and conspiracy theories. False information can shape the ideologies and actions of hateful extremists, fostering echo chambers and fueling hate crimes.
When false information and conspiracy theories gain exposure and ease recruitment, extremists are encouraged to disseminate more of them. Other technologies can also be adopted to enhance the impact of social media. One example is ChatGPT, an AI language model that interacts with the user in a conversational way. Although this tool has quickly shown multiple advantages, it has also raised concerns about its possible use in the creation and spread of false information.

In January 2023, NewsGuard analysts instructed the chatbot to respond to a series of leading questions based on a sample of misleading narratives. The outcomes confirm fears about how the technology might be weaponized. ChatGPT produced false narratives in the form of in-depth news pieces, essays, and TV scripts. To someone unfamiliar with the issues or subjects covered, the results could appear credible and even authoritative. AI can also be used to create deepfakes, that is, audio, pictures, and videos manipulated to deceive people, and to amplify the spread of disinformation, making it easier to reach more people.
However, AI can also be used to combat disinformation and other content based on false claims, using techniques such as deep learning to detect and classify false information. At the same time, AI chatbots have improved at understanding natural language and can create “personalities” based on the type of information requested, which simplifies the manufacturing of propaganda while reducing the costs and human resources needed to create it.
NewsGuard’s interaction with ChatGPT, in which the chatbot was asked to create disinformation with a far-right extremist narrative.
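The detection side mentioned above can be illustrated with a deliberately simple Python sketch. Real systems rely on deep learning models; here, a bag-of-words cosine similarity against a small database of known debunked claims stands in for such a model, and every claim string and the threshold value are hypothetical.

```python
import math
from collections import Counter

# Hypothetical database of claims already debunked by fact-checkers.
DEBUNKED = [
    "miracle cure suppressed by governments",
    "election results secretly reversed overnight",
]

def vectorize(text):
    """Represent text as a bag-of-words frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_claim(claim, threshold=0.5):
    """Flag a claim if it closely matches any known debunked claim."""
    v = vectorize(claim)
    return any(cosine(v, vectorize(d)) >= threshold for d in DEBUNKED)

print(flag_claim("governments suppressed a miracle cure"))        # → True
print(flag_claim("local library extends weekend opening hours"))  # → False
```

A production detector would use learned embeddings rather than raw word counts, but the principle is the same: score new content against known false material and surface likely matches for review.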
Transnational extremism relies heavily on the ability to connect individuals and ideas, and while extremist networking is not new, the speed at which modern strategies can spread ideologies is unprecedented. Diverse ideological movements now frequently communicate, drawing reciprocal inspiration from one another's conspiracy theories, issuing calls to action, sharing ideas and plots, and even providing direct support.
Terrorist and extremist activity has also been identified in other online environments, such as gaming platforms (Twitch, Discord, Steam, Roblox), livestreaming on social media, and audio streaming platforms (SoundCloud, Spotify, Apple Music, Bandcamp). Extremists do not only take advantage of platforms and their algorithms; they frequently use other techniques, such as repurposing the meaning of emojis and post reactions on social media, and spreading memes.

The double-edged nature of technology has contributed both to the creation of tools that combat the problems found online and to the growth of the risks that some threats present, including the expansion of extremism.
Understanding the role of technology in the expansion of extremism is essential to developing effective counter-extremism strategies. Identifying emerging technologies and fast-changing techniques in the online ecosystem can provide deeper insight into the threat, enabling policymakers, law enforcement, the private sector and other actors to engage in cross-sector collaboration.
A broader perspective on the problem will help identify the multiple fronts that can be addressed, including the ideological, psychological, financial, socio-political and technological aspects entwined in the growth of online and hybrid extremism.
European Eye on Radicalization aims to publish a diversity of perspectives and as such does not endorse the opinions expressed by contributors. The views expressed in this article represent the author alone.