Summary
This article examines the ways in which extremist rhetoric spreads on social media and offers strategies for combating online extremism and misinformation.
Over 4 billion people used social media platforms in 2021, among them 87% of political extremists, who use social media to spread misinformation and promote their agendas. While social media does not cause polarization, several factors make it a fertile environment for extremism and misinformation: the lack of content moderation and oversight, the opportunities for rapid social connection, engagement-promoting algorithms, the ability for users to remain anonymous, and the presence of news and other political media.
Domestic and foreign terrorists and other “threat actors” routinely use social media to publish their plans, engage users in conversation, recruit young and impressionable followers, draw traffic to their websites, seek financing, publicize violent acts, and claim responsibility for terror attacks. For example, social media was used to coordinate and publicize the January 6, 2021 attack on the U.S. Capitol, the 2021 Patriot Front march, the 2018 Pittsburgh synagogue shooting, several anti-police protests in 2020, and the 2019 New Zealand mosque attack, among other incidents.
Social media is likely responsible for the increase in support for political violence since 2017: in a 2021 survey, 20 percent of Republicans and 13 percent of Democrats said political violence is justifiable. Politically motivated domestic terrorism, cybercrime, and other attacks have reached a 25-year high, with more than one-quarter of incidents attributed to white supremacists. Attacks have especially targeted people of color, immigrants, LGBTQ+ people, and Jewish people, as well as religious institutions, government buildings, and abortion clinics.