March 20, 2023

Social Media, Information Disorder, and Biometric Manipulation

By: Laura K. Donohue

Information disorder presents one of the most serious national security risks faced by the United States. It extends beyond disinformation (i.e., false or misleading information intentionally created and disseminated) and misinformation (i.e., information that merely lacks veracity) to include the manipulation of even accurate data to serve adversaries’ political ends. It threatens to undermine public trust and U.S. political, financial, and societal structures by promoting siloed, contradictory, and at times patently false narratives about American democratic institutions. The construction of multiple realities further exacerbates societal tensions and increases the risk of civil unrest. The failure of the United States to respond effectively to such manipulation has set the country on a path that will only become more dangerous as virtual reality and machine learning come of age.

This article begins by describing the contours of the contemporary environment. Information disorder is endemic to what we now think of as third generation social media. It stems in part from the structure of social media and relationships among users as well as the algorithms employed by social media platforms, which wield enormous power. It relates to the underlying business models adopted by these platforms and legislative and judicial failures to address those models as the root causes of information disorder. Active measures traditionally adopted by adversaries to use information as an influence weapon differ significantly from those employed in the current environment. What we are seeing is different in kind, not degree, from what has come before. The article ends by positing that information disorder will become even more pronounced in fourth generation social media, as virtual and augmented reality and machine learning become increasingly common.

The Evolution of Third Generation Social Media

Third generation social media differs from prior iterations of online communications in ways that directly contribute to increasing information disorder. See generally Andres Montero & Borja Belaza, Towards an Augmented Reality Fourth Generation Social Networks (2017) (highlighting four generations of social media). In the 1970s and early 1980s, the first generation of online connection centered on user-to-user interfaces. Created in 1978, the Bulletin Board System (BBS) allowed users to access a central system, where they could post messages and upload and download software and files. Scott Gilbertson, Feb. 16, 1978: Bulletin Board Goes Electronic, Wired, Feb. 16, 2010. BBSes, accessed via modems, tended to be subject-specific and run by individuals who worked within, or had a passion for, the area of focus. Discussion fora emerged, sped by Telnet, a protocol developed in the early 1970s for online text communications. Telnet Overview, History and Standards, The TCP/IP Guide. Email, too, inhabited the user-to-user space. As with BBS, communication centered on sending and receiving text communications.

During the second generation, the adoption of the TCP/IP protocol caused a sea-change in online networking, moving services to the World Wide Web. Programs like Beverly Hills Internet (BHI) made it possible for online communities to materialize and for users to create and publish their own web pages. GeoCities, the next iteration of BHI, created virtual neighborhoods within which users with common interests could meet. By the end of the 20th century, it had become the web’s third most popular site. Ian Milligan, Welcome to the Web: The Online Community of GeoCities During the Early Years of the World Wide Web, in The Web as History 137 (Niels Brügger & Ralph Schroeder eds., 2017). Social media shifted from user-to-user communications to communities of interest. Users met online to discuss matters of mutual interest and to exchange information. See Benj Edwards, Remembering GeoCities, the 1990s Precursor to Social Media, How-To Geek, Aug. 24, 2021.

Third generation social media appeared as communities of shared interests gave way to the creation of individual profiles as a precursor to online friendship. Instead of seeking out others based on common interests, users began to post information about themselves, curating online personas. Technological advances in hardware, software, and communications, such as the expansion of broadband and the proliferation of mobile devices, enabled the shift. Online engagement became all about forming connections with others and discovering the world. In 1997, SixDegrees forged the way by giving users the ability to build profiles, construct friends lists and school affiliations, and connect with people they already knew. Friendster, launched in 2002, had more than 3 million users by 2003. Grahamites, How Facebook Dominated the Social Network World, Yahoo!News, Dec. 13, 2017. These sites were quickly outpaced by MySpace (2003), 4chan (2003), Facebook (2004), and other networking sites.

What made many of these sites different from what had come before was the way in which they focused on the individual user, instead of on communities or topics. Over time, they began to allow profiles to be linked and user-generated content to be disseminated. No longer did a user need to go and search a common site to engage; instead, algorithms delivered information based on patterns in an individual’s engagement with other users as well as the content the individual posted. “Likes” and “follows” became more than ways of building relationships: they provided insight into the user’s thoughts, beliefs, and predilections. Online identities, in turn, morphed into more than posts, which had defined online identities in second generation social media. In the third generation, identities became curated personalities. Profiles reflected what users wanted others to see and know about them—characters purposefully constructed. Entities too, from businesses to the National Park Service, developed online personas.
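The shift from searching a common site to algorithmic delivery can be sketched in miniature. The following is a hypothetical, simplified illustration: the function names, topic weights, and data shapes are invented, and real platform ranking systems are proprietary and vastly more complex. It shows only the basic mechanism by which engagement signals such as “likes” and “follows” can steer what a user sees next:

```python
# Hypothetical sketch: engagement-driven feed ranking. All names,
# weights, and data structures here are invented for illustration.

def engagement_score(user_history, post):
    """Score a post by how well its topics match the user's past engagement."""
    score = 0.0
    for topic in post["topics"]:
        # Each prior "like" or "follow" on a topic nudges future posts
        # about that topic upward in the user's feed.
        score += user_history.get(topic, 0)
    return score

def rank_feed(user_history, candidate_posts):
    """Return posts ordered by predicted engagement, highest first."""
    return sorted(candidate_posts,
                  key=lambda p: engagement_score(user_history, p),
                  reverse=True)

# A user who has engaged heavily with one topic sees ever more of it --
# the self-reinforcing loop behind "echo chambers."
history = {"politics": 12, "gardening": 1}
posts = [
    {"id": 1, "topics": ["gardening"]},
    {"id": 2, "topics": ["politics"]},
    {"id": 3, "topics": ["politics", "gardening"]},
]
feed = rank_feed(history, posts)
```

Even in this toy version, the feedback loop is visible: every interaction updates the history that ranks the next feed, so past engagement compounds into future exposure.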

Photos, videos, music, and other audio-visual media deepened the experience of users and leant seeming validity to posts. As information creation and dissemination were democratized, confidence in the veracity of online information increased. Here was a person or entity that a user “knew,” relating information in the same way that a friend, acquaintance, or authority in the real world might. Along with the shift to curated personalities came the ability to communicate instantly, using video calls or instant messaging, and mass access to information. Online identities and information posted by them could now “go viral”: with the dawn of fiber optics, they flew across the Internet at the speed of light.

Facebook served as one of the first platforms to enable multiple personality formation and mass information dissemination. It was quickly followed by sites like LinkedIn and Google+. Paralleling personality sites were ones focused on microblogging (e.g., Twitter, Plurk, and Tumblr), photo sharing (e.g., Instagram, Snapchat, and Pinterest), and music and video (e.g., YouTube, Vine, and TikTok). Some sites, like Reddit, continued to form around communities of interest, but the manner of interaction on third generation platforms altered the online environment. Meanwhile, the construction of a two-dimensional reality within which users could develop online personas offered social media platforms a way to monetize user behavior: the more information that could be gathered about each user, the more information the platform could sell to third parties to exploit users’ interests and deliver user-targeted advertisements, prompting users to purchase goods and services. And it was not just information about individuals, and the ability of third parties to reach them, that deepened: information about on- and off-line communities could also be collected, analyzed, and mapped to a greater extent than ever before, fueling the rise of social network analytics and “big data” as fully-fledged academic disciplines.

What made this shift to online identities all the more remarkable was the power being amassed by social media platforms. The large-scale collection of data gave companies the ability to track users’ behavior, map their relationships, gain insight into inter-personal and community dynamics, and influence users’ future behavior. All of this could be commoditized, entirely outside the users’ view. Platform power traversed geopolitical boundaries. By January 2022, there were 4.62 billion social media users globally. Hootsuite & We Are Social, Digital 2022 87 (2022). 2.91 billion of those users were on one platform: Facebook. Id. at 118. Instagram and WhatsApp (now controlled by Meta, the parent company of Facebook) reached another 1.48 billion users and 2 billion users respectively, while Facebook Messenger hit almost 1 billion users. Id. at 142, 167. Meta is not the only behemoth: YouTube attracts 2.56 billion users globally. Id. at 132. These and other platforms have enormous reach into individual countries. 80.9% of the U.S. population uses social media. Id. at 92. Facebook advertising reaches 63.7% of all U.S. citizens over the age of 13. Id. at 123. The numbers are even higher in other countries: Facebook reaches 100% of the population in the Philippines and around 90% of the population in Vietnam and Mexico. Id. The online revolution in human interaction means that social media platforms have tremendous knowledge about billions of people worldwide and their relationship with others. Scientia est potentia.

The amassing of knowledge about the many in the hands of the few allows platforms to control what individuals learn about the world by both granting and denying access to information. Using “community standards” created by the platforms themselves, they curate content, removing hundreds of millions of posts. Facebook’s hate speech policy, for instance, deletes material considered “a direct attack against people—rather than concepts or institutions—on the basis of what we call protected characteristics.” Facebook Community Standards, Hate Speech, Policy Details. Between October 2020 and September 2021, Facebook removed 105.9 million posts, only 3% of which had been reported by users. Id. at Data. The company also deletes “content that is particularly violent or graphic.” Facebook Community Standards, Violent and Graphic Content, Policy Details. Between October 2020 and September 2021, Facebook removed 106.5 million posts, only 0.5% of which had been reported by users. Id. at Data. By curating content and using processes largely invisible to users, social media platforms set the contours of contemporary debate about some of the most important issues societies face.

Beyond curating content, social media platforms have the power to silence individuals, hampering their ability to communicate ideas or to engage in political, economic, social, or even scientific debate. This is more than just removing posts: they can de-platform users who fail to adhere to community standards, effectively banishing them from a particular platform’s digital society. Perhaps the most infamous example involves then-President Donald Trump. Shortly after the riot at the U.S. Capitol on Wednesday, January 6, 2021, Twitter instituted a 12-hour ban on Trump’s personal account. Late Friday night, the company made the ban permanent, shut down his personal account, blocked his official campaign account and official presidential account, and deleted his posts on other accounts. On January 6, Snapchat locked the president’s account, which had 2 million followers. During the riot, Facebook took down a video from the president and, that evening, instituted a 24-hour ban of his account, which had 33 million followers. Thursday morning, Facebook announced an indefinite suspension of the president’s account, which became a two-year ban. See generally Melina Delkic, Trump’s Banishment from Facebook and Twitter: A Timeline, N.Y. Times, May 13, 2022. See also Rachelle Akuffo, Donald Trump Reinstated on Facebook, Instagram, and Twitter, Yahoo!Finance, Jan. 26, 2023. Whatever one may think of the events leading up to Trump’s removal from certain social media platforms, the sheer power exercised by the platforms cannot be ignored: they effectively silenced a democratically-elected president while he was in office, at least to the audiences of the platforms. The example underscores the enormous influence that social media wields over public communications.

The Dangers of Third Generation Social Media

The national security risks presented by third generation social media are significant and relate to how third generation social media is constructed as well as how it can be weaponized. In addition to the knowledge and power amassed by social media platforms, there is the risk that adversaries and bad actors can harvest user data and use the algorithmic structures and advertising functions of social media to manipulate users into thinking, believing, and acting in certain ways. Countries and non-state actors can employ disinformation, misinformation, and even accurate information to accomplish their aims. While one might think that such operations merely amount to digital propaganda (i.e., online propagation of an idea or narrative intended to influence others), nothing could be further from the truth. Contemporary active measures differ in fundamental ways from the propaganda methods that have come before.

First, the multi-sensory aspect of third generation social media and the apparent validity of information posted by familiar online identities heighten the likelihood that users will believe what they are seeing, hearing, and being told online. A post is very different from a leaflet dropped from a plane as it flies overhead. See, e.g., Daniel Engber, I’m Covered in Leaflets! The Secrets of Airborne Propaganda Distribution, Slate, July 18, 2006. In a media-rich environment, numerous approaches can be used to shape messaging in hidden ways.

In the video realm, for instance, information can be presented out of context: the video may not be an accurate representation of what it claims to depict, or it may be a brief clip from a longer video that reflects a very different narrative. The footage may have been edited and rearranged with key information omitted, or disparate videos may have been spliced together to alter the story being told. Images may have been transformed through doctoring (altering the frame speed, cropping, using Adobe Photoshop, dubbing audio, etc.) or fabricated with artificial intelligence to create high-quality fake images. See, e.g., Adam Satariano and Paul Mozur, The People Onscreen are Fake. The Disinformation is Real, N.Y. Times, Feb. 7, 2023. Regardless of the manner in which images or their meaning is altered, the sensory aspect of visual communication strengthens the impression that what the user is seeing must be real. Seeing is believing. Human beings, moreover, accord a greater level of reliability to information from friends, familiar people and entities, and authority figures. In third generation social media, foreign adversaries and bad actors can look like trusted insiders without ever setting foot in the United States.

Russia has been remarkably successful in leveraging social media to influence public opinion in the United States. Online groups, such as Being Patriotic, Secured Borders, and American.made, appear to be domestic conservative groups but have been traced to the Russian Internet Research Agency (Агентство интернет-исследований), as have numerous social justice and religious groups, including Black Matters, Don’t Shoot Us, LGBT United, and United Muslims of America. Dep’t of Just., Robert S. Mueller, III, Report on the Investigation Into Russian Interference in the 2016 Presidential Election 22-24 (Mar. 2019). Russia’s goal is not to convince U.S. mainstream media or the population in general of the validity of a claim as it might have been with traditional propaganda. Instead, the aim is to keep the user online and in echo chambers that create and augment mistrust in American democratic institutions.

Second, the structure of third generation social media allows information to be personalized, keyed not just to users’ interests but also to their psychological makeup—how they see the world, process information, and make decisions. Perhaps the best example of psychometric targeting for political gain is Cambridge Analytica, a British political consulting firm that combined misappropriation of data assets, data mining, data brokerage, and data analytics to affect elections in countries around the world. Cambridge Analytica used social media to build a training set for algorithms. It began by paying about 270,000 users between $2 and $5 each to take a personality quiz designed to ascertain the user’s OCEAN profile (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism). See generally Alex Hern, Cambridge Analytica: How Did It Turn Clicks Into Votes?, The Guardian, May 6, 2018; Nicholas Confessore, Cambridge Analytica and Facebook: The Scandal and the Fallout So Far, N.Y. Times, Apr. 4, 2018. These personality traits could then be clustered into distinctions that held across cultures and time. When a user took the quiz on the app, thisisyourdigitallife, the company harvested data not only about the user but also about the user’s online friends, thus turning a few hundred thousand respondents into tens of millions of subjects. From this user data, the company built 253 algorithms to predict user behavior. It then assigned a score to each user based on the five OCEAN personality traits. The scores allowed Cambridge Analytica’s clients to target users with political advertisements based on their psychological propensities. For example, voters in general favor a strong economy, but a “conscientious” user-voter cares about the opportunity to succeed and the responsibility of a job while an “open” user-voter prioritizes the opportunity to grow as a person.
Cambridge Analytica was hired to employ psychometric targeting to influence elections in Mexico, Malaysia, Brazil, Australia, China, the United States, and the United Kingdom. Id.
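A rough sense of how trait-keyed targeting works can be conveyed in a few lines of code. This is a hypothetical sketch, not Cambridge Analytica’s actual system: it assumes per-user OCEAN trait scores have already been predicted from behavioral data, and the message variants are invented for illustration, echoing the economy example above:

```python
# Hypothetical sketch of OCEAN-keyed message selection. Trait scores
# (0-1) are assumed to have been predicted upstream from behavioral
# data; the ad copy below is invented for illustration.

OCEAN_TRAITS = ["openness", "conscientiousness", "extraversion",
                "agreeableness", "neuroticism"]

# The same policy issue (a strong economy) framed differently per trait.
AD_VARIANTS = {
    "openness": "A growing economy means new chances to grow as a person.",
    "conscientiousness": "A strong economy rewards hard work and responsibility.",
    "extraversion": "Join millions building a thriving economy together.",
    "agreeableness": "A strong economy lifts every community.",
    "neuroticism": "Protect your family from economic uncertainty.",
}

def pick_ad(trait_scores):
    """Select the message variant keyed to the user's dominant trait."""
    dominant = max(OCEAN_TRAITS, key=lambda t: trait_scores.get(t, 0.0))
    return dominant, AD_VARIANTS[dominant]

# A user scoring highest on conscientiousness gets the "hard work" framing.
scores = {"openness": 0.4, "conscientiousness": 0.9, "extraversion": 0.3,
          "agreeableness": 0.5, "neuroticism": 0.2}
trait, message = pick_ad(scores)
```

The point of the sketch is that the *issue* never changes, only the framing: the same underlying message is refracted through whichever psychological lens the model predicts will be most persuasive for each individual user.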

Third, in the environment created by third generation social media, adversaries and bad actors can amplify messages in new and powerful ways. For instance, in October 2019, 15,000 Facebook pages with a majority U.S. audience were being run out of Kosovo and Macedonia. Karen Hao, Troll Farms Reached 140 Million Americans a Month on Facebook Before 2020 Election, Internal Report Shows, MIT Tech. Rev., Sept. 16, 2021. Collectively, the pages reached 140 million U.S. users per month and 360 million global users per week. Id. By comparison, Walmart advertising during the same time frame only reached 100 million Americans. Id. Meanwhile, Eastern European “troll farms” ran the most popular Christian American page on Facebook (75 million U.S. users monthly) and the most popular African-American page (30 million users monthly). In these cases, 85-95% of the users never chose to follow the group; they just had information from the page inserted into their newsfeeds. Id.

Adversaries and bad actors can and do use third generation social media to influence elections. They also use social media to exacerbate societal tensions in ways that harm the social fabric. In 2016, for example, Russia used social-media “sock puppets,” “troll farms,” and advertisements to encourage certain users to protest and other users to support Beyoncé’s Super Bowl halftime show. See H. Permanent Select Comm. on Intelligence, Social Media Advertisements. See also Taylor Hatmaker, What We Can Learn From the 3,500 Russian Facebook Ads Meant to Stir Up U.S. Politics, TechCrunch, May 10, 2018. Pro-Beyoncé advertisements targeted users exhibiting “African-American” behaviors, while anti-Beyoncé advertisements targeted individuals with law enforcement or military job titles, such as officer, sergeant, and commander. Id. The goal was to take advantage of a public controversy to deepen societal divisions in America.

Posts like these are emblematic of a much larger national-security threat. Russia is not the only adversary harnessing the power of social media. China has developed and used state and state-affiliated media, “troll farms” and online advertising, cultural centers and events, disinformation, and data collection and analysis to conduct social-media campaigns targeting the United States and Americans. See, e.g., Ryan Serabian and Lee Foster, Pro-PRC Influence Campaign Expands to Dozens of Social Media Platforms, Websites, and Forums in at Least Seven Languages, Attempted to Physically Mobilize Protesters in the U.S., Mandiant, Sept. 7, 2021. Such is the concern that, in late 2022, following an extensive investigation by the Committee on Foreign Investment in the United States into Chinese manipulation of social media and user data, Congress passed, and the President signed, a bill that banned TikTok from federal government devices. See Consolidated Appropriations Act, 2023, Pub. L. 117-328, Division R. See also Sara Morrison, TikTok’s Master Plan to Win Over Washington, Vox, Feb. 2, 2023. Editor’s note: Concerns about TikTok include concerns about its Chinese parent company, ByteDance, and the Chinese government’s access to TikTok’s data and control over the platform. See David McCabe & Cecilia Kang, U.S. Pushes for TikTok Sale to Resolve National Security Concerns, N.Y. Times, Mar. 15, 2023. Like China, Iran engages in active measures. See Emerson T. Brooking & Suzanne Kianpour, Iranian Digital Influence Efforts: Guerrilla Broadcasting for the Twenty-First Century, Atlantic Council, Feb. 11, 2020. Undoubtedly, other nations follow suit.

What makes social media a particularly effective instrument of state power is the level of penetration it affords. Third generation social media provides adversaries with the ability to individualize their approach to each person within society. It provides access 24 hours a day, seven days a week. Propaganda, moreover, can be disseminated via social media at a fraction of what it historically cost. Furthermore, the speed with which it travels makes it difficult to counter with the truth or even simple source attribution. In the interim, social media will have had its engineered and intended effect: siloed conversations, fed by disinformation and misinformation and amplified by bad actors; greater extremism and polarization; and loss of public trust and confidence in government and societal institutions. Public discourse and debate slow and are replaced by anger, frustration, and an increasing willingness to use violence to address what is seen as increasing injustice or threat. American democracy itself hangs in the balance.

The Future of Fourth Generation Social Media

Fourth generation social media threatens to make the situation worse. The coming epoch will be defined by the prevalence of augmented and virtual reality and the advent of artificial intelligence and machine learning. The essential element of the coming age will be the blending of digital and physical worlds through immediate, individualized experience, whether by physically, intellectually, and emotionally experiencing the effects of actions in the online world, projecting the physical world into the online one, or extracting digital characteristics for projection into the physical world.

Augmented reality injects computer-generated sensory data into the physical world via an interface, such as “smart” glasses or mobile device screens. It merges two-dimensional digital reality with the analog reality of the physical world. Unlike third generation social media, augmented reality does not use physical relationships projected or constructed in digital space. Instead, it relies on digital interactions and data that are brought into the physical world.

Virtual reality, a computer-generated representation of a three-dimensional space, provides the cornerstone for the metaverse or metaverses. Perception in virtual reality is mediated through interfaces: a world infused by sight, smell, sound, touch, and even taste makes for an immersive experience. See, e.g., Tommi Laukkanen et al., Virtual Technologies in Supporting Sustainable Consumption: From a Single-Sensory Stimulus to a Multi-Sensory Experience, 63 Int’l J. of Info. Management, April 2022 (discussing multimodal and multisensory stimuli marking virtual reality). That experience, together with the potential to bring the experience into the physical world as a three-dimensional digital reality, will blend universes. The coming 6G mobile networks and decentralized networks will facilitate further construction of such virtual worlds.

Fourth generation social media will allow digital actors to collect even more detailed information on users and to deliver disinformation, misinformation, and even accurate information in a manner that becomes ever more personalized. Social media platforms will be able to take advantage not just of users’ behavior in the metaverse but also of their mental processing speeds, personal biases, and biometric feedback. In this world, biometric manipulation—the collection of individuals’ biometric data, everything from their micro-expressions and pupil dilation to their emotional states when confronted by certain input, and its use to spur individuals to act in certain ways—will be made possible. Knowing how individuals and communities will react can provide a blueprint for the projections of data in the metaverse. Powered by artificial intelligence and machine learning, the potential for widespread societal manipulation, to say nothing of targeted control of political leaders, is enormous.

What humans know is largely a product of their experience. To the extent that such experience is created, curated, and shaped by social media, the very basis for democratic governance—the decisions citizens make—can be manipulated. In many ways, what we have seen thus far in terms of third generation social media is but a glimpse of what is coming down the pike. Whether the country will be able to react in time—protecting user data, ensuring transparency in information origins and manipulation, and countering biomanipulation on both an individual and a societal scale—remains to be seen.

Laura K. Donohue

Scott K. Ginsberg Professor of Law and National Security at Georgetown Law

Laura K. Donohue is the Scott K. Ginsberg Professor of Law and National Security at Georgetown Law, Director of Georgetown's Center on National Security, and Director of the Center on Privacy and Technology. She has written a number of scholarly articles as well as The Future of Foreign Intelligence: Privacy and Surveillance in a Digital Age (Oxford U. Press, 2016), The Cost of Counterterrorism: Power, Politics, and Liberty (Cambridge U. Press, 2008), and Counterterrorist Law and Emergency Powers in the United Kingdom 1922-2000 (Irish Academic Press, 2007). She is a member of the Standing Committee on Law and National Security, Council on Foreign Relations, and Electronic Privacy Information Center and a Senior Scholar at Georgetown Law's Center for the Constitution. She obtained her AB in Philosophy (with Honors) from Dartmouth College, MA in Peace Studies (with Distinction) from the University of Ulster, JD (with Distinction) from Stanford Law School, and PhD in History from the University of Cambridge. 
