June 25, 2020 HUMAN RIGHTS

Political Advertising on Social Media Platforms

by Lata Nott

Political advertising is a form of campaigning that allows candidates to directly convey their message to voters and influence the political debate. By running ads on various types of media, candidates can reach audiences that otherwise may not have been paying attention to the election and build name recognition, highlight important issues, and call attention to the shortcomings of their opponents. 

In the aftermath of the 2016 presidential election, the public became aware of just how powerful political advertising on social media could be. (Image: Natanaelginting on Freepik)

In the past, the vehicles for political ads were newspapers, direct mail, radio, and television. In 2008, Barack Obama became one of the first candidates to use social media advertising in his campaign. That year, candidates spent a total of $22.25 million on online political ads. Since then, online political advertising has exploded—in 2016, candidates spent $1.4 billion on it.

In the aftermath of the 2016 presidential election, the public became aware of just how powerful and game-changing political advertising on social media could be. Brad Parscale, the Trump campaign’s digital strategist, tweeted that its Facebook campaign was “100x to 200x” more efficient than the Clinton campaign’s. The reason for this became clear after whistleblower Christopher Wylie revealed that the Trump campaign’s data analytics team, Cambridge Analytica, “used personal information taken without authorisation in early 2014 to build a system that could profile individual U.S. voters, in order to target them with personalised political advertisements.”

It was also uncovered that some of the ads on social media weren’t coming from candidates at all. A report from the Senate Select Intelligence Committee disclosed that the Russian government spent about $100,000 on Facebook ads in an effort to interfere with the U.S. presidential election. While this might seem like a paltry sum compared to the cost of a television ad, the impact of those ads was amplified by the fact that they were designed to fan division on polarizing issues, such as gun control and race relations, and then targeted toward those most vulnerable to those messages.

As a society, we are still dealing with the fallout from these revelations and trying to determine what kind of controls, if any, should be placed on social media platforms when it comes to political advertising. The debate was reignited in November 2019, when Facebook refused to take down a misleading anti-Biden ad released by President Donald Trump’s reelection campaign. As the 2020 election draws closer, we need to take a look at the policies that social media platforms are implementing for political ads, and what the implications are for our democratic process.

First Amendment and Political Advertising

To understand the challenges of regulating political ads on social media, it’s helpful to look at the history of political advertising in the United States and how it’s been regulated in other forms of media.

There is a long and rich history in our country of candidates lying about their opponents, starting with Thomas Jefferson’s campaign claiming that John Adams was going to take the country to war with France.

Lying in political advertisements is also perfectly legal. This comes as a surprise to some because commercial ads are subject to restrictions that prevent them from making false claims about products or competitors. For example, when Kentucky Fried Chicken tried to claim that fried chicken could be part of an effective diet program in 2004, the Federal Trade Commission (FTC) penalized the company, requiring it to pull the commercials and submit all advertising for FTC review for the next five years.

The same doesn’t hold true for someone running for political office who runs an ad making false claims about their opponent. Why? Because political ads are considered political speech, and First Amendment law protects political speech above all other types of speech. The government has more leeway to penalize or censor commercial speech, but it has very little authority to regulate political ads. The rationale behind this is that voters have a right to uncensored information from candidates, which they can then evaluate themselves before making their decisions at the ballot box.

Because no government agency can impose penalties on a candidate who lies in an ad, the only form of recourse for a victim of a false attack ad is to sue for defamation.

For practical reasons, these lawsuits tend to be rather rare. It’s difficult for candidates for office to succeed in these lawsuits, given that public figures are subject to a higher standard for libel. Just like private plaintiffs, a public figure must establish that false statements of fact were made about them that damaged their reputation. But on top of that, they must prove that the statements were made with “actual malice,” meaning that those who made the ad either knew it was false or didn’t care whether it was true or false. Even candidates who could overcome these hurdles and win their suits may decide that doing so isn’t worth the time and money, especially in the midst of running a campaign.

But let’s say a candidate does want to sue for defamation—who can they sue? Obviously, they can go after the individual or organization who created and paid for the ad, but is the media company that actually distributed the ad to the public also liable? Different rules apply to different mediums of communication.

Newspapers are considered publishers and are liable for the ads that they run. A corollary to this is that they have full discretion over the ads they run and have no obligation to run ads that they don’t want to run—in fact, it is their First Amendment right to make their own decisions about what they will print.

In marked contrast, broadcast radio and television stations cannot pick and choose which political ads they air, at least among candidates for the same office. They can either choose not to run any political ads at all, or they have to run political ads for all candidates who want them. Why? Because the airwaves that broadcasters use are a scarce resource. There can only be so many broadcast stations on the spectrum, and the resulting scarcity creates the danger that some points of view might never be aired. This danger is why the Federal Communications Commission is authorized to place certain burdens on the First Amendment rights of broadcasters in order to ensure that the public is being furnished with diverse ideas and information. Because of this, broadcasters are not liable for the ads that they run.

Cable television channels, meanwhile, aren’t subject to the same regulations as broadcast networks. They don’t have the same unique characteristics that broadcast channels do—they’re not limited in number—which means that they have discretion over which political ads they want to run and which ones they don’t. As a result, they’re also liable for any false ads they run and can be sued for libel.

Political Ads on Social Media

As the newest communications medium to enter the fray, social media has several unique qualities that distinguish it from the media that came before it. Like newspapers and cable television stations, social media platforms are not limited in number—in theory, any number of them could exist. But in practice, a few major platforms dominate the landscape: Facebook (and its subsidiaries WhatsApp and Instagram), Google (and its subsidiary YouTube), and Twitter.

Another quality they have in common with newspapers and cable television stations is that they are under no obligation to run every political ad they receive. Contrary to popular belief, social media platforms do not have to comply with the First Amendment. They are private companies that are free to set their own content policies, and, unlike broadcast stations, there’s no requirement that they offer advertising slots to all candidates.

But unlike newspapers and television stations, social media platforms are not considered publishers at all. They’re treated as providers of interactive computer services, and because of Section 230 of the Communications Decency Act, they’re not liable for what other people post on them. They can’t be sued for allowing false content on their sites or for running false political ads.

The final and perhaps most crucial difference between social media platforms and the mediums that have come before them is that they allow for a practice called “microtargeting.” Microtargeting can be broadly defined as “a marketing strategy that uses people’s data—about what they like, who they’re connected to, what their demographics are, what they’ve purchased, and more—to segment them into small groups for content targeting.” In the past few years, this practice has become particularly controversial when it comes to targeted political ads.

Each of the major platforms has its own policies when it comes to what political ads they will run and what kind of targeting they will allow for them.

Social Media Policies on Misinformation in Political Ads

In October 2019, President Trump’s reelection campaign released a 30-second video ad accusing former Vice President Joe Biden of promising Ukraine funds for firing a prosecutor investigating a company with ties to Biden’s son, Hunter Biden. The Biden campaign objected to this ad and asked various media outlets and platforms to take it down. The responses to this request have shed light on the different approaches that companies are taking to misinformation in political advertisements.

Some social media platforms, such as Twitter, TikTok, LinkedIn, and Pinterest, have sidestepped the issue by banning political advertisements altogether—but it’s worth noting that political ads were never a prominent feature on any of these platforms. The big players in this space have always been Facebook and Google.

Last year, in anticipation of the 2020 U.S. presidential election, Facebook outlined its plan to combat misinformation on the platform, which included flagging content from state-sponsored media outlets and labeling news stories disputed by third-party factcheckers as “false information.” So, it came as a surprise to many observers when the company refused the Biden campaign’s request to take down the Trump campaign’s ad, and in doing so laid out its rather different approach to misinformation in political ads. “Our approach is grounded in Facebook’s fundamental belief in free expression, respect for the democratic process, and the belief that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is,” Facebook’s head of global elections policy, Katie Harbath, wrote in a letter to the Biden campaign. Facebook further explained its position in a blog post: “In the absence of regulation, Facebook and other companies are left to design their own policies. We have based ours on the principle that people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public.”

In contrast, Google has opted for a different approach, explicitly stating that it would not treat ads for politicians any differently from ads for any other product. “Whether you’re running for office or selling office furniture, we apply the same ads policies to everyone; there are no carve-outs. It’s against our policies for any advertiser to make a false claim,” the company stated in an announcement in November 2019.

Nevertheless, the anti-Biden ad can still be found on Google’s subsidiary YouTube. As a Google spokesperson explained, “There’s a difference, in our minds, between what constitutes political hyperbole versus something that could ‘significantly undermine trust in democracy.’ Political hyperbole is not new. There are politicians that exaggerate claims all the time.” While Google’s policy is to remove ads that contain clear and objectively false statements of fact about candidates, the Trump campaign’s ad about Biden traffics in false implications rather than outright falsehoods. As Wired magazine reported,

If we pull apart the specific claims in the video, it’s not so easy to find one that’s provably false. Maybe Joe Biden didn’t “promise” Ukraine the money, but by his own account, he told Ukraine it was conditioned on firing Shokin—a plan that he says he helped develop. Maybe that wasn’t because of Hunter Biden’s role with Burisma, but Shokin was in charge of the office that had opened an investigation into the company a few years earlier. The insinuation might be dishonest, but the constituent pieces are all at least true-ish.

While Facebook has essentially carved out an exception in its own policies for speech in political ads, Google’s policy toward misinformation in political ads echoes the fundamental principles of libel law, which allows plaintiffs to receive compensatory damages for false statements of fact made about them, but not for opinions or insinuations. Practically speaking, this means that all but the most blatantly fraudulent ads are allowed on the platform, leaving voters to determine which insinuations to believe and which ones to dismiss.

Social Media Policies on Microtargeting Political Ads

Leaving voters to make their own decisions about whether or not they believe a politician’s statements isn’t necessarily a bad thing; one could argue that that’s a fundamental part of the democratic process. In an ideal world, the free marketplace of ideas allows the public to access as much information about the candidates as possible, the free press evaluates the candidates’ statements and exposes any falsehoods, and voters discuss the issues among themselves and then make their choices at the ballot box. This is generally how things have played out when it comes to falsehoods in political ads that run in newspapers, on the radio, and on television. Because these ads are pushed out to large and broad audiences, they immediately receive a great deal of public scrutiny.

But social media has a distinctive characteristic that makes it very different from those traditional mediums of communication—it allows for microtargeting. And microtargeting makes it much harder to distinguish real news from fake news. As the chair of the Federal Election Commission, Ellen L. Weintraub, wrote in an op-ed advocating for social media platforms to ban microtargeted political ads, “It is easy to single out susceptible groups and direct political misinformation to them with little accountability, because the public at large never sees the ad.”

As a result, falsehoods in microtargeted political ads may go unchecked—and these falsehoods can have a significant impact on elections.

However, it’s important to note that microtargeting’s impact on democracy isn’t all bad. It allows smaller and less well-funded campaigns to reach voters, because online ads tend to be much less expensive than TV and radio spots. It also enables candidates to home in on real and specific issues that matter to their potential constituents, as opposed to the more vague and generic messages that tend to run on traditional media—this, in turn, can increase voter engagement and turnout.

Facebook and Google presumably weighed both the good and the bad when establishing their policies on microtargeting for political ads but came to strikingly different conclusions. Google’s current policy allows political ads to be targeted only by broad categories: zip code, sex, and age. The platform does allow for contextual targeting, meaning that an ad about, say, immigration policy can be served to a person reading a story about immigration. As Google stated in its announcement of the policy update last November, “this will align our approach to election ads with long-established practices in media such as TV, radio, and print, and result in election ads being more widely seen and available for public discussion.”

Facebook, on the other hand, has taken a much more permissive stance toward microtargeting, opting not to put any limits on how campaigns can target their ads. Instead, it has pledged to offer users more control over how many political ads they see and make its online library of political ads easier to browse—measures that many critics view as doing very little to expose targeted ads to public scrutiny.

Conclusion

The way that we choose to regulate a new form of communication must take into account the unique characteristics of the technology behind it. A few decades ago, the U.S. Supreme Court found that radio and television broadcasters could be penalized for allowing profanities on air. While this kind of punishment for speech would clearly violate the First Amendment if it were imposed on a newspaper, the Court noted that unlike the printed word, broadcast media is pervasive and invasive—it can enter someone’s ears even if they have no part in turning it on.

Social media, similarly, possesses very different characteristics from the media that have come before it. Regulation of political ads on social media, whether by the platforms themselves or by government actors, needs to take into account that allowing candidates to microtarget ads while refraining from factchecking their statements creates an environment where false information can spread unchecked.

Lata Nott is an attorney with expertise in the intersection of law, technology, and expression. She is a Fellow for the First Amendment at the Freedom Forum, an organization dedicated to fostering First Amendment freedoms for all.