Social Media Platforms Struggle to Contain US Election Disinformation, Study Shows

In the run-up to the U.S. presidential election, a new report has surfaced, raising critical concerns over the efficacy of major social media platforms in curbing disinformation. According to a study conducted by advocacy group Global Witness, platforms such as TikTok, Facebook, and YouTube approved paid advertisements containing blatant falsehoods about the election process, fueling misinformation that could have significant consequences for voter trust and democratic integrity. The findings come at a critical time, just weeks before a tight election battle between Vice President Kamala Harris and former President Donald Trump, exacerbating fears that the disinformation war that rocked the 2020 election is far from over.

Global Witness Study: The Alarming Results

Global Witness, a watchdog known for exposing disinformation and threats to democratic processes, set out to test how well the ad moderation systems of TikTok, Facebook, and YouTube could detect and prevent false election claims. The organization submitted eight ads to these platforms, each containing false information designed to mislead the public on the voting process. The results were worrying: TikTok and Facebook approved several of these ads, despite policies that explicitly prohibit disinformation and, in TikTok’s case, political advertising altogether.

The ads carried flagrant election falsehoods, such as the claim that people can vote online, which is not possible in U.S. elections, and the false suggestion that only individuals with valid driver’s licenses are eligible to vote. Other ads promoted voter suppression by encouraging people not to vote or insinuating that the election is rigged. There were even ads inciting violence against election workers, and one ad threatened a candidate’s safety.

TikTok performed the worst in the study, approving four of the eight ads, despite its policy against political advertising. This revelation is particularly troubling since TikTok, a platform heavily used by younger voters, wields considerable influence in shaping political opinions. Facebook, which has also faced immense scrutiny over its role in spreading disinformation in the past, approved one ad that falsely claimed voting was restricted to those with a driver’s license. YouTube, owned by Google, showed more resilience: it initially approved four of the eight ads but later blocked their publication pending formal identification.

A Clear Failure in Content Moderation

Global Witness was quick to call out the platforms for their failure to prevent these ads from making it past moderation filters. Ava Lee, the digital threats campaign leader at Global Witness, expressed shock and disappointment at the findings. “Days away from a tightly fought US presidential race, it is shocking that social media companies are still approving thoroughly debunked and blatant disinformation on their platforms,” Lee said.

Lee added that the dangers of electoral disinformation are well-known by now, especially after the tumultuous events following the 2020 election, when former President Trump and his supporters spread unverified claims of widespread voter fraud, ultimately leading to the January 6 Capitol insurrection. “In 2024, everyone knows the danger of electoral disinformation and how important it is to have quality content moderation in place,” Lee said. “There’s no excuse for these platforms to still be putting democratic processes at risk.”


The 2024 election cycle has been no different, with disinformation continuing to pose a significant threat to voter confidence and participation. Researchers have warned of growing threats from both domestic actors and foreign influence operations that are designed to destabilize the electoral process. The fact that social media platforms still struggle to effectively moderate such content raises questions about their readiness to safeguard democratic processes during one of the most pivotal elections in U.S. history.

TikTok and Facebook Respond

In response to the findings, both TikTok and Facebook provided statements, though their defenses did little to alleviate concerns. A TikTok spokesperson acknowledged that the ads were “incorrectly approved during the first stage of moderation.” They reiterated the platform’s stance that political advertising is prohibited and said that the company would continue to enforce this policy moving forward.

Meta, the parent company of Facebook, dismissed the findings as unrepresentative of its larger moderation efforts. A Meta spokesperson argued that the study was based on a small sample size of ads and was “not reflective of how we enforce our policies at scale.” Meta has consistently touted its commitment to protecting the election process through improved content moderation and policy updates. The spokesperson emphasized that “protecting the 2024 elections online is one of our top priorities.”

However, Meta’s record with election disinformation has been less than stellar. In 2020, Facebook faced heavy criticism for failing to prevent the spread of false election claims that amplified conspiracy theories and stoked civil unrest. Despite these critiques, the platform has introduced new measures for the 2024 election, including a pledge to block new political ads during the final week of the campaign to prevent last-minute disinformation. Yet, the approval of even one false ad demonstrates how vulnerable the system remains.

YouTube: A Stronger Approach?

YouTube fared slightly better in the Global Witness study. While it initially approved half of the submitted ads, the platform enforced a more stringent identity verification process: ads were ultimately blocked until the advertiser provided formal identification, such as a passport or driver’s license. This step, according to Global Witness, created a “significantly more robust barrier for disinformation-spreaders” compared to TikTok and Facebook.

Google, YouTube’s parent company, introduced this verification process in the wake of the 2020 election to address the rampant spread of disinformation. The platform also announced that it would temporarily pause all election-related ads after the last polls close on November 5 to limit confusion and prevent misleading ads from circulating during the vote-counting process, which could extend for days after Election Day. Google is also extending a similar approach to election-related content, applying heightened scrutiny to videos and user-generated content that could mislead voters.


The Role of Social Media in Modern Elections

The Global Witness study underscores growing scrutiny of social media platforms’ role in elections. Platforms like Facebook, TikTok, and YouTube have become battlegrounds for the flow of information, making them both powerful tools for communication and potential threats to democratic systems. Disinformation can spread rapidly on these platforms, reaching millions of users within hours. As a result, these companies are under increasing pressure to ensure that their systems are fortified against harmful content that could undermine electoral integrity.

Disinformation is not a new problem in the U.S. electoral process. However, the proliferation of social media has taken this issue to new heights, allowing bad actors to exploit vulnerabilities in platform policies and spread falsehoods with little accountability. As seen in the 2020 election, unchecked disinformation can have dangerous consequences, eroding public trust in democratic institutions, encouraging voter suppression, and even leading to violence.

Conclusion

With less than a month remaining until the 2024 U.S. presidential election, the Global Witness report serves as a stark reminder of the continued challenges social media platforms face in addressing disinformation. While companies like Meta, TikTok, and Google have introduced various measures to curb election-related falsehoods, the study’s findings demonstrate that significant gaps remain. As voters prepare to cast their ballots, the role of these platforms in protecting democratic processes has never been more crucial. If left unchecked, the spread of disinformation could once again threaten to undermine the very foundation of the electoral system.
