Meta, the parent company of Facebook, Instagram, and WhatsApp, has released an analysis showing that AI-generated content accounted for less than 1% of the election-related misinformation fact-checked on its platforms during the major elections held in more than 40 countries in 2024. The findings come as part of the company's broader effort to tackle misinformation and disinformation on platforms that have become central to modern political discourse.
The social media giant's analysis covered elections in the United States, India, Bangladesh, Indonesia, Pakistan, France, the United Kingdom, South Africa, Mexico, and Brazil, as well as the European Union parliamentary elections. Examining content posted to its platforms during these contests, Meta found that, despite concerns about the proliferation of AI-generated misinformation, such content was a negligible part of the overall problem. The finding carries weight in the debate over artificial intelligence's role in spreading false or misleading information during elections, an issue that has drawn intense scrutiny in recent years.
Meta's data suggests that human-generated content remains the primary source of misinformation around political elections. That insight challenges the narrative that AI-driven manipulation is the main engine of disinformation campaigns, particularly in high-stakes political events. The findings also highlight the persistent difficulty of moderating content at scale, given the sheer volume of posts and interactions during an election period.
A key element of Meta's effort to tackle misinformation has been its collaboration with fact-checking organizations. The company works with more than 40 independent fact-checking partners globally, who help verify content and flag false claims. During the 2024 elections, Meta used a combination of AI tools and human moderators to identify and address misleading or false information that could influence voters.
While AI-generated misinformation was minimal compared with human-generated content, the company acknowledged that it has struggled to moderate content effectively. Meta conceded instances of overreach during the COVID-19 pandemic, when its content moderation policies were criticized as too stringent, particularly on content related to vaccine misinformation and public health guidance. In hindsight, the company said, some of its moderation decisions could have been more nuanced, and it admitted to mistakes in how its COVID-19-related policies were implemented.
Moving forward, Meta has pledged to improve its content moderation practices, particularly in response to feedback from users and third-party organizations. This includes enhancing the transparency of its moderation decisions and refining its use of AI to detect and address harmful content without impeding the free flow of information. The company is also investing in AI systems that can better understand the context of posts, helping to differentiate between harmful misinformation and legitimate political discourse.
Despite the relatively low presence of AI-generated misinformation, Meta remains vigilant in its efforts to combat all forms of disinformation. The company’s latest transparency report comes at a time when governments and regulatory bodies worldwide are considering new laws to address the growing challenges of online misinformation. Meta’s findings provide some reassurance that AI’s role in spreading political disinformation may not be as significant as some experts had feared, but they also underscore the ongoing need for robust moderation practices, both human and AI-driven.
As future elections approach, balancing free expression with the need to curb misinformation will remain a defining challenge for social media platforms like Meta. The company's acknowledgment of past mistakes, and its ongoing efforts to improve, will be key to navigating the evolving landscape of digital misinformation.