Google and Meta Block Political Ads to Curb Misinformation, but Experts Warn It May Be Too Late

November 11, 2024 at 3:00 PM

5 minute read

Several major online platforms are removing the ability to run new political ads during election week in an effort to prevent the spread of misinformation. Jonathan Raa/NurPhoto/Shutterstock

In a bid to counter election misinformation, Google and Meta have implemented temporary bans on political ads across Facebook, Instagram, YouTube, and Google search. Meta, which owns Facebook and Instagram, recently began blocking new ads related to U.S. social issues, elections, or politics, a policy it will extend through the week. Google announced a similar pause on election-related ads for YouTube and its search platform, set to take effect after polls close on Election Day.

These moves are designed to prevent candidates and their supporters from prematurely claiming victory or shaping public opinion during the critical vote-counting period. However, some experts argue that such ad bans are too little, too late, given that platforms have already rolled back earlier trust and safety measures.

A Shift in Social Media Policies

In stark contrast to Google and Meta’s approach, X (formerly Twitter) has not enacted a similar ad pause. X lifted its political ad ban after Elon Musk acquired the platform and has taken a more lenient stance on election-related content. Experts say this leniency, along with cuts to the platform’s trust and safety teams, has opened the door to a resurgence of misinformation.

“Platforms like X have become hotbeds for false narratives, with little effort to rein in misinformation,” said Sacha Haworth, executive director of the watchdog group Tech Oversight Project. “The gap between platforms’ ad policies and their enforcement on organic content has weakened the integrity of the information ecosystem.”

Fighting an Uphill Battle Against Misinformation

In the weeks leading up to the election, election officials and watchdogs have been actively countering false information. Viral claims of voter fraud and unverified allegations about mail ballots and voting machines have already cast doubt on the electoral process. Federal law enforcement has warned that election-related grievances could even incite violence.

Imran Ahmed, CEO of the Center for Countering Digital Hate, noted that election misinformation has been festering for years. “Over the last four years, lies about our electoral process have become a constant drip, undermining trust in democracy,” Ahmed said. “It’s simply too late for temporary ad pauses to reverse the damage.”

Platforms like X, under Musk’s ownership, have taken a more hands-off approach. Musk himself has shared polarizing statements, including a controversial post where he appeared to question why “no one is even trying to assassinate Biden/Kamala,” which he later deleted, calling it a joke.

Social Media Platforms Face New Challenges

The rise of AI-generated content has added another layer of complexity. Deepfake videos, audio, and images, which can make misleading information appear authentic, pose a distinct threat in the misinformation landscape. Concerns about such content have pushed tech platforms to bolster their policies and emphasize reliable information sources.

Google-owned YouTube, for example, has pledged to support election integrity by removing content that could mislead voters or promote conspiracy theories. “Responsibility remains our number one priority,” a YouTube spokesperson stated. TikTok, which has banned political ads since 2019, pointed to its U.S. Elections Integrity Hub, a resource aimed at directing users to reliable voting information.

Policy Limitations and Enforcement Gaps

Despite their stated commitments, major platforms struggle to enforce their policies consistently. Meta, for instance, has said it will reduce the reach of misleading posts in the News Feed but stops short of removing them outright. TikTok, Google, and YouTube have taken similar approaches, relying heavily on content labels and informational panels to combat misinformation.

X, meanwhile, has kept its Civic Integrity Policy in place, which discourages posts that mislead voters about how to vote or that could incite violence. However, the policy still permits controversial and polarizing content, highlighting the limits of these approaches in a landscape where misinformation easily gains traction.

Ahmed summed up the situation: “Pausing political ads is a small step, but it won’t stop misinformation from reaching millions on platforms designed to prioritize high-engagement content, even when that content is false or harmful.”

While social media giants have made incremental changes, experts continue to stress the need for more robust, consistent enforcement. Without meaningful action, platforms could remain fertile ground for misinformation — potentially undermining public confidence in the election results.
