Meta to End Fact-Checking Program, Acknowledges More Harmful Content May Appear
Meta, the parent company of Facebook and Instagram, is making sweeping changes to its content moderation policies, including scrapping its third-party fact-checking program. In its place, the company will rely on "community notes," a crowdsourced system in which users add context and corrections to potentially misleading posts. The shift brings Meta more in line with X (formerly Twitter), which under Elon Musk likewise eliminated professional fact-checkers in favor of user-driven moderation.
Zuckerberg’s Bold Move Amid Changing Politics
CEO Mark Zuckerberg made the announcement on Tuesday, shortly before President-elect Donald Trump takes office. The timing has drawn political commentary, as Trump and his supporters have frequently accused Meta of censoring conservative viewpoints. Speaking in a video, Zuckerberg said the company's previous fact-checking system had become politically biased and was eroding trust rather than building it: "What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas."
However, Zuckerberg did acknowledge that this change comes with trade-offs. He warned that harmful content might become more visible as Meta reduces its content moderation efforts.
Meta's Shift: Moving Away from Third-Party Fact-Checkers
Joel Kaplan, Meta's newly appointed chief global affairs officer, voiced similar concerns about the political bias of third-party fact-checkers, saying the partnerships were well intentioned but became too politically charged in practice. His remarks align with a broader shift in Meta's leadership as the company adjusts its policies ahead of Trump's inauguration, including the appointment of figures with strong ties to Trump, such as UFC CEO Dana White, who recently joined Meta's board.
Kaplan also framed the move as a response to the political climate and a defense of free expression, describing the incoming administration as an opportunity to recommit to those values. Meta reportedly briefed Trump's team on the policy shift in advance, underscoring its political undertones.
Backlash: Critics See Political Pandering
While Zuckerberg and Kaplan argue that the shift promotes free expression, critics see it as political pandering. The Real Facebook Oversight Board, an independent watchdog group of academics and experts unaffiliated with Meta's official Oversight Board, condemned the changes as a retreat from responsible content moderation and accused the company of aligning with right-wing interests.
A Reversal of Content Moderation Policies
The move marks a dramatic reversal of Meta's stance on misinformation. The company, then known simply as Facebook, launched its fact-checking program in 2016 in response to concerns about disinformation during that year's U.S. presidential election. Over the years, Meta added safety teams, automated systems to flag false claims, and an Oversight Board to handle complex moderation decisions. Yet critics, especially in conservative circles, have long complained that fact-checking suppressed their voices.
Now, Zuckerberg is following Musk’s example, dismantling fact-checking teams in favor of a community-driven system. Starting in the U.S., Meta will introduce user-generated “community notes” to help flag false content across Facebook, Instagram, and Threads. Kaplan praised Musk’s influence on reshaping the conversation around free expression and said that Meta’s new approach reflects that shift.
Adjusting Automated Systems and Content Restrictions
In addition to ending fact-checking, Meta will scale back its automated content moderation systems. These systems, which the company says had been filtering out too much non-violating content, will now focus only on illegal and high-severity violations such as terrorism, child sexual exploitation, drugs, fraud, and scams. Lower-severity violations will be reviewed only after users report them.
Zuckerberg explained that the previous system made too many mistakes, removing content that didn’t violate any policies. While the new system may reduce censorship errors, it could also mean more harmful content slips through the cracks.
Meta will also relax restrictions on sensitive topics, such as immigration and gender identity, and allow more political content to appear in users’ feeds.
Moving Trust and Safety Teams to Texas
To help rebuild trust in its content policies, Meta plans to relocate its trust and safety teams from California to Texas and other U.S. locations. Zuckerberg believes this move will help ensure the team operates in environments where concerns about political bias are less pronounced.
Conclusion: A Polarizing Change in Policy
Meta's shift in content moderation policy marks a significant and controversial change in how the company addresses misinformation and harmful content. While Meta positions itself as a defender of free expression, critics warn the changes could allow more divisive and harmful content to thrive on its platforms. The decision, with its clear political undertones, will likely continue to stir debate as the company navigates its relationships with users, regulators, and the broader public.