Meta fact checking

Zuckerberg says goodbye to third-party fact-checking on Meta: what it means and how Facebook and Instagram are changing

Important news from Meta: yesterday Mark Zuckerberg, CEO of the tech giant that owns Facebook, Instagram, Threads and WhatsApp, announced a major shift in its content moderation policies. After eight years, the third-party fact-checking program, introduced in 2016 to combat misinformation, will be replaced by a new system based on “Community Notes”. This model, similar to the one already used by Elon Musk on X (formerly Twitter), makes users responsible for adding context and correcting errors in the information shared. It is therefore not surprising that Musk commented on the news on his platform with «This is awesome».

The decision, which will initially take effect in the United States (there are no immediate plans for the European Union and the United Kingdom), has sparked heated debate. On the one hand, Meta argues that the change promotes freedom of expression and reduces political bias; on the other hand, experts fear it could increase the spread of false content and reduce effectiveness in the fight against disinformation and fake news. All this happens in a context of rapprochement between Mark Zuckerberg and Donald Trump, with important implications not only for American domestic politics but also for the role of social networks as a global information tool.

Meta’s third-party fact-checking program: what it was and how it worked

Meta’s fact-checking system was a program based on collaboration with independent fact-checkers certified by the IFCN (International Fact-Checking Network), with the declared purpose of fighting misinformation and offering reliable information. Since 2016, the program has involved over 90 organizations focused on combating viral hoaxes. The fact-checking process involved three phases (a rough sketch of the workflow follows the list):

  1. Identification: fact-checkers identified misinformation on their own or through signals provided by Meta, such as user feedback, similarity detection, and monitoring the rapid spread of content.
  2. Review: fact-checkers verified the accuracy of the information through original investigations, primary sources and media analysis.
  3. Action: if a piece of content was rated false, Meta reduced its visibility, warned users who had shared it or were about to, and applied a label with the fact-checker’s report. AI systems extended these actions to duplicate content as well.
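To make the workflow concrete, here is a minimal illustrative sketch of how such a three-phase pipeline could be wired together, with entirely hypothetical names, thresholds and data structures (this is not Meta’s actual code): flagged posts are queued for review, an external fact-checker records a verdict and a note, and content rated false is demoted and labelled.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    UNREVIEWED = "unreviewed"
    TRUE = "true"
    FALSE = "false"


@dataclass
class Post:
    post_id: str
    text: str
    flags: int = 0                  # user reports, a stand-in for Meta's richer signals
    verdict: Verdict = Verdict.UNREVIEWED
    visibility: float = 1.0         # 1.0 = normal reach; lower values mean demoted
    label: str | None = None        # note attached to the post after review


def identify(posts: list[Post], flag_threshold: int = 10) -> list[Post]:
    """Phase 1: queue posts whose signals (here, just report counts) look suspicious."""
    return [p for p in posts
            if p.flags >= flag_threshold and p.verdict is Verdict.UNREVIEWED]


def review(post: Post, verdict: Verdict, note: str) -> None:
    """Phase 2: an independent fact-checker records a verdict and an explanatory note."""
    post.verdict = verdict
    post.label = note


def act(post: Post, demotion: float = 0.2) -> None:
    """Phase 3: demote content rated false; the warning label stays attached."""
    if post.verdict is Verdict.FALSE:
        post.visibility = demotion


# Tiny end-to-end run of the three phases on a toy feed.
feed = [Post("p1", "Miracle cure found!", flags=42),
        Post("p2", "Local bake sale on Sunday")]
for suspicious in identify(feed):
    review(suspicious, Verdict.FALSE, "Rated false by an independent fact-checker.")
    act(suspicious)
print([(p.post_id, p.verdict.value, p.visibility, p.label) for p in feed])
```

In the real program, of course, the identification signals and the duplicate matching were far richer than a simple report counter; the sketch only shows how the three phases fit together.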

Meta fact-checking: why it was introduced and why it is being retired

The fact-checking program was developed by Meta in the midst of the controversies that followed the 2016 US presidential election, won by Donald Trump. The company had been accused of encouraging the spread of fake news, i.e. false stories designed to influence public opinion. The system, based on collaborations with independent journalistic organizations (which verified the veracity of content, classified it and, in the most serious cases, limited its visibility), was meant to be a solution to this problem. Over the years, however, this strategy has met with criticism for alleged political bias and a perception of excessive censorship among the community.

Mark Zuckerberg himself, in announcing the abandonment of this approach, stressed that the current system had reached “a critical point”, marked by too many errors and a level of moderation perceived as excessive. His vision is a return to the origins of social networks, conceived as open platforms where freedom of expression is a priority. This change, according to Meta, aims to reduce the censorship of legitimate content and simplify moderation policies, entrusting users with a central role in regulating public debate.

The new system, which Zuckerberg calls “Community Notes”, works collaboratively: users can report misleading content, propose corrections or add context. However, this model presents some critical issues. On X, where it is already in use, Community Notes has shown clear limitations in curbing false information, especially where verification requires specialized skills or in-depth knowledge of the facts. Entrusting moderation to the community therefore risks amplifying group biases, with the potential to turn the platform into an arena of conflict rather than a space for constructive discussion.

If you are wondering why Meta has decided to proceed now, consider that the announcement comes at a strategic moment, marked by Zuckerberg’s rapprochement with Trump and the Republican Party. After years of tension culminating in the suspension of Trump’s Facebook and Instagram accounts following the assault on the Capitol on January 6, 2021, Meta readmitted Trump to Facebook and Instagram in 2023 while maintaining some restrictions, which were removed completely in July 2024. In recent months, Zuckerberg has undertaken a series of initiatives that signal his political repositioning, including a dinner at Mar-a-Lago with Trump and a donation of one million dollars for his White House inauguration. Furthermore, Joel Kaplan, a figure close to the Republicans, was appointed head of the group’s global policy.

This turning point, described by The New York Times as «a clear sign of how the company is repositioning itself for the Trump era», reflects a rethinking of Meta’s strategic priorities.

What changes for users: pros and cons of stopping fact-checking

For users, these changes could translate into greater leeway in publishing content and less censorship (two clear pros), but also into greater responsibility in evaluating what we read and share: as the saying goes, «with great power comes great responsibility». Abandoning the third-party fact-checking program could, in fact, increase misinformation, with the proliferation of false or at least misleading content, given the potential ineffectiveness of a new system that relies on the community’s common sense to verify facts. The era of professional fact-checkers on Facebook, Instagram and Threads therefore seems destined to end (Meta will roll out the change in the United States within the next two months).

The question now is whether this evolution will be able to preserve the balance between freedom of expression and truthful, quality information in an increasingly complex digital world. In this regard, the reflection of Federal Trade Commission chair Lina Khan, who commented on Meta’s announcement, is worth noting:

We should have an economy in which the decisions of a single company or a single executive do not have an extraordinary impact on free speech online.