Several Instagram users have reported a sudden increase in disturbing content on the platform, in particular Reels with violent scenes such as fights, accidents and animal abuse, all shown without censorship even to users who had chosen to limit content falling into the NSFW (Not Safe for Work) category. In recent hours Meta, the company that develops the social network, confirmed this and apologized for what happened. Users particularly reported a sudden spike in violent and disturbing content in the Reels feed (the short videos that appear as you swipe through the screen), even when the highest level of protection had been set in the app's “Sensitive content control”. Meta said it was a technical error that was promptly corrected.
This episode, however, comes at a particular moment: the company has recently changed the way it moderates content, giving greater visibility to posts previously downgraded by automated systems. This change followed a significant “change of course” wanted by Meta's CEO, Mark Zuckerberg, which also includes a revision of policies on political content and the removal of some controls on disinformation. The timing of these changes, which coincides with Donald Trump's return to the US political scene, raises questions about a possible relationship between the “relaxation” of moderation rules and the wider spread of controversial content.
Why has the Instagram algorithm suggested disturbing Reels? What is going on
The visibility of violent content would appear to be due to an error in the social network's algorithm. The Instagram algorithm is based on an artificial intelligence system that selects and promotes content according to users' interests and interactions. This system relies on a mix of technologies, including machine learning, to identify the most engaging content and show it in each user's feed. Even with such an efficient algorithm, the management of sensitive content remains complex: while guidelines exist that prohibit explicitly violent images, Meta allows some exceptions, for example for posts that denounce human-rights violations or events of public relevance.
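To make the mechanism described above concrete, here is a minimal sketch of how an engagement-ranked feed combined with a user-chosen sensitivity limit could work. All names, labels and scores are hypothetical illustrations, not Meta's actual system or categories:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float  # e.g. predicted watch time from an ML model (hypothetical)
    sensitivity: str         # "none", "borderline", "graphic" (illustrative labels)

# Hypothetical severity ordering for the "Sensitive content control" setting.
SENSITIVITY_RANK = {"none": 0, "borderline": 1, "graphic": 2}

def rank_feed(posts, user_sensitivity_limit="none"):
    """Sort posts by predicted engagement, dropping anything above the
    user's chosen sensitivity limit before ranking."""
    limit = SENSITIVITY_RANK[user_sensitivity_limit]
    allowed = [p for p in posts if SENSITIVITY_RANK[p.sensitivity] <= limit]
    return sorted(allowed, key=lambda p: p.engagement_score, reverse=True)

posts = [
    Post("a", 0.9, "graphic"),
    Post("b", 0.7, "none"),
    Post("c", 0.8, "borderline"),
]
print([p.post_id for p in rank_feed(posts, "none")])        # only "b"
print([p.post_id for p in rank_feed(posts, "borderline")])  # "c", "b"
```

The reported bug would be analogous to the sensitivity filter failing, so that the engagement ranking alone decided what users saw, regardless of their chosen limit.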
The recent case of violent Reels suggested to users may therefore indeed have been caused by an error in the moderation filters, as stated by Meta, which through a spokesperson issued the following statement to CNBC:
We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake.
The role of Meta in the new American political scenario
Some, however, doubt the real reason why Meta's algorithm proposed the content in question. According to these hypotheses, the problem could run deeper and be linked to a structural change in the way the platform manages the visibility of content.
In January, Meta announced a revision of its policies, moving from rigid, automated control over all potential violations to a more selective approach, focusing only on «illegal and high-severity violations, such as terrorism, child sexual exploitation, drugs, fraud and scams». For violations considered “less serious”, the platform instead entrusts users with the responsibility of reporting them. In plain terms, this means that some content that in the past would have been automatically downgraded or hidden may now remain visible until it is manually reported by a user.
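The policy shift described above can be sketched as a simple decision rule: automated systems act only on high-severity categories, while everything else stays visible until users report it. The category names and thresholds below are illustrative assumptions, not Meta's actual code or labels:

```python
# Hypothetical high-severity categories handled proactively by automation,
# loosely based on the examples cited in Meta's announcement.
HIGH_SEVERITY = {"terrorism", "child_exploitation", "drugs", "fraud"}

def moderate(post_category: str, user_reports: int, report_threshold: int = 1) -> str:
    """Return the action taken on a post (illustrative logic only)."""
    if post_category in HIGH_SEVERITY:
        return "removed_automatically"      # proactive, automated enforcement
    if user_reports >= report_threshold:
        return "queued_for_review"          # reactive: acted on only after reports
    return "left_visible"                   # previously might have been downgraded

print(moderate("terrorism", 0))         # removed_automatically
print(moderate("graphic_violence", 0))  # left_visible
print(moderate("graphic_violence", 2))  # queued_for_review
```

The key consequence, visible in the third case, is the window between publication and the first user report, during which borderline content circulates unfiltered.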
The timing of these changes is not accidental. The change in moderation rules came shortly after Meta's decision to review its policy on political content and information. Zuckerberg said he wanted to reduce the platform's intervention in the moderation of political posts and to end the third-party fact-checking program in the United States, replacing it with a system similar to the one adopted by Elon Musk's X, called “Community Notes”. In this model, users themselves add context to content deemed questionable, instead of the platform intervening directly.
These moves have been interpreted as an attempt to rebuild relations with Donald Trump and his supporters, who in the past criticized Meta for limiting the spread of conservative content. Zuckerberg also met with the White House to discuss Meta's role in promoting American technological leadership, fueling speculation about the possible political pressures on “Mr. Zuck”.
The uncertain future of Meta's moderation
In summary, then, on the one hand Meta states that the suggestion of disturbing Reels was the result of a technical error; on the other, the new approach to moderation may have contributed (intentionally or not) to making the algorithm more permissive, reducing the systems' ability to filter controversial content before it reaches users. It should also be noted that the cut of over 21,000 employees between 2022 and 2023, which hit in particular the teams tasked with countering dangerous content and disinformation, may have further reduced the effectiveness of moderation. The question that naturally arises at this point is: what will be the future of Meta's moderation?