All AI programs that generate erotic deepfakes are banned

In Italy, the debate on the dangers of deepfakes has reignited after the Prime Minister, Giorgia Meloni, reported that fake photos were circulating online in which her face had been superimposed on the body of a woman in underwear. Programs that generate photos like these will soon be banned across the EU.

The EU Council and the European Parliament have reached an agreement on a series of amendments to the so-called AI Act, the European regulation on artificial intelligence that came into force in August 2024. The agreement, which must now be formally adopted by both institutions and is part of the seventh EU regulatory simplification package (the so-called “Omnibus VII”), introduces two important innovations: an explicit ban on artificial intelligence systems used to create non-consensual sexual images (so-called nudifiers) or child sexual abuse material, and a postponement of the deadlines for businesses to comply with the rules on high-risk systems.

Changing the law

The AI Act was adopted in May 2024 after years of negotiations and entered into force in August of the same year. It is the world’s first comprehensive regulatory framework for artificial intelligence.

The regulation already contained absolute prohibitions, for example on real-time mass biometric identification in public spaces and on social scoring systems, but it did not yet address sexual deepfakes, a phenomenon that was not yet widespread or well known when the text was drafted. Legislators therefore took advantage of the Commission’s request to simplify the rules of the text to fill this gap.

“In addition to simplification measures, we are banning nudity apps and, of course, the production of child pornography using artificial intelligence systems. This way, we have the tools necessary to intervene if providers fail to act on artificial intelligence systems that compromise fundamental rights or human dignity,” said the Irish liberal Michael McNamara, one of the rapporteurs of the text.

The ban on sexual deepfakes

So-called nudifiers are applications (often freely available online) that use artificial intelligence to generate images or videos portraying a person naked or in sexually explicit poses without their consent. In Italy, a law that came into force last October prohibited the production and dissemination of these images. The EU now goes a step further, banning outright any program capable of producing them.

The agreement introduces a new article into the regulation that prohibits three distinct behaviors: placing on the European market artificial intelligence systems specifically designed to create such content; placing on the market systems without “reasonable security measures” to prevent this type of use; and, for those who use these systems (the so-called deployers), deliberately using them for this purpose. The ban covers images, videos and audio content.

The same logic applies to artificially generated child sexual abuse material: the agreement explicitly prohibits any system designed to create it or that does not have adequate measures to prevent its generation.

“It is absurd that today just three clicks are enough to manipulate images of women and children and create nude photos or pornographic material. Today we are putting a definitive end to this type of violence against adults and children,” declared the Dutch Green Kim van Sparrentak. The MEP assured that “the exemptions granted to the industry will not trump our security”, because “artificial intelligence must always be safe and must not discriminate or constitute a danger to fundamental rights”.

Businesses will have until December 2, 2026 to adapt their systems to these requirements. Those who do not comply risk fines of up to 35 million euros or 7 percent of the global annual turnover, whichever is greater.

An extra year for high-risk systems

The second major chapter of the agreement concerns the roadmap for systems classified as “high risk”, giving companies more time to adapt to a whole series of new obligations: registering the system in a European database, documenting how it works and what data it has been trained on, demonstrating that it has been tested and that the risks are under control, ensuring human supervision in its use, and undergoing verification by independent bodies before putting it on the market.

The AI Act distinguishes between two types of high-risk systems. The first type is “pure” software: for example, an algorithm that analyzes resumes for hiring, a facial recognition system, or a program used by law enforcement. The second type is artificial intelligence embedded in a physical object: the software of a pacemaker, the automatic braking system of a car, the intelligent sensors of industrial machinery. For both types the deadlines are postponed: the first group will have to be in compliance from December 2027, the second from August 2028. The reason for the postponement is practical: complying with the law requires precise technical standards (how do you certify that an algorithm is “sufficiently secure”?) and designated national authorities to which companies can turn for checks and controls. Neither is ready yet.

The watermark on generative AI

Another change moves in the opposite direction: the obligation to mark content generated by artificial intelligence (so-called watermarking) is being brought forward, not postponed. Under the agreement, generative AI systems will have to apply marking techniques to their output (photos, videos and text) by 2 December 2026, rather than by 2 February 2027 as originally foreseen in the Commission proposal. Watermarking makes it possible to detect and track content of artificial origin, reducing the risk of misinformation and manipulation.

Less bureaucracy for SMEs

The agreement extends some exemptions already provided for small and medium-sized enterprises (SMEs) to the so-called small mid-caps (SMCs), companies that have exceeded the SME size thresholds but remain relatively small. These entities, exempted from certain procedural obligations under the AI Act, will be able to develop and distribute artificial intelligence systems with a reduced bureaucratic burden. The stated objective is to prevent the legislation from disproportionately penalizing companies that lack the resources of a large industrial or technological operator.