
YouTube is using AI to verify the age of users in the US: an attack on privacy or protection of minors?

YouTube is testing an age-verification system based on artificial intelligence in the United States. Unlike traditional methods, which rely on the age declared by users at registration, this system analyzes behavior on the platform to estimate whether or not a user is of age. The stated goal is to offer a safer experience to minors by automatically activating protection measures for those under 18. Already active in Switzerland and the United Kingdom, the system could soon be extended to other countries. The novelty, however, has sparked a heated debate in the United States: is it really effective protection, or a new threat to privacy?

Let’s see how this system works and what its implications are for user privacy.

How YouTube’s automatic age-estimation system works

On August 13, 2025, YouTube began testing in the United States an artificial intelligence model that analyzes user behavior – including the searches made, the videos watched, and how long the account has been active – to estimate each user’s age. Age verification thus no longer depends on what was declared at registration: YouTube derives it directly through profiling techniques, that is, by collecting and analyzing personal data to build a “profile” describing the user’s main characteristics. These practices are already widespread in areas such as advertising and social media, and have long raised concerns about the amount of data and information collected. Although YouTube has specified that it does not collect new data but uses data already available, it has not clarified exactly which information the AI model uses or how it is processed.
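To make the idea of behavior-based age estimation concrete, here is a minimal sketch of how such a classifier might combine signals like account age and viewing habits. Everything in it – the signal names, weights, and threshold – is an illustrative assumption, not YouTube’s actual model.

```python
# Hypothetical sketch of behavior-based age estimation.
# All signal names, weights, and thresholds are illustrative
# assumptions, NOT YouTube's actual model or data.

def estimate_is_adult(signals: dict) -> bool:
    """Estimate whether an account likely belongs to an adult
    from a handful of behavioral signals."""
    score = 0.0
    # Older accounts weakly suggest an adult user (contribution capped).
    score += min(signals.get("account_age_days", 0) / 365, 5.0)
    # Share of watch time on typically adult-oriented topics (0.0-1.0).
    score += 4.0 * signals.get("adult_topic_watch_ratio", 0.0)
    # Share of searches matching typically teen-oriented queries (0.0-1.0).
    score -= 4.0 * signals.get("teen_topic_search_ratio", 0.0)
    return score >= 3.0  # illustrative decision threshold

# Example usage with made-up signal values:
adult_like = {"account_age_days": 2200,
              "adult_topic_watch_ratio": 0.4,
              "teen_topic_search_ratio": 0.05}
print(estimate_is_adult(adult_like))  # prints True
```

A real system would use a trained statistical model over far more signals, but the shape is the same: behavioral inputs go in, an age estimate comes out, with no new data collected.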

What happens if the system estimates that a user is a minor?

The declared goal of this system is to offer greater protection to minors. So when the algorithm estimates that a user is under 18, some measures are automatically activated to make their browsing safer. Among these:

  • the impossibility of viewing age-restricted content,
  • the deactivation of personalized advertising,
  • the activation of reminders to take breaks from the screen or go to sleep,
  • the removal of recommendations for videos with violent content or that promote unrealistic ideals of body image or lifestyle.

These protections already existed for users who declared themselves to be minors, but they are now applied automatically, based on detected behavior. The problem arises when the system is wrong: even the best artificial intelligence models have a margin of error, and there is always a risk that users are classified incorrectly, especially if their behavior is atypical or if they are close to 18 years old (roughly between 16 and 20).
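The automatic switch between the two experiences amounts to a threshold rule, and a threshold is exactly why estimates near the 18-year boundary are the fragile case. The sketch below uses hypothetical names for the protections listed above.

```python
# Hypothetical sketch of the under-18 threshold logic.
# Protection names are illustrative labels for the measures
# described in the article, not real API identifiers.

MINOR_PROTECTIONS = (
    "block_age_restricted_content",
    "disable_personalized_ads",
    "enable_break_and_bedtime_reminders",
    "filter_violent_and_body_image_recommendations",
)

def protections_for(estimated_age: float) -> tuple:
    """Return the protections applied for a given estimated age."""
    return MINOR_PROTECTIONS if estimated_age < 18 else ()

# A 19-year-old is correctly unrestricted only if the estimate is right;
# an estimator off by two years flips the outcome for the same user.
print(protections_for(19.0))        # correct estimate: no restrictions
print(protections_for(19.0 - 2.0))  # underestimate: all four applied
```

Because the decision is all-or-nothing at the threshold, any estimation error for users aged roughly 16 to 20 changes their entire experience, which is why the appeal path for misclassified adults matters.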

If the algorithm makes an error and wrongly classifies an adult as a teenager, the user can still return to using YouTube normally, but must prove to be an adult by uploading an identity document, a credit card, or a face photo. These measures, however, open delicate scenarios in terms of privacy.

The critical issues for privacy

The main concerns relate to data processing. YouTube has not clearly specified which information is used to estimate users’ age, nor has it provided details on how the data uploaded to confirm identity after an error is managed. It is not clear, for example, how long this data is kept, whether it is shared with third parties, or how it is eventually deleted. Some privacy experts, such as Suzanne Bernstein of the Electronic Privacy Information Center – an American research center focused on privacy protection – have underlined the company’s lack of transparency about possible AI errors and the absence of independent assessments of the new system’s effectiveness. Collecting sensitive data such as images and identity documents, in the absence of clear guarantees, can pose a particular risk for vulnerable categories such as political dissidents, victims of abuse, or users who rely on online anonymity.

The challenge today is to find a balance between two legitimate needs: on the one hand, guaranteeing a safe environment for younger users; on the other, protecting the right to privacy and anonymity, especially on a platform born as an open and accessible space. Much will depend on how YouTube manages these aspects in the long term and, if this test succeeds and the system is expanded globally, also on the role that public institutions and users themselves will want to play in defining the limits of technological intervention.