The training of Meta AI, Meta's generative artificial intelligence, began on 27 May using the data publicly shared by European users on Facebook and Instagram. The last day to object was 26 May, through a dedicated form. If you did not do so, the content you have already published and that is visible to anyone can be used by Meta to train its AI. This includes posts, comments, Reels and descriptions, but excludes private messages and chats, which remain protected. Even if you did not exercise the so-called “right to object” within the allotted time, you can still limit the use of your data for the future, although your past data remain accessible to the system. A second form is also available for cases in which your information has been published by other users. In this in-depth article we explain exactly what data is involved, what happens if you did not act in time, which tools you still have available and why the topic is under observation by the European privacy authorities.
How to oppose the training of Meta AI today
Meta began to train its generative language model with public content from the most widely used social networks in Europe: Facebook and Instagram. These are texts, images, comments and videos uploaded by adult users, visible to anyone and not protected by privacy restrictions. These data allow Meta's AI to develop a more precise linguistic and cultural understanding of the European context, making its answers more relevant when it is used as a digital assistant. It is important to clarify that private conversations on WhatsApp and Messenger remain excluded from this process thanks to end-to-end encryption, a technology that also prevents Meta itself from reading their content.
Until 26 May it was possible to oppose the use of your content by filling out a specific form. That action had retroactive effect: it prevented Meta AI from including in its training process everything you had published in the past as well. This possibility has not entirely vanished, but it has lost an important part of its effectiveness. If you decide to fill in the form today, available at this link for Facebook and this other link for Instagram, you can still prevent the use of your future content, but the content already uploaded up to the moment of the request can still be used by Meta AI.
There is a second form that deserves attention: the one relating to so-called “third-party information”, available on this page. It is used to report the use, by Meta AI, of data concerning you that you did not publish yourself. For example, if another user has shared an image or text that concerns you and this content appears in Meta AI's responses, you can send a specific request to have it removed from the training process. It must be said, however, that Meta requires concrete evidence: you will be asked to describe the prompts (i.e. the text commands) used to obtain that response and to attach any screenshots. It is a more technical procedure, but essential to protect personal information shared by others without your consent.
It is impossible to deactivate Meta AI
At the moment there is no option to completely deactivate Meta AI on the platforms, but you can decide whether or not to interact with it. This is relevant because, as already mentioned, even your interactions with the virtual assistant can be used to refine the model, including the messages you send via chat on Messenger or the questions you ask through the social networks' internal search. In this case, it is not possible to prevent these data from being collected, other than by refraining from communicating directly with Meta AI.
Precisely the impossibility of deactivating Meta AI has attracted the attention of the DPC (Data Protection Commission), the Irish authority in charge of protecting personal data on behalf of the European Union. The DPC took part in the preliminary checks on the Meta AI feature and will continue to monitor its evolution. Some critical points remain open, in particular with regard to the use of users' prompts: these commands could in fact be shared with external partners, according to what is indicated in Meta's terms of use. The possibility that they are used in sensitive areas such as health or business raises significant questions about how sensitive information is handled, even when it is not formally protected by privacy rules. It will be interesting to see how the situation evolves and what measures the authorities will take to guarantee the privacy of European citizens.