
AI is not neutral: it can learn our society’s biases

Artificial intelligence is an amazingly powerful and versatile tool. We use it to recognize faces, assist with medical diagnoses, and generate images, videos and text. The many applications of this technology are driving enormous progress and can improve our quality of life, but they hide a significant problem: AI can learn, amplify and re-propose the prejudices present in the data it is trained on. This “automation of bias” is a now widely recognized problem and stems from the way AI learns, extrapolating rules and patterns from the data it is given. In this article we look at a notable case of bias automation, explore what types of prejudices, or “biases”, can appear in AI models, and see in which directions research is moving to try to solve the problem.

Bias automation comes from the data

A striking case of discrimination emerged in 2017, when Joy Buolamwini, then a student at MIT, was writing her thesis on facial-recognition software. Analyzing the results from three large vendors (IBM, Microsoft, and Face++), she discovered that the software was extremely accurate at recognizing the faces of people with light skin, but far less accurate at recognizing people with darker skin tones, especially women. This was due to the dataset on which the software had been trained. That dataset (later nicknamed the “pale male dataset”) largely contained images of Caucasian men and only to a lesser extent men and women of other ethnic groups. Learning from this unbalanced data, the artificial intelligence had become extremely good at recognizing Caucasian men, but not other categories, creating problems in, for example, banking and smartphone face-recognition systems. In response, Dr. Buolamwini started the Gender Shades project to highlight problems in the datasets used to train artificial intelligence. Gender Shades has helped raise public awareness, showing how biases are not just technical errors in the data but reflect deep-rooted inequalities and prejudices, which risk being amplified when re-proposed by technology.
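To make the mechanism concrete, here is a minimal Python sketch of the kind of disaggregated evaluation that audits like Gender Shades perform: accuracy is broken down by demographic group, so a gap hidden in the aggregate number becomes visible. The predictions, labels and group names below are hypothetical placeholders, not Buolamwini’s actual data.

```python
# Minimal sketch: measuring accuracy disaggregated by demographic group.
# All data below is a toy illustration, not the real Gender Shades audit.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return classification accuracy broken down by demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data: the model is perfect on the over-represented group, poor on the other.
preds  = ["m", "m", "m", "f", "m", "m", "f", "f"]
truth  = ["m", "m", "m", "f", "f", "f", "f", "f"]
groups = ["lighter", "lighter", "lighter", "lighter",
          "darker", "darker", "darker", "darker"]

print(accuracy_by_group(preds, truth, groups))
# {'lighter': 1.0, 'darker': 0.5} -- the aggregate accuracy (0.75) hides the gap.
```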


The different types of biases that make AI less than neutral

Biases in AI models can profoundly affect the reliability and impartiality of results, and we can divide them into three broad categories.

Interaction bias

Some models use so-called “reinforcement” learning: they are designed to keep learning and updating based on their interaction with the outside world. As is easy to imagine, if the people who interact with such a model hold a prejudice, the model will quickly learn and reproduce it. For example, in 2016 a Microsoft chatbot designed to learn to speak by interacting with Twitter users became racist and conspiratorial in less than 24 hours, even going so far as to deny the Holocaust.
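As a rough illustration of how interaction can steer a model, here is a minimal sketch of an online learner that simply reinforces whatever users reward. The scenario, class and numbers are invented for illustration and are not how Microsoft’s chatbot actually worked.

```python
# Minimal sketch of interaction bias: a learner that updates from user feedback
# has no notion of "wrong" -- it only optimizes for whatever gets rewarded.
from collections import Counter

class FeedbackLearner:
    """Ranks candidate replies by how often users have rewarded them."""
    def __init__(self, candidates):
        self.scores = Counter({c: 0 for c in candidates})

    def reply(self):
        # Always pick the reply users have rewarded most so far.
        return self.scores.most_common(1)[0][0]

    def update(self, reply, liked):
        self.scores[reply] += 1 if liked else -1

bot = FeedbackLearner(["neutral reply", "offensive reply"])
# If a group of users systematically rewards the offensive reply,
# the model dutifully learns that preference.
for _ in range(10):
    bot.update("offensive reply", liked=True)
    bot.update("neutral reply", liked=False)

print(bot.reply())  # -> "offensive reply"
```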

Latent biases

This type of bias is linked to the data used for training. Data mirrors the society it is drawn from: if there is a latent inequality in society, it will be reflected in the data and, consequently, in the models trained on that data. For example, as Wired highlighted in 2023, Midjourney, an image-generation tool, tended to depict white men in roles such as “leader,” “manager,” or “scientist.” Because these positions have historically been held by men, the datasets from which the AI learned to generate images contained an overabundance of men, leading the AI to associate these words primarily with male images.
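One way to picture a latent-bias audit is to count how often each role co-occurs with a perceived gender in a labelled dataset: a model trained on skewed counts will tend to reproduce that skew in its outputs. The field names and records below are assumptions made purely for illustration.

```python
# Minimal sketch of a latent-bias audit on a (hypothetical) labelled dataset.
from collections import Counter

dataset = [
    {"role": "scientist", "gender": "man"},
    {"role": "scientist", "gender": "man"},
    {"role": "scientist", "gender": "woman"},
    {"role": "nurse", "gender": "woman"},
    {"role": "nurse", "gender": "woman"},
]

def role_gender_skew(records):
    """Share of each perceived gender within every role in the dataset."""
    counts = Counter((r["role"], r["gender"]) for r in records)
    totals = Counter(r["role"] for r in records)
    return {pair: n / totals[pair[0]] for pair, n in counts.items()}

print(role_gender_skew(dataset))
# ('scientist', 'man') ~0.67, ('scientist', 'woman') ~0.33, ('nurse', 'woman') 1.0
# A model trained on this data will tend to reproduce the same associations.
```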

Selection bias

Again, the bias comes from the data on which the AI is trained. Here, however, the problem does not so much reflect the prejudices present in society in general as the choices of those who build the dataset. If, for example, a European person created a model to recognize different types of dresses, they would probably select long, white dresses, typical of the European tradition, as “wedding dresses”, and not the extremely colorful dresses typical of other cultures. In this rather harmless example, the model would not recognize an Indian bride’s dress as a wedding dress, because it would have associated that type of dress solely with the color white.

Even with the best intentions, it is virtually impossible to separate our human biases from the technology we create.


How can we remove AI bias?

Bias in AI is a complex problem with no single solution yet, but it is being addressed on two main fronts: building more balanced datasets and making models more explainable. On the one hand, it is important to diversify datasets so that they are representative of different social and demographic groups, and to make the entire process of collecting and selecting data transparent. On the other, being able to explain unambiguously why a model makes a given decision is critical to identifying potential biases in the model itself. By ensuring this transparency, explainability allows developers to find and correct bias issues, making the model more ethical and more interpretable for end users. By integrating these practices, we can help develop fairer and more reliable AI systems.
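As a closing sketch of the “more balanced datasets” front, one simple and common technique is to reweight training examples inversely to the frequency of their group, so under-represented groups contribute as much to training as over-represented ones. The group labels and numbers below are illustrative assumptions, not a prescription.

```python
# Minimal sketch: per-sample weights inversely proportional to group frequency,
# so every group carries the same total weight during training.
from collections import Counter

def inverse_frequency_weights(groups):
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Each group ends up with the same total weight: n / k.
    return [n / (k * counts[g]) for g in groups]

groups = ["A", "A", "A", "A", "A", "A", "B", "B"]
print(inverse_frequency_weights(groups))
# Minority group "B" examples get weight 2.0, majority "A" get ~0.67, so both
# groups contribute equally overall. Many ML libraries accept such per-sample
# weights (e.g. a sample_weight argument in scikit-learn estimators).
```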