He kills himself at 14 because he is in love with an AI: but is that really what happened?
In Florida, a 14-year-old boy allegedly took his own life because of the relationship he had established with an Artificial Intelligence ChatBot. The news has spread around the world in the last few hours and is fueling many people's fear and hostility towards this (yet another) revolutionary digital technology. But did it really happen that way?
The fictitious conversations
Let’s try to understand. According to what was reported by the New York Times, the boy already had relational difficulties, and his relationship with the AI seems to have intensified out of this condition of loneliness. As often happens, then, the technology does not directly cause withdrawal, but it can encourage it when used as a shortcut to substitute for an unmet need. In this case, the AI ChatBot came to play the role of friend, but also of potential partner, for the 14-year-old: two distinct needs that, apparently, the boy had not been able to fulfil in any other way. Did the ChatBot drive him to suicide? From what has emerged, the AI repeatedly reminded him that all their conversations were fictitious, yet this does not seem to have discouraged him in the slightest from investing emotionally and sentimentally in the digital tool. So much so that the boy reportedly confessed his suicidal intentions only to the ChatBot, never speaking about them with his parents or even with his psychologist.
It’s not (just) the fault of technology
It also seems that the AI tried in every way to dissuade him from his self-destructive intentions, but evidently it failed. The ChatBot, then, would not have directly instigated the suicide; according to the boy's mother, however, who has sued the company that makes it, the AI played a key role in the tragedy by fostering her son's social isolation. All of this remains to be proven, and for the moment we do not have enough evidence to reach a firm conclusion. We can, however, offer several reflections.
Restricting AI use for under-14s
First of all, as with pornography and social media, the use of AI ChatBots should be strictly limited for under-14s. It could in fact interfere with the normal processes of psychosocial development, above all with how emotions are managed and shared, since a ChatBot is still far from being able to faithfully replicate the intensity, complexity and variety of human emotions. However, placing all the responsibility on technology is, once again, a trivialization of a far deeper issue. This young man was already being followed by a professional, so he had probably already shown some form of psychological distress on several occasions, distress that apparently was not responding well to therapy. We do not know how deep this distress was, nor how much the environment in which he lived (family and school) may have contributed to it. In short, given this near-total absence of information, hastily concluding that it was the relationship with the ChatBot that killed him is pure speculation, and it expresses a widespread prejudice against new technologies. For all we know, the AI may even have helped him, as far as it could.
People are not replaceable
No AI will ever be able to replace a psychologist, a parent, a friend or a partner, but over time Artificial Intelligence could also prove to be a useful tool for addressing certain psychosocial problems. So before completely demonizing a new technology, which, precisely because it is new, is also unknown and therefore instinctively frightening, we should reflect on the macro- and micro-social dynamics that could lead a young person, or any person, to misuse it. Otherwise we will merely be looking for a scapegoat, to relieve ourselves of responsibility or at least to simplify reality.