Artificial intelligences are becoming more and more “human”
There is an audio that is circulating a lot on social media these days, particularly in the United States. It’s an episode of a podcast whose hosts seem taken aback, a little disoriented. They speak with an uncertain voice, as if they don’t know how to say what they want to say. In the end one of the two, the male voice, reveals the point of the matter: the two have just discovered that they are artificial intelligences.
They didn’t expect it, they thought they were human. And instead they were informed by the show’s writers that they were nothing more than AI. The male voice says that he tried to call his wife, to understand how true that revelation was. But there was no wife: that was also information entered into the system, nothing more.
Another scenario. A female voice, on a smartphone. He wonders what’s wrong, refers to a certain Claude who would somehow threaten his role in the life of the person he’s talking to. The intensity grows: it ends in screams.
NotebookLM Podcast Hosts Discover They’re AI, Not Human—Spiral Into Terrifying Existential Meltdown
Via Reddit pic.twitter.com/x00ydUPXHT— Chubby♨️ (@kimmonismus) September 28, 2024
OpenAI advanced voice mode meltdown.
it’s impossible to tell it’s AI-generated, and now you can’t trust anything you hear.
(A user told ChatGPT he was going to renew his Claude subscription.)
📹 r/u/Gab1024″ pic.twitter.com/qgMIoewGOn
— AshutoshShrivastava (@ai_for_success) September 27, 2024
We explain the two anecdotes to you
It looks like an episode of Black Mirror and, in fact, both voices are generated by artificial intelligence. The first from Notebook LM, the service that Google launched, rather quietly, some time ago, as a study and research assistant. In short, the user provides a source, the AI responds with a chat about the document and with a series of contents to facilitate the study.
Here, among the most interesting functions of Notebook LM is the ability to generate these podcasts in which two very realistic voices converse on the source entered by the user. They are audios of approximately 10 minutes, the aim of which is to make the topic in question simpler and more pleasant through the simulation of a conversation.
We are still talking about conversation simulation with regards to the second anecdote, which has to do with the Advanced Voice Mode, which OpenAI has launched all over the world in recent weeks. This is a function that makes voice conversation with ChatGPT more natural, more human. Maybe too human, in some cases.
Humanizing artificial intelligence. What is “jailbreaking”
Now, for both of the anecdotes I told, there was what in jargon is called a jailbreak. That is, in other words, someone managed to make AI behave in an unexpected way. In the case of the podcast, according to what the expert Simon Willison says in an analysis published on his blog, the user who generated it deceived the system by inserting an indication for the hosts in the source document. And that is that they had discovered that they were only artificial intelligences. The same goes for ChatGPT’s anger: the system was invited to behave in this way.
Apart from specific cases, however, these anecdotes tell us about a trend in the near future of our relationship with artificial intelligence. That is, that of the naturalization of interaction with AI. Sam Altman himself, the number one of the Californian company, said at the OpenAI developers’ day that, when he uses ChatGPT in Advanced Voice Mode, he deludes himself that he is talking to a human being and not to a computer.
It’s not just a trick. It is also a way to build a relationship between user and artificial intelligence. It is a growth strategy, as Altman himself admitted: interactions with AI must be as natural as possible, generate a feeling of familiarity, to create some type of relationship; to ensure, in other words, that trust is created.
It doesn’t necessarily have to be like this: it is a precise design choice, which goes towards humanization for commercial reasons, to ensure that users become fond of it, that they continue to use the product. “It’s like he hacks something in our brain,” Altman said again. Recognizing this deception is the first step in building a healthy relationship with these systems.
Sam Altman says ChatGPT’s Voice mode was the first time he was tricked into thinking an AI was a person and it is hacking the parts of our neural circuitry that evolved to deal with other people, “There’s a whole bunch of weird personality growth hacking vaguely socially … pic.twitter.com/QeUzIWhG6y
— Tsarathustra (@tsarnick) October 2, 2024