OpenAI has developed a tool that can recognize text created with ChatGPT with 99.9% effectiveness, thanks to a digital watermark applied to the content generated by the company’s artificial intelligence. The tool has been around for about a year, but according to an article in The Wall Street Journal, the company led by Sam Altman is undecided about whether to make it available to everyone, as doing so could have significant repercussions, including the abandonment of ChatGPT by a sizable portion of its users.
How OpenAI’s Tool for Detecting AI-Generated Text Works
This is not the first time that OpenAI has developed a system to recognize content generated by artificial intelligence. However, it is the first time the company has built one with a 99.9% effectiveness rate on raw, unprocessed text. The tool for detecting text generated with ChatGPT works so well for a very simple reason: the text carries a digital watermark that is invisible to human readers but perfectly visible to the detection tool.
However, it is important to point out that the system has some limitations. Regarding the tool, OpenAI itself has stated:
While it has been shown to be highly accurate and even effective against localized tampering, such as paraphrasing, it is less robust against globalized tampering, such as using translation systems, rephrasing with another generative model, or asking the model to insert a special character between each word and then deleting that character, making it easily bypassable by attackers.
According to TechCrunch, “with text watermarking, OpenAI would focus exclusively on detecting writing from ChatGPT, not from other companies’ models.” The tool would do this by making small changes to the way ChatGPT selects words, which would create the invisible watermark mentioned earlier, one that OpenAI’s AI-detector tool could then easily pick up.
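OpenAI has not published the details of its scheme, but the idea of watermarking by biasing word selection can be illustrated with the “green list” approach from public research on statistical watermarking: a pseudo-random subset of the vocabulary (seeded by the previous word) is favored during generation, and a detector later checks whether an improbably high share of words fall in those subsets. The sketch below is a toy illustration of that general technique, not OpenAI’s actual method; the vocabulary, functions, and 50/50 split are all illustrative assumptions.

```python
import hashlib
import math
import random

def green_list(prev_word, vocab, fraction=0.5):
    # Pseudo-randomly partition the vocabulary, seeded by the previous word,
    # so generator and detector derive the same "green" subset independently.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[:int(len(shuffled) * fraction)])

def generate_watermarked(length, vocab, start="the"):
    # Toy "model": always choose the next word from the previous word's
    # green list (a real model would merely bias probabilities toward it).
    words, prev = [], start
    rng = random.Random(0)
    for _ in range(length):
        word = rng.choice(sorted(green_list(prev, vocab)))
        words.append(word)
        prev = word
    return words

def detect(words, vocab, start="the"):
    # Count green-list hits and return a z-score against the 50% a human
    # (or unwatermarked) text would hit by chance.
    hits, prev = 0, start
    for word in words:
        if word in green_list(prev, vocab):
            hits += 1
        prev = word
    n = len(words)
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

vocab = [f"w{i}" for i in range(200)]
marked = generate_watermarked(100, vocab)
unmarked = random.Random(1).choices(vocab, k=100)
print(detect(marked, vocab))    # → 10.0 (every word is a green-list hit)
print(detect(unmarked, vocab))  # near zero: no statistical signal
```

This also makes the fragility described above concrete: because each word’s green list depends on its neighbor, edits that reshuffle or re-tokenize the text (translation, paraphrasing with another model, the insert-then-delete character trick) break the word-to-word chain and wash out the statistical signal.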
What Might Happen if OpenAI’s Tool Were Released
If OpenAI’s tool were actually released, what would the implications be for users and, more broadly, for the AI industry? According to OpenAI itself, releasing the tool could “stigmatize the use of AI as a useful writing tool for non-English speakers.”
In addition, the company said that potential consequences of the tool’s release could include impacts on ecosystems beyond ChatGPT, and that ChatGPT users would most likely abandon the service to avoid having their AI-generated texts identified by teachers, employers, colleagues, and so on, perhaps turning to competing products such as Google’s Gemini or Microsoft’s Copilot. This fear is not unfounded: surveys suggest that about a third of ChatGPT users could stop using the tool if OpenAI released its detection tool.