OpenAI, developer of the popular Artificial Intelligence (AI) chatbot ChatGPT, has shut down a tool it designed to recognize content created by AI rather than by humans. The tool, called AI Classifier, was closed just six months after its launch due to its “lack of accuracy,” OpenAI said.
As ChatGPT and competing services have grown in popularity, there has been a backlash from various stakeholders over the effects of unchecked AI use. For one thing, teachers have struggled with students’ ability to use ChatGPT to write essays and assignments, then submit them as their own work.
OpenAI’s AI Classifier was an attempt to alleviate these fears. The idea was to determine whether a passage was written by a human or by an AI chatbot, giving teachers a tool to evaluate students fairly and combat plagiarism.
From the beginning, however, OpenAI did not seem to have much confidence in its tool. In the blog post announcing the tool, OpenAI admitted that the classifier was “not fully reliable,” noting that it correctly identified AI-written text in “challenging” cases only 26% of the time.
The decision to abandon the tool was not given much fanfare, and OpenAI did not publish a dedicated post on its website. Instead, the company updated the post in which it had announced the AI Classifier, stating that “the AI classifier is no longer available due to its low rate of accuracy.”
The update continued: “We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”
AI Classifier is not the only tool developed to identify AI-generated text. Competitors such as GPTZero exist and will continue to operate, despite OpenAI’s decision.
Previous attempts at AI text detection have backfired spectacularly. For example, in May 2023, a professor wrongly failed his entire class after using ChatGPT to check his students’ papers for plagiarism. Needless to say, ChatGPT made a big mistake, and so did the professor.
It is a cause for concern that even OpenAI admits it may not be able to reliably detect text created by its own chatbot. This comes at a time of heightened anxiety over the disruptive potential of AI chatbots, with some calling for a temporary pause on development in the field. If AI is as powerful as some people predict, the world will need tools far more capable than OpenAI’s failed AI Classifier.