Techspace - A tool for determining whether text was produced by a human or a computer was unveiled on Tuesday by artificial intelligence research startup OpenAI.
The launch comes two months after OpenAI gained the public's interest with the introduction of ChatGPT, a chatbot that produces text that appears to have been written by a person in response to a user's prompt. Following the tidal wave of interest, Microsoft disclosed last week that it had invested several billion dollars in OpenAI and would be incorporating the startup's AI models into its consumer and commercial products.
With a new tool that can help teachers determine whether a student or an artificial intelligence completed an assignment, ChatGPT's creator is attempting to shed the chatbot's reputation as a free-roaming cheating machine.
Following weeks of debate at schools and colleges over concerns that ChatGPT's capacity to write virtually anything on demand could encourage academic dishonesty and undermine student learning, OpenAI on Tuesday unveiled its new AI Text Classifier.
According to OpenAI, the tool can help evaluate whether someone is attempting to pass off machine-generated language as having been written by a human.
Although using the classifier requires a free OpenAI account, it is a rather straightforward tool. You simply paste text into a box and press a button, and the classifier reports whether the text is very unlikely, unlikely, unclear if it is, possibly, or likely to have been produced by artificial intelligence.
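Under the hood, a classifier like this typically produces a numeric likelihood score, which is then bucketed into verdict labels such as the five listed above. The sketch below illustrates that bucketing step in Python; the threshold values are assumptions chosen for illustration, not figures published by OpenAI.

```python
# Illustrative sketch of mapping a classifier's AI-likelihood score
# to the five verdict labels the article describes.
# NOTE: the threshold values are hypothetical, not OpenAI's actual cutoffs.

def label_from_score(p: float) -> str:
    """Map an AI-likelihood score in [0, 1] to a verdict label."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if p < 0.10:
        return "very unlikely AI-generated"
    if p < 0.45:
        return "unlikely AI-generated"
    if p < 0.90:
        return "unclear if it is AI-generated"
    if p < 0.98:
        return "possibly AI-generated"
    return "likely AI-generated"


print(label_from_score(0.05))   # very unlikely AI-generated
print(label_from_score(0.99))   # likely AI-generated
```

Bucketing a score this way trades precision for readability: end users such as teachers see a plain-language verdict rather than a raw probability they would have to interpret themselves.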
According to OpenAI's press release, the model that powers the tool was trained on "pairs of human-written text and AI-written text on the same topic."
The Tool is Still A Work in Progress
Picture: Gaby Jones/BLOOMBERG
OpenAI warns that its new tool, like others already on the market, is not perfect. Jan Leike, leader of the OpenAI alignment team tasked with making its systems safer, stated that the mechanism for detecting AI-written material "is flawed and it will be wrong sometimes."
Because of that, Leike stated, "it shouldn't be the only factor considered when making judgments."
Additionally, the company says the tool will sometimes "confidently but mistakenly" classify human-written material as coming from an AI, particularly if it deviates significantly from the training data. The classifier, in other words, is still very much a work in progress.
Shortly after ChatGPT gained popularity, so did websites like GPTZero, developed by a student named Edward Tian to "identify AI plagiarism" in text created by the chatbot.
The new tool cannot effectively analyze inputs shorter than 1,000 characters, and OpenAI does not advise using it on languages other than English. The company added that AI-generated text can be lightly edited to evade detection, causing the classifier to miss material that was not primarily written by a human.
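The two stated constraints, a 1,000-character minimum and English-only input, amount to a simple pre-check before any text is submitted. A minimal sketch, with the English-detection step left as a hypothetical placeholder since the article does not say how the tool enforces it:

```python
# Minimal pre-check reflecting the constraints the article reports:
# at least 1,000 characters, English text only.
# The `language` parameter is a stand-in; in practice language would
# have to be detected or declared by the user.

MIN_CHARS = 1000  # minimum input length stated by OpenAI

def can_analyze(text: str, language: str = "en") -> bool:
    """Return True if the input meets the tool's stated constraints."""
    return language == "en" and len(text) >= MIN_CHARS


print(can_analyze("too short"))          # False: under 1,000 characters
print(can_analyze("x" * 1200))           # True
print(can_analyze("x" * 1200, "fr"))     # False: non-English input
```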
According to OpenAI policy researcher Lama Ahmad, "Like many other technologies, it's possible that one district will determine that it's not fit for usage in their classrooms.
"We don't really press them in a particular direction. We simply want to provide them with the knowledge they require to make the best decisions possible for themselves."
France’s digital economy minister Jean-Noël Barrot recently met in California with OpenAI executives, including CEO Sam Altman, and a week later told an audience at the World Economic Forum in Davos, Switzerland that he was optimistic about the technology.
But the minister, a former professor at the Massachusetts Institute of Technology and the French business school HEC in Paris, said there are also difficult ethical questions that will need to be addressed.
According to him, it will become more crucial for users to comprehend the fundamentals of how these systems function so they are aware of any potential biases.