Picture: Got It AI
Techspace - ChatGPT has taken the tech world by storm by demonstrating the power of generative AI, which lets ordinary people direct an AI to produce a wide range of things, from computer programs to original compositions. Some of the results are genuinely impressive. But ChatGPT's error rate is one of its drawbacks.
In an interview with VentureBeat, Peter Relan, co-founder of the conversational AI firm Got It AI, argued that enterprise knowledge bases cannot afford conversational AI chatbots that are incorrect 15% to 20% of the time. Relan calls ChatGPT's incorrect responses "hallucinations." His firm developed a "truth checker" to determine when ChatGPT is hallucinating (producing made-up responses) while answering queries against a large collection of articles or content in a knowledge base.
He said this development enables ChatGPT-like experiences to be deployed without the risk of giving customers or staff members factually incorrect answers. By combining ChatGPT with the truth checker, he said, businesses can reliably deploy conversational AIs that draw on substantial knowledge bases, such as those used in customer care or for internal knowledge base queries.
Given a target domain of content (such as a sizable knowledge base or a collection of articles), the autonomous truth-checking AI uses a state-of-the-art large language model (LLM) based system to train itself for truth checking, independently and without human intervention.
When ChatGPT is then used to answer queries in a multi-turn chat dialog over that same domain of content, the truth checker verifies each response before it is displayed to the user. According to Relan, whenever a hallucination is detected, the answer is withheld and the user is instead given links to relevant articles that contain the solution.
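The flow Relan describes — generate a candidate answer, run it through the truth checker, and fall back to article links when a hallucination is flagged — can be sketched roughly as follows. The function names and the `truth_checker` interface here are hypothetical illustrations for the pattern, not Got It AI's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BotReply:
    """What the user finally sees: either a vetted answer or fallback links."""
    text: Optional[str]
    source_links: list

def answer_query(query, llm, truth_checker, knowledge_base) -> BotReply:
    """Gate an LLM answer behind a truth check, as described in the article.

    `llm`, `truth_checker`, and `knowledge_base` are stand-ins for whatever
    components implement generation, verification, and retrieval.
    """
    candidate = llm.generate(query)                      # ChatGPT-style draft answer
    relevant = knowledge_base.retrieve(query)            # articles from the target domain
    if truth_checker.is_supported(candidate, relevant):  # answer grounded in the articles?
        return BotReply(text=candidate, source_links=[a.url for a in relevant])
    # Hallucination detected: withhold the answer, link to the articles instead.
    return BotReply(text=None, source_links=[a.url for a in relevant])
```

The key design choice is that verification sits between generation and display, so an unverified answer never reaches the user.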
According to Chandra Khatri, a former Alexa Prize team leader and co-founder of Got It AI, "We tested our product with a dataset of 1,000+ articles across numerous knowledge sources, using multi-turn interactions with complex language forms such as co-reference, context, and topic switches. The ChatGPT LLM answered 20% of the queries inaccurately. The autonomous truth-checking AI identified 90% of those incorrect responses. We also gave the customer a straightforward user interface to further tune the truth-checking AI, catching residual errors and virtually eliminating wrong responses."
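Taken at face value, Khatri's figures imply a residual hallucination rate of roughly 2% of all queries: if ChatGPT answers 20% of queries incorrectly and the truth checker catches 90% of those errors, the remaining 10% of the 20% slip through. A quick check:

```python
chatgpt_error_rate = 0.20    # share of queries ChatGPT answered incorrectly (per Khatri)
checker_recall = 0.90        # share of those errors the truth checker flagged

# Errors reaching the user = ChatGPT's error rate * share the checker misses
residual = chatgpt_error_rate * (1 - checker_recall)
print(f"{residual:.0%}")     # about 2% of all queries
```

This is the "90% reduction in hallucinations" framing Relan uses later in the article, expressed as an absolute rate.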
According to Amol Kelkar, co-founder of Got It AI, "Our technology is a major advancement in autonomous conversational AI for 'known' domains of content, such as enterprise knowledge bases. We fully expect OpenAI, over time, to address the 'hallucination' problem in its base ChatGPT LLM models for 'open domain' conversations about any topic on the internet. This is not about fine-tuning, prompt engineering, or merely a UI layer. Thanks to this LLM-based AI system, we can supply scalable, precise, and fluent conversational AI to customers planning to use ChatGPT right away. Cost-effective truth checking of the generated responses is the core capability that bridges the gap between a research-and-development system and an enterprise-ready system."
Picture: Germany Detail Zero
Tracking Down Hallucinations
There is a long list of known errors, Relan stated. "Potentially up to 20%, to put it very roughly. It produces hallucinations and fabrications."
He noted that ChatGPT is an open forum where anyone can discuss anything: a math problem, Julius Caesar, video games. It has ingested the internet, but only through 2021. Got It AI doesn't attempt to verify all of that. It can, however, focus on a specific body of content, such as an enterprise knowledge base.
"We're not claiming to catch hallucinations for the entire internet," he added; not everything on the internet can be fact-checked. "This is a hallucination, we can tell. And we simply give you a response," Relan said. "We think we can offer a 90% decrease in ChatGPT's hallucinations, or more, right out of the box."
Others are also attempting to address the accuracy problem. However, Relan noted that given the size of the issue, it is challenging to achieve high precision figures. "Rather than giving you a ton of search results," he added, "we'll give you a good user experience so you can check the answer."
Got It AI is distributing its truth-checking AI through its Autonomous Articlebot offering, which uses the same OpenAI generative LLMs as ChatGPT. Within minutes of signing up, users can point Articlebot at a knowledge base or collection of articles, with no configuration required, and begin testing its contextual, multi-turn, enterprise-grade conversational AI for customer care, help desk, and agent-assist applications. Got It AI employs about 40 people and has raised about $15 million to date, including money from Relan's own venture fund.