
News

Mar 14, 2024

AI Chatbots' Tendency to Spout Falsehoods Poses Challenges

The problem of artificial intelligence (AI) chatbots such as ChatGPT generating false information is a growing concern for businesses, organizations, and individuals, affecting fields from psychotherapy to legal brief writing. Daniela Amodei, co-founder of Anthropic, the maker of the chatbot Claude 2, acknowledges that every model today suffers from some degree of “hallucination,” because these systems are designed primarily to predict the next word and sometimes do so inaccurately. Developers such as Anthropic and OpenAI, the maker of ChatGPT, are actively working to make their AI systems more truthful.

Experts in the field, however, suggest that the problem may never be fully eradicated. Linguistics professor Emily Bender argues that the technology is inherently mismatched with many of the use cases being proposed for it. Much is riding on the reliability of generative AI, which is projected to add $2.6 trillion to $4.4 trillion to the global economy; Google, for instance, is already pitching news-writing AI products to news organizations, a use for which accuracy is essential.

The implications of false information extend beyond written text. Computer scientist Ganesh Bagler, working with hotel management institutes in India, has been using AI systems to invent recipes. An inaccurate output can make a substantial difference to the taste and quality of a meal, underscoring the need for precision in generative AI.

While OpenAI CEO Sam Altman remains optimistic that AI systems’ tendency to produce false information can be reined in, skeptics such as Bender argue that improvements will not be enough. Language models are designed to predict sequences of words based on their training data, so the output they generate is essentially creative writing rather than verified fact, and these models tend to fail in subtle ways that are hard for readers to spot.
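A minimal sketch can make that point concrete. The toy “language model” below is purely illustrative (the tiny corpus and the function names are invented for this example, and real chatbots are vastly more sophisticated): it learns only which words tend to follow which in its training text, so it can produce fluent-sounding sentences that are statistically plausible yet factually wrong.

```python
# Illustrative toy only: a bigram "language model" that picks the next word
# purely from counts in its training text, with no notion of truth.
import random
from collections import Counter, defaultdict

# Hypothetical miniature training corpus for demonstration purposes.
training_text = (
    "the capital of france is paris . "
    "the capital of peru is lima . "
    "the capital of france is known for museums ."
)

# Count which word follows which in the training text.
bigrams = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    bigrams[current_word][next_word] += 1

def generate(start, length=8):
    """Repeatedly sample a likely next word -- fluent output, but unchecked against facts."""
    out = [start]
    for _ in range(length):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
# One possible output: "the capital of peru is paris ." -- plausible word-by-word, false as a whole.
```

The failure mode mirrors the article’s point at a miniature scale: each word is a reasonable continuation of the last, yet nothing in the process checks whether the resulting sentence is true.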

Despite the concerns, some companies see AI chatbots’ tendency to invent text as a feature rather than a flaw. Marketing firms such as Jasper AI use the technology’s creative outputs to generate fresh ideas for clients’ pitches. Even so, demand for accurate content remains high, and companies such as Google are working to address the issue.

Bill Gates and other proponents of AI are optimistic that models can be improved to better distinguish fact from fiction. Nonetheless, it remains to be seen whether AI-generated text can ever be made fully accurate.
