Newly developed artificial intelligence (AI) systems have demonstrated the ability to perform at human-like levels in some areas. But one serious problem with the tools remains: they can repeatedly produce false or harmful information.
The development of such systems, known as “chatbots,” has progressed greatly in recent months. Chatbots have shown the ability to interact smoothly with humans and produce complex writing based on short, written commands. Such tools are also known as “generative AI” or “large language models.”
Chatbots are one of many different AI systems currently under development. Others include tools that can produce new images, video and music or can write computer programs. As the technology continues to progress, some experts worry that AI tools may never be able to learn how to avoid false, outdated or damaging results.
The term hallucination has been used to describe when chatbots produce inaccurate or false information. Generally, hallucination describes something that is created in a person’s mind, but is not happening in real life.
Daniela Amodei is co-founder and president of Anthropic, a company that produced a chatbot called Claude 2. She told the Associated Press, "I don’t think that there’s any model today that doesn’t suffer from some hallucination.”
Amodei added that such tools are largely built “to predict the next word.” With this kind of design, she said, there will always be times when the model gets information or context wrong.
Anthropic, ChatGPT-maker OpenAI and other major developers of such AI systems say they are working to make AI tools that make fewer mistakes. Some experts question how long that process will take or if success is even possible.
“This isn’t fixable,” says Professor Emily Bender. She is a language expert and director of the University of Washington’s Computational Linguistics Laboratory. Bender told the AP she considers the general relationship between AI tools and proposed uses of the technology a “mismatch.”
Indian computer scientist Ganesh Bagler has been working for years to get AI systems to create recipes for South Asian foods. He said a chatbot can generate misinformation in the food industry that could hurt a food business. A single “hallucinated” recipe element could be the difference between a tasty meal and a terrible one.
Bagler questioned OpenAI chief Sam Altman during an event on AI technology held in India in June. “I guess hallucinations in ChatGPT are still acceptable, but when a recipe comes out hallucinating, it becomes a serious problem,” Bagler said.
Altman answered by saying he was sure developers of AI chatbots would be able to get “the hallucination problem to a much, much better place" in the future. But he noted such progress could take years. “At that point we won’t still talk about these,” Altman said. “There’s a balance between creativity and perfect accuracy, and the model will need to learn when you want one or the other.”
Other experts who have long studied the technology say they do not expect such improvements to happen anytime soon.
The University of Washington’s Bender describes a language model as a system that has been trained on written data to “model the likelihood of different strings of word forms.” Many people depend on a version of this technology whenever they use the “autocomplete” tool when writing text messages or emails.
The latest chatbot tools try to take that method to the next level by generating whole new passages of text. But Bender says the systems are still just repeatedly choosing the most predictable next word in a series. Such language models "are designed to make things up. That’s all they do,” she noted.
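Bender’s description of “modeling the likelihood of different strings of word forms” can be illustrated with a toy sketch. The short Python example below builds a made-up, hypothetical word-prediction model from a few sentences and then picks the most likely next word. Real chatbots use neural networks trained on vast amounts of text, not simple counts like this, so this is only an illustration of the general idea.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus. Real systems train on enormous amounts of text.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word (a "bigram" model).
next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Notice that the model does not know whether “the cat sat” is true; it only knows which word is statistically most likely to come next. That is the gap Bender points to: a fluent prediction can still be false.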
Some businesses, however, are not so worried about the ways current chatbot tools generate their results. Shane Orlick is the head of Jasper AI, a marketing technology company. He told the AP, “Hallucinations are actually an added bonus." He explained many chatbot users were pleased that the company’s AI tool had "created takes on stories or angles that they would have never thought of themselves.”
I’m Bryan Lynn.
The Associated Press and Reuters reported this story. Bryan Lynn adapted the reports for VOA Learning English.
Words in This Story
generate – v. to produce something
inaccurate – adj. not correct or exact
context – n. all the facts, opinions, situations, etc. relating to a particular thing or event
mismatch – n. a situation when people or things are put together but are not suitable for each other
recipe – n. a set of instructions and ingredients for preparing a particular food dish
bonus – n. a pleasant extra thing
angle – n. a position from which something is looked at