Can AI Chatbots Learn to Be More Truthful?
Newly developed artificial intelligence (AI) systems have demonstrated the ability to perform at human-like levels in some areas. But one serious problem with the tools remains – they can repeatedly produce false or harmful information.
The development of such systems, known as "chatbots," has progressed greatly in recent months. Chatbots have shown the ability to interact smoothly with humans and produce complex writing based on short, written commands. Such tools are also known as "generative AI" or "large language models."
Chatbots are one of many different AI systems currently under development. Others include tools that can produce new images, video and music or can write computer programs. As the technology continues to progress, some experts worry that AI tools may never be able to learn how to avoid false, outdated or damaging results.
The term hallucination has been used to describe when chatbots produce inaccurate or false information. Generally, hallucination describes something that is created in a person's mind, but is not happening in real life.
Daniela Amodei is co-creator and president of Anthropic, a company that produced a chatbot called Claude 2. She told the Associated Press, "I don't think that there's any model today that doesn't suffer from some hallucination."
Amodei added that such tools are largely built "to predict the next word." With this kind of design, she said, there will always be times when the model gets information or context wrong.
Anthropic, ChatGPT-maker OpenAI and other major developers of such AI systems say they are working to make AI tools that make fewer mistakes. Some experts question how long that process will take or if success is even possible.
"This isn't fixable," says Professor Emily Bender. She is a language expert and director of the University of Washington's Computational Linguistics6 Laboratory. Bender told the AP she considers the general relationship between AI tools and proposed uses of the technology a "mismatch."
Indian computer scientist Ganesh Bagler has been working for years to get AI systems to create recipes for South Asian foods. He said a chatbot can generate misinformation in the food industry that could hurt a food business. A single "hallucinated" recipe element could be the difference between a tasty meal and a terrible one.
Bagler questioned OpenAI chief Sam Altman during an event on AI technology held in India in June. "I guess hallucinations in ChatGPT are still acceptable, but when a recipe comes out hallucinating, it becomes a serious problem," Bagler said.
Altman answered by saying he was sure developers of AI chatbots would be able to get "the hallucination problem to a much, much better place" in the future. But he noted such progress could take years. "At that point we won't still talk about these," Altman said. "There's a balance between creativity and perfect accuracy, and the model will need to learn when you want one or the other."
Other experts who have long studied the technology say they do not expect such improvements to happen anytime soon.
The University of Washington's Bender describes a language model as a system that has been trained on written data to "model the likelihood of different strings of word forms." Many people depend on a version of this technology whenever they use the "autocomplete" tool when writing text messages or emails.
The latest chatbot tools try to take that method to the next level by generating whole new passages of text. But Bender says the systems are still just repeatedly choosing the most predictable next word in a series. Such language models "are designed to make things up. That's all they do," she noted.
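To make that idea concrete, here is a minimal sketch of next-word prediction. It uses a toy bigram model – simple word-pair counts – in place of the large neural networks behind real chatbots, and the tiny training text is invented for illustration. Nothing here is the actual method used by ChatGPT or Claude; it only demonstrates the "choose the most predictable next word" loop that Bender describes.

```python
from collections import Counter, defaultdict

# Toy training text, invented for illustration. Real systems learn
# from vastly larger collections of text, but the core task is the
# same: learn which words tend to follow which.
corpus = (
    "the chef cooked the rice . "
    "the chef cooked the lentils . "
    "the chef burned the rice ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Repeatedly choose the single most predictable next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no known continuation; stop
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
# Prints: "the chef cooked the chef cooked the chef cooked"
```

Even this toy version produces fluent-looking strings of words with no notion of whether they are true. Scaled up enormously, that is the design Bender is pointing to when she says such models "are designed to make things up."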
Some businesses, however, are not so worried about the ways current chatbot tools generate their results. Shane Orlick is the head of Jasper AI, a marketing technology company. He told the AP, "Hallucinations are actually an added bonus." He explained that many chatbot users were pleased that the company's AI tool had "created takes on stories or angles that they would have never thought of themselves."
Words in This Story
generate – v. to produce something
inaccurate – adj. not correct or exact
context – n. all the facts, opinions, situations, etc. relating to a particular thing or event
mismatch – n. a situation when people or things are put together but are not suitable for each other
recipe – n. a set of instructions and ingredients for preparing a particular food dish
bonus – n. a pleasant extra thing
angle – n. a position from which something is looked at
truthful – adj. honest; telling the truth
remains – v. continues to exist or to be present
smoothly – adv. evenly and without problems
outdated – adj. no longer useful or modern; out of date
linguistics – n. the scientific study of language
noted – v. mentioned or pointed out
strings – n. series of words or other things arranged one after another
marketing – n. the business activity of promoting and selling products or services