Technology Industry Divided over How to Govern AI Development
Technology leaders have shown major support for laws to govern artificial intelligence use. At the same time, they are seeking to guarantee that any future AI rules work in their favor.
The technology industry is increasingly divided about how to govern AI. One side supports an "open science" approach to AI development; the other supports a closed approach.
Facebook parent Meta and IBM recently launched a new group called the AI Alliance. The group supports the "open science" method of AI development. On the other side are companies such as Google, Microsoft and ChatGPT-maker OpenAI.
Safety is at the heart of the debate. But, tech leaders are also arguing about who should profit from AI developments.
What is open-source AI?
The term "open-source" comes from a common method of building software in which the code is widely available at no cost. Anyone can examine and make changes to it.
Open-source AI involves more than just code. Computer scientists differ on how to define "open source." They say the definitions depend on which parts of the technology are publicly available and whether there are restrictions on use.
Some computer scientists use the term "open science" to describe the wider philosophy.
IBM and Meta lead the AI Alliance. Members include Dell, Sony, chipmakers AMD and Intel, and several universities and smaller AI companies. The alliance is coming together to say "that the future of AI is going to be built ... on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies," said Darío Gil of IBM. Gil made the comment in a discussion with The Associated Press.
Concerns about open-source AI
Part of the confusion about open-source AI is that the company that built ChatGPT and the image-generator DALL-E is called OpenAI. But its AI systems are closed.
"There are near-term and commercial incentives against open source," said Ilya Sutskever, OpenAI's chief scientist and co-founder, in a video with Stanford University in April.
But there is also a longer-term worry about the open development method. Sutskever noted one worry is that an AI system with powerful abilities could be too dangerous to make available to the public.
For example, he described a possible AI system that could learn how to start its own biological laboratory.
Even current AI models present risks. They could create disinformation campaigns, for example, said David Evan Harris of the University of California, Berkeley. Such campaigns could disrupt democratic elections, he said.
"Open source is really great in so many dimensions of technology," but AI is different, Harris said.
The Center for Humane Technology, a longtime critic of Meta's social media activities, is among the groups drawing attention to the risks of open-source or leaked AI models.
"As long as there are no guardrails in place right now, it's just completely irresponsible to be deploying these models to the public," said the group's Camille Carlton.
Benefits and dangers
An increasingly public debate has appeared over the benefits and dangers of an open-source approach to AI development.
Meta's chief AI scientist, Yann LeCun, this fall criticized OpenAI, Google, and Anthropic on social media for what he described as "massive corporate lobbying." LeCun argues that the companies are trying to write rules in a way that helps their high-performing AI models and could help them hold their power over the technology's development. The three companies, along with OpenAI's key partner Microsoft, have formed their own industry group called the Frontier Model Forum.
LeCun said on X, formerly Twitter, "Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture."
For IBM, the dispute feeds into a much longer competition that began before the AI boom. IBM was an early supporter of the open-source Linux operating system in the 1990s.
Chris Padilla leads IBM's international government affairs team. He suggested that the companies are trying to raise fears about open-source innovation, as they have in the past.
He added, "I mean, this has been the Microsoft model for decades, right? They always opposed open-source programs that could compete with Windows or Office. They're taking a similar approach here."
Words in This Story
innovation – n. the act of introducing new ideas, devices, or methods
incentive – n. something that encourages a person to do something
disrupt – v. to interrupt the normal progress or activity of something
dimension – n. a part of something
guardrail – n. a protective device along the side of a road that prevents vehicles from driving off the road (can be used metaphorically)
lobby – v. to try to influence government officials to make decisions for or against something