
TED Talk: How to Get Empowered, Not Overpowered, by AI (5)

Date: 2021-11-10 05:17 | Source: Internet | Contributor: nan

We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag,

我们发明了汽车,又一不小心搞砸了很多次--发明了红绿灯,安全带和安全气囊,

but with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think?

但对于更强大的科技,像是核武器和AGI,要去从错误中学习,似乎是个比较糟糕的策略,你们怎么看?

It's much better to be proactive rather than reactive;

事前的准备比事后的补救要好得多;

plan ahead and get things right the first time because that might be the only time we'll get.

提早做计划,争取一次成功,因为有时我们或许没有第二次机会。

But it is funny because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering."

但有趣的是,有时候有人告诉我。“麦克斯,嘘,别那样说话。那是勒德分子在制造恐慌。”

But it's not scaremongering. It's what we at MIT call safety engineering.

但这并不是制造恐慌。在麻省理工学院,我们称之为安全工程。

Think about it: before NASA launched the Apollo 11 mission,

想想看:在美国航天局(NASA)部署阿波罗11号任务之前,

they systematically thought through everything that could go wrong

他们全面地设想过所有可能出错的状况,

when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them.

毕竟是要把人类放在易燃易爆的燃料罐上,再将他们发射到一个无人能助的地方。

And there was a lot that could go wrong. Was that scaremongering? No.

可能出错的情况非常多,那是在制造恐慌吗?不是。

That was precisely the safety engineering that ensured the success of the mission,

那正是在做安全工程的工作,以确保任务顺利进行,

and that is precisely the strategy I think we should take with AGI.

这正是我认为处理AGI时应该采取的策略。

Think through what can go wrong to make sure it goes right.

想清楚什么可能出错,然后避免它的发生。

So in this spirit, we've organized conferences,

基于这样的精神,我们组织了几场大会,

bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial.

邀请了世界顶尖的人工智能研究者和其他有想法的专业人士,来探讨如何发展这样的智慧,从而确保人工智能对人类有益。

Our last conference was in Asilomar, California last year and produced this list of 23 principles

我们最近的一次大会去年在加州的阿西洛玛举行,我们得出了23条原则,

which have since been signed by over 1,000 AI researchers and key industry leaders,

自此已经有超过1000位人工智能研究者,以及核心企业的领导人参与签署,

and I want to tell you about three of these principles.

我想要和各位分享其中的三项原则。

