
NPR (National Public Radio): Sam Harris: What Happens When Humans Develop Super Intelligent AI?

Published: 2017-09-18

 

GUY RAZ, HOST:

On the show today, Future Consequences, how our decisions about science and technology today will impact our future tomorrow. And in a lot of ways, one aspect of the future has already been imagined for us.

(SOUNDBITE OF FILM, "TERMINATOR 2")

LINDA HAMILTON: (As Sarah Connor) Three-billion human lives ended on August 29th, 1997.

RAZ: I mean, this has been part of our culture for decades.

(SOUNDBITE OF FILM, "TERMINATOR 2")

HAMILTON: (As Sarah Connor) The survivors of the nuclear fire called the war Judgment Day. They lived only to face a new nightmare, the war against the machines.

RAZ: OK so you might recognize this scene. It's Sarah Connor in "Terminator 2" describing how artificial intelligence sparked a nuclear attack and then waged war against the surviving humans, which is something that should terrify all of us, right?

SAM HARRIS: Yeah, and what especially worries me about artificial intelligence is that I'm freaked out by my inability to marshal the appropriate emotional response.

RAZ: Which should be what?

HARRIS: Oh, I think potentially it's - it's the most worrisome future possible because we're talking about the most powerful possible technology.

RAZ: This is Sam Harris.

HARRIS: I am a writer, and podcaster (ph), and neuroscientist and also armchair philosopher.

RAZ: And Sam says our inability to react to this future with urgency poses a big problem.

HARRIS: The quote from Stuart Russell, the computer scientist at Berkeley, is, imagine we received a communication from an alien civilization which said, people of Earth, we will arrive on your planet in 50 years. Get ready. Right, now just imagine that. Now, that is the circumstance we are in, fundamentally. We're talking about a seeming inevitability that we will produce superhuman intelligence - intelligence, which, once it becomes superhuman, then it becomes the engine of its own improvements. Then there's really kind of a just a runaway effect where we can't even imagine how much better it could be than we are.

RAZ: Sam picks up this idea from the TED stage.

(SOUNDBITE OF TED TALK)

HARRIS: At a certain point, we will build machines that are smarter than we are. And once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an intelligence explosion - that the process could get away from us. Now, this is often caricatured as a fear that armies of malicious robots will attack us, but that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
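Good's runaway effect can be caricatured in a few lines of code. This is my own toy illustration, not anything from the talk: a system whose capability gain at each self-improvement cycle is proportional to its current capability, so even modest per-step gains compound quickly.

```python
# Toy caricature of I. J. Good's "intelligence explosion": each
# generation designs a successor slightly better than itself, so
# capability compounds geometrically rather than growing linearly.
# The 10% gain_rate is an arbitrary illustrative assumption.
def run_generations(capability: float, gain_rate: float, generations: int) -> list[float]:
    """Return capability after each self-improvement cycle."""
    history = [capability]
    for _ in range(generations):
        capability *= 1 + gain_rate  # successor improves on its designer
        history.append(capability)
    return history

trajectory = run_generations(capability=1.0, gain_rate=0.1, generations=50)
print(f"after 50 cycles: {trajectory[-1]:.1f}x baseline")  # ~117x baseline
```

The point of the sketch is only the shape of the curve: a constant fractional improvement per cycle yields exponential growth, which is why the process "could get away from us" even if no single step is dramatic.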

Just think about how we relate to ants, OK? We don't hate them. We don't go out of our way to harm them. In fact, sometimes, we take pains not to harm them. We just - we step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, we annihilate them without a qualm. The concern is that we will one day build machines that could treat us with similar disregard. It's crucial to realize that the rate of progress doesn't matter - any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going. So we will do this if we can. The train is already out of the station, and there's no brake to pull.

(SOUNDBITE OF MUSIC)

RAZ: You believe that it is inevitable that at some point, we humans will create the technology to more or less replicate humans.

HARRIS: Yeah, I think that the moment you recognize that intelligence is platform independent, then you just have to give up this idea that there's any barrier to machines becoming superhuman. And when they do become superhuman in their abilities to design experiments, to engineer new machines, then they will be the best at doing that.

RAZ: But we're not there, right? I mean, at this point, there are a lot of things humans do that machines and robots just can't do.

HARRIS: Yeah, I think this notion of a goal of human-level intelligence is quite misleading. We're living with what's called narrow AI, which is superhuman in its area of application but is not at all general, and therefore, isn't nearly as good as the human mind is now. It - or - the best chess player in the world is a computer. And now the best go player in the world is a computer. The best facial recognition system is a computer. And yet, none of these systems is good at anything else, really.

So I think the goal is to have general intelligence which allows for kind of flexible learning across many different tasks and in many different environments. But once we have something that's truly generalizable and seamless, it won't be human-level. It will become superhuman, and it won't be human-level unless we consciously degrade its capacity to be human-level, and we would never do that.

RAZ: Right, right, because that's just not a normal human response, because when most people hear about AI or talk about it, it's with this incredible optimism. I mean, this technology will enable us to do things we can't do. We're going to be able to crunch numbers in ways that we can't right now and solve diseases through data. And machines are going to be able to do tasks and do them much faster. So, I mean, there is a sense of wonder about the future of artificial intelligence.

HARRIS: Yeah, I think that's appropriate because intelligence is our most important resource, and we want more of it. Just think about it in terms of, every problem you see in the world has some intelligent solution if a solution is compatible with the laws of physics, right? And so it's just, we want to figure out these solutions, and we want to improve human life.

But yeah, we are racing toward something we don't understand. And the scary thing is that many people thinking about the potential upside here don't seem too aware of the ways in which it could go wrong and in fact are - just deny that there's really anything worth thinking about here.

(SOUNDBITE OF TED TALK)

HARRIS: Imagine we just built a superintelligent AI - right? - that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, OK? So this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
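The "20,000 years per week" figure follows directly from the assumed million-fold speed-up, as a quick back-of-the-envelope check shows:

```python
# Arithmetic behind Harris's claim: if electronic circuits run about a
# million times faster than biochemical ones, one wall-clock week of
# machine thought corresponds to a million subjective weeks of
# human-level work. The 1e6 factor is the talk's stated assumption.
SPEEDUP = 1_000_000
WEEKS_PER_YEAR = 52

subjective_weeks = 1 * SPEEDUP                    # one real week of runtime
subjective_years = subjective_weeks / WEEKS_PER_YEAR
print(round(subjective_years))                    # 19231, i.e. roughly 20,000 years
```

So the round "20,000 years" is just 1,000,000 weeks converted to years, lightly rounded up.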

The other thing that's worrying, frankly, is that, imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we'd been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work powered by sunlight more or less for the cost of raw materials. OK, so we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

Now, what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a super intelligent AI? This machine would be capable of waging war, right, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead at a minimum. OK, so it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

(SOUNDBITE OF MUSIC)

RAZ: I mean, it seems like one big consequence of not taking this seriously, you know, is that we will essentially be giving up control over our own destiny as a species.

HARRIS: Yeah, well, potentially. And so that - obviously there are the people who say, well, we would never do that. We would never give up control, right?

RAZ: Right.

HARRIS: But there's just no guarantee of that, particularly when you imagine the power that awaits anyone, you know, any government, any research team, any individual ultimately, who creates a system that is superhuman in its abilities and general in its abilities. Well, then no one can really compete with you in anything. It's really hard to picture the intellectual and scientific inequality that could suddenly open up.

RAZ: I mean, based on, you know, the pace of this technology, I mean, how much longer will humans be the - sort of the dominant species on planet Earth?

HARRIS: Well, you know, I really - I have no idea. I just think that the pace of change suggests that the next 50 years could represent an astonishing epoch of change. Just look at the pace of change in our own lives in the last 20 years. You know, most of us have only been on the Internet for about 20 years. Twenty years ago, you had people saying the Internet is going to be a bust, right? I mean, there's no there there, right?

RAZ: (Laughter) Right, right.

HARRIS: No one's going to use this thing, right?

RAZ: Yes.

HARRIS: And look at the world we're in now. And this is a comparatively old kind of breakthrough. I mean, nothing of the last 20 years has been transformed fundamentally by artificial intelligence, so I think the next 50 years could change everything.

(SOUNDBITE OF TED TALK)

HARRIS: Now, unfortunately, I don't have a solution to this problem apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence, not to build it because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about a super intelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right. And even then, we will need to absorb the economic and political consequences of getting them right. But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is and we admit that we will improve these systems continuously, then we have to admit that we're in the process of building some sort of God. Now would be a good time to make sure it's a God we can live with. Thank you very much.

(APPLAUSE)

RAZ: Writer and neuroscientist and philosopher Sam Harris. He's also the host of the podcast "Waking Up With Sam Harris." You should definitely check that out. And you can watch all of Sam's talks at ted.com.

(SOUNDBITE OF SONG, "YOSHIMI BATTLES THE PINK ROBOTS PT. 1")

FLAMING LIPS: (Singing) Oh, Yoshimi, they don't believe me, but you won't let those robots defeat me. Yoshimi, they don't believe me, but you won't have those robots defeat me.

RAZ: Hey, thanks for listening to our episode Future Consequences this week. If you want to find out more about who was on it, go to ted.npr.org. To see hundreds more TED Talks, check out ted.com or the TED app. Our production staff at NPR includes Jeff Rogers, Sanaz Meshkinpour, Jinae West, Neva Grant and Rund Abdelfatah, with help from Daniel Shukin and Tony Liu. Our partners at TED are Chris Anderson, Colin Helms, Anna Phelan and Janet Lee.

If you want to let us know what you think about the show, please go to Apple Podcasts and write a review. Please also subscribe to our podcast at Apple Podcasts or however you get your podcasts. You can also write us directly. It's [email protected]. And you can follow us on Twitter. That's @TEDRadioHour. I'm Guy Raz, and you've been listening to ideas worth spreading right here on the TED Radio Hour from NPR.

