GUY RAZ, HOST:
On the show today, Future Consequences, how our decisions about science and technology today will impact our future tomorrow. And in a lot of ways, one aspect of the future has already been imagined for us.
(SOUNDBITE OF FILM, "TERMINATOR 2")
LINDA HAMILTON: (As Sarah Connor) Three billion human lives ended on August 29th, 1997.
RAZ: I mean, this has been part of our culture for decades.
(SOUNDBITE OF FILM, "TERMINATOR 2")
HAMILTON: (As Sarah Connor) The survivors of the nuclear fire called the war Judgment Day. They lived only to face a new nightmare, the war against the machines.
RAZ: OK so you might recognize this scene. It's Sarah Connor in "Terminator 2" describing how artificial intelligence sparked a nuclear attack and then waged war against the surviving humans, which is something that should terrify all of us, right?
SAM HARRIS: Yeah, and what especially worries me about artificial intelligence is that I'm freaked out by my inability to marshal the appropriate emotional response.
RAZ: Which should be what?
HARRIS: Oh, I think potentially it's - it's the most worrisome future possible because we're talking about the most powerful possible technology.
RAZ: This is Sam Harris.
HARRIS: I am a writer, and podcaster (ph), and neuroscientist and also armchair philosopher.
RAZ: And Sam says our inability to react to this future with urgency poses a big problem.
HARRIS: The quote from Stuart Russell, the computer scientist at Berkeley, is, imagine we received a communication from an alien civilization which said, people of Earth, we will arrive on your planet in 50 years. Get ready. Right, now just imagine that. Now, that is the circumstance we are in, fundamentally. We're talking about a seeming inevitability that we will produce superhuman intelligence - intelligence which, once it becomes superhuman, becomes the engine of its own improvements. Then there's really kind of just a runaway effect where we can't even imagine how much better it could be than we are.
RAZ: Sam picks up this idea from the TED stage.
(SOUNDBITE OF TED TALK)
HARRIS: At a certain point, we will build machines that are smarter than we are. And once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an intelligence explosion - that the process could get away from us. Now, this is often caricatured as a fear that armies of malicious robots will attack us, but that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
Just think about how we relate to ants, OK? We don't hate them. We don't go out of our way to harm them. In fact, sometimes, we take pains not to harm them. We just - we step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, we annihilate them without a qualm. The concern is that we will one day build machines that could treat us with similar disregard. It's crucial to realize that the rate of progress doesn't matter because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going. So we will do this if we can. The train is already out of the station, and there's no brake to pull.
(SOUNDBITE OF MUSIC)
RAZ: You believe that it is inevitable that at some point, we humans will create the technology to more or less replicate humans.
HARRIS: Yeah, I think that the moment you recognize that intelligence is platform independent, then you just have to give up this idea that there's any barrier to machines becoming superhuman. And when they do become superhuman in their abilities to design experiments, to engineer new machines, then they will be the best at doing that.
RAZ: But we're not there, right? I mean, at this point, there are a lot of things humans do that machines and robots just can't do.
HARRIS: Yeah, I think this notion of a goal of human-level intelligence is quite misleading. We're living with what's called narrow AI, which is superhuman in its area of application but is not at all general, and therefore, isn't nearly as good as the human mind is now. It - or - the best chess player in the world is a computer. And now the best go player in the world is a computer. The best facial recognition system is a computer. And yet, none of these systems is good at anything else, really.
So I think the goal is to have general intelligence which allows for kind of flexible learning across many different tasks and in many different environments. But once we have something that's truly generalizable and seamless, it won't be human-level. It will become superhuman, and it won't be human-level unless we consciously degrade its capacity to be human-level, and we would never do that.
RAZ: Right, right, because that's just not a normal human response, because when most people hear about AI or talk about it, it's with this incredible optimism. I mean, this technology will enable us to do things we can't do. We're going to be able to crunch numbers in ways that we can't right now and solve diseases through data. And machines are going to be able to do tasks and do them much faster. So, I mean, there is a sense of wonder about the future of artificial intelligence.
HARRIS: Yeah, I think that's appropriate because intelligence is our most important resource, and we want more of it. Just think about it in terms of, every problem you see in the world has some intelligent solution if a solution is compatible with the laws of physics, right? And so it's just, we want to figure out these solutions, and we want to improve human life.
But yeah, we are racing toward something we don't understand. And the scary thing is that many people thinking about the potential upside here don't seem too aware of the ways in which it could go wrong and in fact just deny that there's really anything worth thinking about here.
(SOUNDBITE OF TED TALK)
HARRIS: Imagine we just built a superintelligent AI - right? - that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, OK? So this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
The other thing that's worrying, frankly, is that, imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we'd been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work powered by sunlight more or less for the cost of raw materials. OK, so we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.
Now, what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, right, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead at a minimum. OK, so it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
(SOUNDBITE OF MUSIC)
RAZ: I mean, it seems like one big consequence of not taking this seriously, you know, is that we will essentially be giving up control over our own destiny as a species.
HARRIS: Yeah, well, potentially. And so that - obviously there are the people who say, well, we would never do that. We would never give up control, right?
RAZ: Right.
HARRIS: But there's just no guarantee of that, particularly when you imagine the power that awaits anyone - you know, any government, any research team, ultimately any individual - who creates a system that is superhuman and general in its abilities. Well, then no one can really compete with you in anything. It's really hard to picture the intellectual and scientific inequality that could suddenly open up.
RAZ: I mean, based on, you know, the pace of this technology, I mean, how much longer will humans be the - sort of the dominant species on planet Earth?
HARRIS: Well, you know, I really - I have no idea. I just think that the pace of change suggests that the next 50 years could represent an astonishing epoch of change. Just look at the pace of change in our own lives in the last 20 years. You know, most of us have only been on the Internet for about 20 years. Twenty years ago, you had people saying the Internet is going to be a bust, right? I mean, there's no there there, right?
RAZ: (Laughter) Right, right.
HARRIS: No one's going to use this thing, right?
RAZ: Yes.
HARRIS: And look at the world we're in now. And this is a comparatively old kind of breakthrough. I mean, nothing of the last 20 years has been fundamentally transformed by artificial intelligence, so I think the next 50 years could change everything.
(SOUNDBITE OF TED TALK)
HARRIS: Now, unfortunately, I don't have a solution to this problem apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence, not to build it because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about a superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right. And even then, we will need to absorb the economic and political consequences of getting them right. But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, then we have to admit that we're in the process of building some sort of God. Now would be a good time to make sure it's a God we can live with. Thank you very much.
(APPLAUSE)
RAZ: Writer and neuroscientist and philosopher Sam Harris. He's also the host of the podcast "Waking Up With Sam Harris." You should definitely check that out. And you can watch all of Sam's talks at ted.com.
(SOUNDBITE OF SONG, "YOSHIMI BATTLES THE PINK ROBOTS PT. 1")
FLAMING LIPS: (Singing) Oh, Yoshimi, they don't believe me, but you won't let those robots defeat me. Yoshimi, they don't believe me, but you won't have those robots defeat me.
RAZ: Hey, thanks for listening to our episode Future Consequences this week. If you want to find out more about who was on it, go to ted.npr.org. To see hundreds more TED Talks, check out ted.com or the TED app. Our production staff at NPR includes Jeff Rogers, Sanaz Meshkinpour, Jinae West, Neva Grant and Rund Abdelfatah, with help from Daniel Shukin and Tony Liu. Our partners at TED are Chris Anderson, Colin Helms, Anna Phelan and Janet Lee.
If you want to let us know what you think about the show, please go to Apple Podcasts and write a review. Please also subscribe to our podcast at Apple Podcasts or however you get your podcasts. You can also write us directly. It's [email protected]. And you can follow us on Twitter. That's @TEDRadioHour. I'm Guy Raz, and you've been listening to ideas worth spreading right here on the TED Radio Hour from NPR.