NPR: Sam Harris: What Happens When Humans Develop Super Intelligent AI?

GUY RAZ, HOST:

On the show today, Future Consequences, how our decisions about science and technology today will impact our future tomorrow. And in a lot of ways, one aspect of the future has already been imagined for us.

(SOUNDBITE OF FILM, "TERMINATOR 2")

LINDA HAMILTON: (As Sarah Connor) Three-billion human lives ended on August 29th, 1997.

RAZ: I mean, this has been part of our culture for decades.

(SOUNDBITE OF FILM, "TERMINATOR 2")

HAMILTON: (As Sarah Connor) The survivors of the nuclear fire called the war Judgment Day. They lived only to face a new nightmare, the war against the machines.

RAZ: OK so you might recognize this scene. It's Sarah Connor in "Terminator 2" describing how artificial intelligence sparked a nuclear attack and then waged war against the surviving humans, which is something that should terrify all of us, right?

SAM HARRIS: Yeah, and what especially worries me about artificial intelligence is that I'm freaked out by my inability to marshal the appropriate emotional response.

RAZ: Which should be what?

HARRIS: Oh, I think potentially it's - it's the most worrisome future possible because we're talking about the most powerful possible technology.

RAZ: This is Sam Harris.

HARRIS: I am a writer, and podcaster, and neuroscientist, and also an armchair philosopher.

RAZ: And Sam says our inability to react to this future with urgency poses a big problem.

HARRIS: The quote from Stuart Russell, the computer scientist at Berkeley, is, imagine we received a communication from an alien civilization which said, people of Earth, we will arrive on your planet in 50 years. Get ready. Right? Now just imagine that. That is the circumstance we are in, fundamentally. We're talking about a seeming inevitability that we will produce superhuman intelligence - intelligence which, once it becomes superhuman, becomes the engine of its own improvement. Then there's really just a runaway effect where we can't even imagine how much better it could be than we are.

RAZ: Sam picks up this idea from the TED stage.

(SOUNDBITE OF TED TALK)

HARRIS: At a certain point, we will build machines that are smarter than we are. And once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician I. J. Good called an intelligence explosion - that the process could get away from us. Now, this is often caricatured as a fear that armies of malicious robots will attack us, but that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants, OK? We don't hate them. We don't go out of our way to harm them. In fact, sometimes, we take pains not to harm them. We just step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, we annihilate them without a qualm. The concern is that we will one day build machines that could treat us with similar disregard. It's crucial to realize that the rate of progress doesn't matter because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going. So we will do this if we can. The train is already out of the station, and there's no brake to pull.

(SOUNDBITE OF MUSIC)

RAZ: You believe that it is inevitable that at some point, we humans will create the technology to more or less replicate humans.

HARRIS: Yeah, I think that the moment you recognize that intelligence is platform independent, then you just have to give up this idea that there's any barrier to machines becoming superhuman. And when they do become superhuman in their abilities to design experiments, to engineer new machines, then they will be the best at doing that.

RAZ: But we're not there, right? I mean, at this point, there are a lot of things humans do that machines and robots just can't do.

HARRIS: Yeah, I think this notion of a goal of human-level intelligence is quite misleading. We're living with what's called narrow AI, which is superhuman in its area of application but is not at all general and, therefore, isn't nearly as good as the human mind is now. The best chess player in the world is a computer. And now the best Go player in the world is a computer. The best facial recognition system is a computer. And yet, none of these systems is good at anything else, really.

So I think the goal is to have general intelligence, which allows for flexible learning across many different tasks and in many different environments. But once we have something that's truly generalizable and seamless, it won't stay at human level. It will become superhuman; it would only remain human-level if we consciously degraded its capacity, and we would never do that.

RAZ: Right, right, because that's just not a normal human response. And when most people hear about AI or talk about it, it's with this incredible optimism. I mean, this technology will enable us to do things we can't do. We're going to be able to crunch numbers in ways that we can't right now and solve diseases through data. And machines are going to be able to do tasks and do them much faster. So, I mean, there is a sense of wonder about the future of artificial intelligence.

HARRIS: Yeah, I think that's appropriate because intelligence is our most important resource, and we want more of it. Just think about it in terms of, every problem you see in the world has some intelligent solution if a solution is compatible with the laws of physics, right? And so it's just, we want to figure out these solutions, and we want to improve human life.

But yeah, we are racing toward something we don't understand. And the scary thing is that many people thinking about the potential upside here don't seem too aware of the ways in which it could go wrong, and in fact just deny that there's really anything worth thinking about here.

(SOUNDBITE OF TED TALK)

HARRIS: Imagine we just built a superintelligent AI - right? - that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, OK? So this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?

The other thing that's worrying, frankly, is that, imagine the best-case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we'd been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work powered by sunlight more or less for the cost of raw materials. OK, so we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

Now, what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a super intelligent AI? This machine would be capable of waging war, right, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead at a minimum. OK, so it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
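
The 20,000-year and 500,000-year figures in this excerpt follow from simple arithmetic on the million-fold speedup Harris assumes. Here is a minimal back-of-the-envelope sketch in Python, assuming a flat 1,000,000x speedup and an average of 52.18 calendar weeks per year; the variable names are illustrative, not from the talk.

# Back-of-the-envelope check of the figures in the talk, assuming a flat
# 1,000,000x speedup of electronic circuits over biological ones.
SPEEDUP = 1_000_000      # assumed speed advantage of circuits over neurons
WEEKS_PER_YEAR = 52.18   # average calendar weeks in a year (assumed)

# One week of machine run time, expressed as years of human-level work.
week_equivalent_years = SPEEDUP / WEEKS_PER_YEAR
print(f"One week of run time ~ {week_equivalent_years:,.0f} human-years")  # ~19,164, which the talk rounds to 20,000

# A six-month head start over the competition, expressed the same way.
head_start_years = SPEEDUP * 0.5
print(f"Six months ahead ~ {head_start_years:,.0f} human-years")  # 500,000, as in the talk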

(SOUNDBITE OF MUSIC)

RAZ: I mean, it seems like one big consequence of not taking this seriously, you know, is that we will essentially be giving up control over our own destiny as a species.

HARRIS: Yeah, well, potentially. And so, obviously, there are the people who say, well, we would never do that. We would never give up control, right?

RAZ: Right.

HARRIS: But there's just no guarantee of that, particularly when you imagine the power that awaits anyone, you know, any government, any research team, any individual ultimately, who creates a system that is superhuman and general in its abilities. Well, then no one can really compete with you in anything. It's really hard to picture the intellectual and scientific inequality that could suddenly open up.

RAZ: I mean, based on, you know, the pace of this technology, I mean, how much longer will humans be the - sort of the dominant species on planet Earth?

HARRIS: Well, you know, I really - I have no idea. I just think that the pace of change suggests that the next 50 years could represent an astonishing epoch of change. Just look at the pace of change in our own lives in the last 20 years. You know, most of us have only been on the Internet for about 20 years. Twenty years ago, you had people saying the Internet is going to be a bust, right? I mean, there's no there there, right?

RAZ: (Laughter) Right, right.

HARRIS: No one's going to use this thing, right?

RAZ: Yes.

HARRIS: And look at the world we're in now. And this is a comparatively old kind of breakthrough. I mean, nothing in the last 20 years has been fundamentally transformed by artificial intelligence yet, so I think the next 50 years could change everything.

(SOUNDBITE OF TED TALK)

HARRIS: Now, unfortunately, I don't have a solution to this problem apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence, not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about a superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right. And even then, we will need to absorb the economic and political consequences of getting them right. But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, and we admit that we will improve these systems continuously, then we have to admit that we're in the process of building some sort of God. Now would be a good time to make sure it's a God we can live with. Thank you very much.

(APPLAUSE)

RAZ: Writer and neuroscientist and philosopher Sam Harris. He's also the host of the podcast "Waking Up With Sam Harris." You should definitely check that out. And you can watch all of Sam's talks at ted.com.

(SOUNDBITE OF SONG, "YOSHIMI BATTLES THE PINK ROBOTS PT. 1")

FLAMING LIPS: (Singing) Oh, Yoshimi, they don't believe me, but you won't let those robots defeat me. Yoshimi, they don't believe me, but you won't have those robots defeat me.

RAZ: Hey, thanks for listening to our episode Future Consequences this week. If you want to find out more about who was on it, go to ted.npr.org. To see hundreds more TED Talks, check out ted.com or the TED app. Our production staff at NPR includes Jeff Rogers, Sanaz Meshkinpour, Jinae West, Neva Grant and Rund Abdelfatah, with help from Daniel Shukin and Tony Liu. Our partners at TED are Chris Anderson, Colin Helms, Anna Phelan and Janet Lee.

If you want to let us know what you think about the show, please go to Apple Podcasts and write a review. Please also subscribe to our podcast at Apple Podcasts or however you get your podcasts. You can also write us directly. It's [email protected]. And you can follow us on Twitter. That's @TEDRadioHour. I'm Guy Raz, and you've been listening to ideas worth spreading right here on the TED Radio Hour from NPR.
