'Age of Danger' explores potential risks because AI doesn't understand rules of war
NPR's Steve Inskeep speaks with Thom Shanker, co-author of the book Age of Danger, about the threats artificial intelligence poses to national security.
LEILA FADEL, HOST:
The journalist Thom Shanker spent decades covering American wars and national security. He wrote for The New York Times. Now he has stepped away. And he tells our co-host, Steve Inskeep, that he's thinking about threats in the not-too-distant future.
THOM SHANKER: There's a lot of very scary things out there. And for 20 years, we make the case that this government focused on counterterrorism, zoom-like focus. And for those 20 years, we ignored lots of rising threats. And they are now upon us. And we are really unprepared. The system is unprepared. The public is unprepared. We haven't thought about some of these things.
STEVE INSKEEP, HOST:
Shanker co-authored a book with Andrew Hoehn called "Age Of Danger." It's a catalogue of threats that might keep people up at night if only they knew. He says national security professionals warn about diseases designed to destroy American crops. They think about low-lying naval bases that may be underwater in a few decades thanks to climate change. They think about ways to counter the advanced weaponry of China. Shanker does not advocate a bigger military budget to counter these threats, but he does argue the government needs to make smarter use of the resources it has. He says a prime example is the dangers of computers run by artificial intelligence.
SHANKER: Most of the public discussion of AI so far has been about, will it write my kid's homework? That's bad. Will it put law clerks out of a job? That's bad. Will it tell me to break up with my dog? That's bad. Will it compose symphonies - I don't know - if they're good symphonies. So those are real-world problems, Steve. But when you get into the national security space, it gets very, very scary what AI can do - autonomous weaponry that operates without a human in the kill chain.
INSKEEP: When you say autonomous weaponry, what do we mean, like a tank with no person that drives itself, finds its own target and shoots it?
SHANKER: It's already happening out there now. There's a military axiom that says speed kills. If you see first, if you assess first, if you decide first, if you act first, you have an incredible advantage. And this is already part of American military hardware, like the Patriot anti-missile batteries that we've given to Ukraine. Incoming missiles, you really don't have time for a human to get his iPad out and work out trajectories and all that. So they're programmed to respond without a human doing very much. It's called eyes on, hands off.
INSKEEP: Does a human still pull the trigger or press the button in that case?
SHANKER: Certainly can. Absolutely. Absolutely. But sometimes, if all of the data coming in indicates truly it's an adversary missile, it will respond. And here's where it gets scary. As weapons get faster, like hypersonics, when they can attack at network speed, like cyberattacks, humans simply cannot be involved in that. So you have to program, you have to put your best human intellectual power into these machines and hope that they respond accordingly. But as we know in the real world, humans make mistakes. Hospitals get blown up. Innocents get killed. How do we prevent that human error from going into a program that allows a machine to defend us at network speed, far faster than a human can?
INSKEEP: I'm thinking about the way the United States and Russia - or in another context, perhaps, the United States and China - have their militaries aimed at each other and prepared to respond proportionally to each other. In a worst-case scenario, a nuclear attack might be answered by a nuclear attack. Is it possible that through these incredibly fast computers, we could get into a cycle where our computers are shooting at each other and escalating a war within minutes or seconds?
SHANKER: That's not where we are now. But that, of course, is the concern not only of real-world strategists, but of screenplay writers, like "Dr. Strangelove," those sorts of things.
INSKEEP: I was going to ask you if you had seen "Dr. Strangelove." Clearly, you have.
SHANKER: You should ask me how many times I've seen "Dr. Strangelove."
INSKEEP: Let's describe - I don't think we're giving away too much - the machine that turns out to be the big reveal in "Dr. Strangelove." What is the doomsday machine?
SHANKER: Well, the Kremlin leader has ordered a machine created that if the Soviet Union is ever attacked, then the entire Soviet arsenal would be unleashed on the adversary. And in some ways, you can make the case that is a deterrent because no matter who attacks with one missile or 1,000, the response will be overwhelming.
(SOUNDBITE OF FILM, "DR. STRANGELOVE OR: HOW I LEARNED TO STOP WORRYING AND LOVE THE BOMB")
PETER SELLERS: (As Dr. Strangelove) Because of the automated and irrevocable decision-making process, which rules out human meddling, the doomsday machine is terrifying and simple to understand and completely credible and convincing.
GEORGE C SCOTT: (As General Turgidson) Gee, I wish we had one of them doomsday machines, Stainsey.
SHANKER: But the joke of the movie is they were going to announce it on the Soviet leader's birthday the following week. So the world doesn't know that this deterrent system is set up. And basically, Armageddon is assured.
INSKEEP: What's going to happen is there's going to be a random attack.
SHANKER: And the machine will respond, as programmed by humans. And the challenge today is, right now, most of the missiles fly over the pole. We have pretty good warning time. But as the Chinese in particular experiment with hypersonic weapons, we might not have the warning time. And there might someday be an argument to design systems that would respond autonomously to such a sneak hypersonic attack.
INSKEEP: When I think about the historic connections between the Pentagon, defense contractors and Silicon Valley and all the computing power that's in Silicon Valley, I would like to imagine that the United States is on top of this problem. Are they on top of this problem?
SHANKER: Some of the best minds are on top of it. And Andy Hoehn and I spoke to a number of people in the private sector, number of people in the public sector, in government. And they really are aware of the problem. They're asking questions like, how do we design artificial intelligence that has limits, that understands the laws of war, that understands the rules of retaliation, that won't assign itself a mission that the humans don't like? But even people like Eric Schmidt, you know, the founder of Google, who's spending a lot of time and money in this exact space, spoke to us on the record. He's extremely worried about these questions.
INSKEEP: It seems to me there are two interrelated problems. One is that an adversary like China gets ahead of the United States and can defeat the United States. But the other is that some effort by the United States gets out of control and we destroy ourselves.
SHANKER: That is a concern. And that could be your next screenplay. And the problem is you're raising a problem, Steve, that nobody has an answer for. I mean, how does one design AI with real intelligence and compassion and rationality, because at the end of the day, it's just ones and zeros?
INSKEEP: Thom Shanker is co-author of the new book "Age Of Danger." Thanks so much.
SHANKER: It was an honor to be here, Steve. Thank you so much for having me.
(SOUNDBITE OF SONG, "WE'LL MEET AGAIN")
VERA LYNN: (Singing) We'll meet again, don't know where.