'Age of Danger' explores potential risks because AI doesn't understand rules of war

Transcript

NPR's Steve Inskeep speaks with Thom Shanker, co-author of the book Age of Danger, about the threats artificial intelligence poses to national security.

LEILA FADEL, HOST:

The journalist Thom Shanker spent decades covering American wars and national security. He wrote for The New York Times. Now he's stepped away. And he tells our co-host, Steve Inskeep, that he's thinking about threats in the not-too-distant future.

THOM SHANKER: There's a lot of very scary things out there. And for 20 years, we make the case that this government focused on counterterrorism, zoom-like focus. And for those 20 years, we ignored lots of rising threats. And they are now upon us. And we are really unprepared. The system is unprepared. The public is unprepared. We haven't thought about some of these things.

STEVE INSKEEP, HOST:

Shanker co-authored a book with Andrew Hoehn called "Age Of Danger." It's a catalogue of threats that might keep people up at night if only they knew. He says national security professionals warn about diseases designed to destroy American crops. They think about low-lying naval bases that may be underwater in a few decades thanks to climate change. They think about ways to counter the advanced weaponry of China. Shanker does not advocate a bigger military budget to counter these threats, but he does argue the government needs to make smarter use of the resources it has. He says a prime example is the dangers of computers run by artificial intelligence.

SHANKER: Most of the public discussion of AI so far has been about, will it write my kid's homework? That's bad. Will it put law clerks out of a job? That's bad. Will it tell me to break up with my dog? That's bad. Will it compose symphonies - I don't know - if they're good symphonies. So those are real-world problems, Steve. But when you get into the national security space, it gets very, very scary what AI can do: autonomous weaponry that operates without a human in the kill chain.

INSKEEP: When you say autonomous weaponry, what do we mean, like a tank with no person that drives itself, finds its own target and shoots it?

SHANKER: It's already happening out there now. There's a military axiom that says speed kills. If you see first, if you assess first, if you decide first, if you act first, you have an incredible advantage. And this is already part of American military hardware, like the Patriot anti-missile batteries that we've given to Ukraine. Incoming missiles, you really don't have time for a human to get his iPad out and work out trajectories and all that. So they're programmed to respond without a human doing very much. It's called eyes on, hands off.

INSKEEP: Does a human still pull the trigger or press the button in that case?

SHANKER: Certainly can. Absolutely. Absolutely. But sometimes, if all of the data coming in indicates truly it's an adversary missile, it will respond. And here's where it gets scary. As weapons get faster, like hypersonics, when they can attack at network speed, like cyberattacks, humans simply cannot be involved in that. So you have to program, you have to put your best human intellectual power into these machines and hope that they respond accordingly. But as we know in the real world, humans make mistakes. Hospitals get blown up. Innocents get killed. How do we prevent that human error from going into a program that allows a machine to defend us at network speed, far faster than a human can?

INSKEEP: I'm thinking about the way the United States and Russia - or in another context, perhaps, the United States and China - have their militaries aimed at each other and prepared to respond proportionally to each other. In a worst-case scenario, a nuclear attack might be answered by a nuclear attack. Is it possible that through these incredibly fast computers, we could get into a cycle where our computers are shooting at each other and escalating a war within minutes or seconds?

SHANKER: That's not where we are now. But that, of course, is the concern not only of real-world strategists, but of screenplay writers, like "Dr. Strangelove," those sorts of things.

INSKEEP: I was going to ask you if you had seen "Dr. Strangelove." Clearly, you have.

SHANKER: You should ask me how many times I've seen "Dr. Strangelove."

INSKEEP: Let's describe - I don't think we're giving away too much - the machine that turns out to be the big reveal in "Dr. Strangelove." What is the doomsday machine?

SHANKER: Well, the Kremlin leader has ordered a machine created that if the Soviet Union is ever attacked, then the entire Soviet arsenal would be unleashed on the adversary. And in some ways, you can make the case that is a deterrent because no matter who attacks with one missile or 1,000, the response will be overwhelming.

(SOUNDBITE OF FILM, "DR. STRANGELOVE OR: HOW I LEARNED TO STOP WORRYING AND LOVE THE BOMB")

PETER SELLERS: (As Dr. Strangelove) Because of the automated and irrevocable decision-making process, which rules out human meddling, the doomsday machine is terrifying and simple to understand and completely credible and convincing.

GEORGE C SCOTT: (As General Turgidson) Gee, I wish we had one of them doomsday machines, Stainsey.

SHANKER: But the joke of the movie is they were going to announce it on the Soviet leader's birthday the following week. So the world doesn't know that this deterrent system is set up. And basically, Armageddon is assured.

INSKEEP: What's going to happen is there's going to be a random attack.

SHANKER: And the machine will respond, as programmed by humans. And the challenge today is, right now, most of the missiles fly over the pole. We have pretty good warning time. But as the Chinese in particular experiment with hypersonic weapons, we might not have the warning time. And there might someday be an argument to design systems that would respond autonomously to such a sneak hypersonic attack.

INSKEEP: When I think about the historic connections between the Pentagon, defense contractors and Silicon Valley and all the computing power that's in Silicon Valley, I would like to imagine that the United States is on top of this problem. Are they on top of this problem?

SHANKER: Some of the best minds are on top of it. And Andy Hoehn and I spoke to a number of people in the private sector, a number of people in the public sector, in government. And they really are aware of the problem. They're asking questions like, how do we design artificial intelligence that has limits, that understands the laws of war, that understands the rules of retaliation, that won't assign itself a mission that the humans don't like? But even people like Eric Schmidt, you know, the founder of Google, who's spending a lot of time and money in this exact space, spoke to us on the record. He's extremely worried about these questions.

INSKEEP: It seems to me there are two interrelated problems. One is that an adversary like China gets ahead of the United States and can defeat the United States. But the other is that some effort by the United States gets out of control and we destroy ourselves.

SHANKER: That is a concern. And that could be your next screenplay. And the problem is you're raising a problem, Steve, that nobody has an answer for. I mean, how does one design AI with real intelligence and compassion and rationality, because at the end of the day, it's just ones and zeros?

INSKEEP: Thom Shanker is co-author of the new book "Age Of Danger." Thanks so much.

SHANKER: It was an honor to be here, Steve. Thank you so much for having me.

(SOUNDBITE OF SONG, "WE'LL MEET AGAIN")

VERA LYNN: (Singing) We'll meet again, don't know where.
