Now: the fears around the development of artificial intelligence.
Computer superintelligence is a long, long way from the stuff of sci-fi movies, but several high-profile leaders and thinkers have been worrying quite publicly about what they see as the risks to come.
Our economics correspondent, Paul Solman, explores that. It's part of his weekly series, Making Sense.
I want to talk to you about the greatest scientific event in the history of man.
Are you building an A.I.?
A.I., artificial intelligence.
Do you think I might be switched off?
It's not up to me.
Why is it up to anyone?
I think the development of full artificial intelligence could spell the end of the human race.
And just this week, Tesla and SpaceX entrepreneur Elon Musk told the National Governors Association:
A.I. is a fundamental existential risk for human civilization. And I don't think people fully appreciate that.
At Oxford University's Future of Humanity Institute, founding director Nick Bostrom leads a team trying to figure out how best to invest in, well, the future of humanity.
We are in this very peculiar situation of looking back at the history of our species, 100,000 years old,
and now finding ourselves just before the threshold to what looks like it will be this transition to some post-human era of superintelligence that can colonize the universe, and then maybe last for billions of years.
For many years, philosopher Bostrom has been perhaps the most prominent thinker about the benefits and dangers to humanity of what he calls superintelligence.
Once there is superintelligence, the fate of humanity may depend on what that superintelligence does.
There are plenty of ways to invest in humanity, he says, giving money to anti-disease charities, for example.
But Bostrom thinks longer-term, about investing to lessen existential risks, those that threaten to wipe out the human species entirely.
Global warming might be one. But plenty of other people are worrying about that, he says. So, he thinks about other risks.
What are the greatest of those risks?
The greatest existential risks arise from certain anticipated technological breakthroughs that we might make,
in particular, machine superintelligence, nanotechnology, and synthetic biology, fundamentally because we don't have the ability to uninvent anything that we invent.
We don't, as a human civilization, have the ability to put the genie back into the bottle. Once something has been published, then we are stuck with that knowledge.
So Bostrom wants money invested in how to manage A.I.
Specifically on the question, if and when in the future you could build machines that were really smart, maybe superintelligent, smarter than humans,
how could you then ensure that you could control what those machines do, that they were beneficial, that they were aligned with human intentions?
How likely is it that machines would develop basically a mind of their own, which is what you're saying, right?
I do think that advanced A.I., including superintelligence, is a sort of portal through which humanity will have passage, assuming we don't destroy ourselves prematurely in some other way.
Right now, the human brain is where it's at. It's the source of almost all of the technologies we have.
I'm relieved to hear that. And the complex social organization we have. Right.
It's why the modern condition is so different from the way that the chimpanzees live.
It's all through the human brain's ability to discover and communicate.
But there is no reason to think that human intelligence is anywhere near the greatest possible level of intelligence that could exist, that we are sort of the smartest possible species.
I think, rather, that we are the stupidest possible species that is capable of creating technological civilization.
For the PBS NewsHour, this is economics correspondent Paul Solman, reporting from Oxford, England.