HARI SREENIVASAN: It was a stunning finding, even in a digital age where stories of all kind go viral. During the last three months of the presidential campaign, fake or false news headlines actually generated more engagement on Facebook than true ones. Facebook and other social media platforms were criticized for not doing enough to flag or dispute these posts.
Today, Facebook launched several new tools to flag and dispute what it calls the "worst of the worst" when it comes to clear lies. Those tools are essentially embedded in your individual feed.
Here's a bit of a video the company posted about how it will work.
NARRATOR: You may see an alert before you share some links that have been disputed by third-party fact checkers. You can then cancel or continue with the post. If you suspect a news story is fake, you can report it. It just takes a few taps. Your report helps us track and prevent fake news from spreading.
HARI SREENIVASAN: Let's learn more about this effort to detect and slow the spread of fake news, part of our occasional series on the subject. Will Oremus has been writing about this extensively for "Slate" and working on that site's own new tool for identifying false stories.
First, Will, let's talk a little bit about what Facebook announced today. How is it going to work?
WILL OREMUS, Slate: So, Facebook's approach to fake news has several components. One thing it's going to try to do is make it easier for users to report it when they see fake news in their feeds. The next thing they're going to do is they're going to take that information about stories that are being reported as fake, and they're going to use some software, run some algorithms and create a dashboard of stories that might be fake and give access to that dashboard to third-party fact-checking organizations. So, these are like Snopes or PolitiFact, FactCheck.org.
Those fact checkers are going to have their human editors evaluate some of the most viral of the stories that have been flagged as fake, and if they determine it is in fact a fake news story, Facebook is going to treat it differently. It's going to show it to fewer people in its feeds. It's going to make it go less viral and it's also going to give people a warning before they try to share that story, saying this story has been disputed. It will still let you share it. It's not censoring or filtering out anything. But it is downgrading it in the ranking algorithm and it is letting people know that this has been disputed.
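The report-then-review flow Oremus describes could be sketched roughly as follows. This is an illustrative sketch only: every name, threshold, and ranking weight here is hypothetical, not Facebook's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the flow described above: users flag stories,
# heavily flagged stories surface on a dashboard for third-party fact
# checkers, and confirmed hoaxes are downgraded (not removed).
# All names and numbers are illustrative assumptions.

@dataclass
class Story:
    url: str
    report_count: int = 0
    disputed: bool = False
    rank_weight: float = 1.0  # multiplier in a hypothetical feed-ranking score

def report(story: Story) -> None:
    """A user flags a story as potentially fake."""
    story.report_count += 1

def review_queue(stories, threshold: int = 100):
    """Dashboard of the most-reported stories for human fact checkers."""
    return sorted(
        (s for s in stories if s.report_count >= threshold),
        key=lambda s: s.report_count,
        reverse=True,
    )

def mark_disputed(story: Story) -> None:
    """Fact checkers confirmed a hoax: warn before sharing and downgrade
    its ranking, but do not censor or delete it."""
    story.disputed = True
    story.rank_weight *= 0.2  # shown to fewer people, still shareable

s = Story("http://example-hoax.test/clinton-arrest")
for _ in range(150):
    report(s)
queue = review_queue([s])
mark_disputed(queue[0])
print(s.disputed, s.rank_weight)  # True 0.2
```

The key design point, matching the interview: the confirmed hoax stays visible and shareable; only its reach and presentation change.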
HARI SREENIVASAN: So, Facebook is not the arbiter of the truth. There are third parties checking this for them, right?
WILL OREMUS: Yes, and Facebook has been incredibly reluctant to become the arbiter of what's true, for good reason. Facebook, the value of its business, depends on appealing to people on both sides, all across the political spectrum.
So, it doesn't want to be a media company. It has said this many times. What it is doing here is shrewd, I think. It is delegating the responsibility to respected, non-profit, third-party organizations whose whole job is to figure out what's true and what's not.
New tools will help Facebook users identify fake news
HARI SREENIVASAN: You have been covering this space for a while. You want to draw a distinction between what's fake news and what are just outright lies and conspiracies. There is a distinction.
WILL OREMUS: Yes, the term "fake news" is relatively new. A few years ago, if somebody said "fake news," you wouldn't necessarily know what they were talking about; maybe they were talking about a satire site like "The Onion" or "The Daily Show." It came into currency in recent months because of the rise of a particular type of thing, which is a story that's basically made up. It was very popular during the election season for people to – for hoaxsters to make up stories that played to people's political biases.
So, something like, you know, Hillary Clinton is about to be arrested by the New York Police Department for email crimes.
HARI SREENIVASAN: Yes.
WILL OREMUS: They would just make that up. They would publish it. And it would get shared widely on Facebook. Since then, the term has become applied — it has become a political football. And people call — you hear people on the right calling the "New York Times" fake news, people on the left saying Breitbart is fake news. But originally, it was actual hoaxes that were made up out of whole cloth.
HARI SREENIVASAN: Now, people have been trying to fix the fake news problem. There was a recent hack-a-thon, and some Princeton students came up with what they thought was a fix. You folks at "Slate" had actually worked on a tool. You guys just launched this, not coincidentally, on Monday.
Let's take a look at how this works. We're going to put this up on the screen here. So, if I come across a fake news story in my feed, and there's this big red banner saying, "This news story is fake. Here's how we know. Share the proof."
How do you know? How did you identify this as fake? This is the tool.
WILL OREMUS: Yes, that's right. So, what we wanted to do was not just flag stories as fake when they appear in your Facebook feed. We actually wanted to give users the power to do something about it, because — I mean, it's so frustrating, right? You try to be a good consumer of the media, you try to evaluate what's true and what's credible, but then you see friends and relatives sharing this stuff.
So, what we do is we actually provide a link to a reputable debunking of that particular story that will appear automatically. And then we prompt you to share that link with the person who posted the fake news so that they and all of their followers can see that that story is fake, or they can go to the debunking site and judge the evidence for themselves.
HARI SREENIVASAN: Now, there's a tool you can actually add to your browser. It's kind of an extension, a Chrome extension and a button that works there. We can look at other examples of stories as well.
Who is the arbiter of truth in your system? Who decided that this story was false, even though it looks just like an ABC news site?
WILL OREMUS: Yes, I mean, that's a good question, and this is really the trickiest question in this whole thing. This is going to be an issue for Facebook, too. I mean, if one of these fact-checking organizations says this story has some parts that are true and some parts that are false, is that a fake news story?
I think what we've done and in fact it seems what Facebook has done as well is to try to set a really high bar for what counts as fake. It's not just a story that might be misleading.
HARI SREENIVASAN: Yes.
WILL OREMUS: Or has a couple of factual errors in it. It's a story intentionally designed to mislead people and it's just — you know, it's a hoax, basically. So we have human editors who are going to be reviewing the posts that are flagged by our users as potentially fake, and they're going to be looking for, again, a reputable third-party site that has used evidence to debunk that. We're not going to be, you know, doing the debunking ourselves.
HARI SREENIVASAN: Can technology solve this problem? There is a recent Pew study saying 14 percent of people out there shared a fake news story, even after they knew it was fake.
WILL OREMUS: Yes. No, technology can't solve the whole problem. I think technology can be a part of the solution. And that's because it's not just a technological problem or just a human problem.
And there are human issues at work here in why fake news is shared. There's confirmation bias. There's the desire for something to be true. I mean, you want something to be true.
HARI SREENIVASAN: Yes.
WILL OREMUS: What's your incentive to go and check it out? But there is also a technological component, which is that Facebook in particular has had this leveling effect on the media, where a story from abcnews.com, which is a big, reputable news site, looks just the same in your Facebook feed as a story from abcnews.com.co, which is a hoax site designed to trick people.
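The abcnews.com versus abcnews.com.co problem Oremus mentions is a lookalike-domain trick, and a naive check for it can be sketched in a few lines. This is a toy illustration under stated assumptions: the "reputable" list is made up, and a real system would use the Public Suffix List rather than simple string matching.

```python
from urllib.parse import urlparse

# Illustrative check for "lookalike" hoax domains of the kind described
# above (abcnews.com.co impersonating abcnews.com). The reputable-domain
# list is a made-up example, not a real allowlist.
REPUTABLE = {"abcnews.com", "nytimes.com", "washingtonpost.com"}

def looks_spoofed(url: str) -> bool:
    """True if the hostname embeds a reputable domain but doesn't end in it,
    e.g. 'abcnews.com.co' contains 'abcnews.com' without ending in it."""
    host = urlparse(url).hostname or ""
    return any(rep in host and not host.endswith(rep) for rep in REPUTABLE)

print(looks_spoofed("http://abcnews.com.co/story"))  # True
print(looks_spoofed("http://abcnews.com/story"))     # False
```

This illustrates Oremus's point about the feed's leveling effect: the two URLs differ by three characters, and nothing in how Facebook renders them signals which one is the hoax site.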
And so, Facebook has created the conditions for this fake news to thrive. And that's why I think, you know, technology, whether it's Facebook or a tool like ours, technology can be part of the solution. But it has to be human, too.
HARI SREENIVASAN: Will Oremus from "Slate" — thanks so much.
WILL OREMUS: Thanks for having me.