PBS NewsHour Interview: Racial Bias Found in a Health Care Algorithm

Hari Sreenivasan: A recent study published in Science magazine found significant racial bias in an algorithm used by hospitals across the nation to determine who needs follow-up care and who does not. Megan Thompson recently spoke with STAT's Shraddha Chakradhar, who explained what the researchers found.

Megan Thompson: Where exactly was this bias coming from?

Shraddha Chakradhar: There are two ways that we can identify how sick a person is. One is how many dollars are spent on that person. You know, the assumption being the more health care they come in for, the more treatment they get, the more dollars they spend, and presumably the sicker they are if they're getting all that treatment. And the other way is that we can measure actual biophysical things, you know, from lab tests, what kind of conditions or diseases they might have. So it seems like this algorithm was relying on the cost prediction definition. In other words, the more dollars an insurance company or a hospital was projected to spend on a patient, the sicker that patient was assumed to be. And that seems to be where the bias emerged.

Megan Thompson: I understand that the researchers then sort of re-ran the algorithm using a different type of data. Can you just tell us a little bit more about that? What did they use?

Shraddha Chakradhar: Yeah. So instead of relying on just costs to predict which patients are going to need follow-up care, they actually used biometric data, actual physiological data, and they saw a dramatic difference. In the previous model, the algorithm missed some 48,000 additional chronic conditions that African-American patients had. But when they rejiggered the algorithm to look more at actual biological data, they brought that down to about 7,700. So it was about an 84 percent reduction in bias.
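The 84 percent figure quoted above follows directly from the two counts in the interview. A minimal sketch of the arithmetic, using only the 48,000 and 7,700 numbers as stated (variable names are illustrative, not from the study):

```python
# Missed chronic conditions among African-American patients,
# as quoted in the interview, under each version of the algorithm.
missed_cost_based = 48_000     # original cost-prediction model
missed_biology_based = 7_700   # model rejiggered to use biological data

# Relative reduction in missed conditions, the proxy for "reduction in bias".
reduction = (missed_cost_based - missed_biology_based) / missed_cost_based
print(f"{reduction:.0%}")  # prints 84%
```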

Megan Thompson: Do we know anything about how the use of this biased algorithm actually affected patient care?

Shraddha Chakradhar: We don't actually know that. But as I mentioned, the algorithm is used by hospitals to help them flag patients who might need extra care in the coming year, whether it's, you know, an at-home nurse or making sure that they come in for regularly scheduled doctor's appointments. So we can only presume that if sicker black patients weren't being flagged accurately, they also missed out on this follow-up care.

Megan Thompson: Are there any consequences for the company, Optum, that was behind this algorithm?

Shraddha Chakradhar: Yes. So the day after the study came out, actually, New York regulators, the Department of Financial Services and the Department of Health, sent a letter to the company saying they were investigating this algorithm and that the company had to show that the way the algorithm worked wasn't in violation of anti-discrimination laws in New York. So that investigation is pending. One encouraging thing is that when the researchers did the study, they actually reached out to Optum and let them know about the discrepancy in the data. And the company was glad to be told about it. And I'm told that they're working on a fix. And the other encouraging thing is that the researchers have actually now launched an initiative to help other companies that may be behind similar algorithms fix any biases in their programs. So they've launched a program based out of the University of Chicago's Booth School to do this work on a pro bono basis, so that they can sort of catch these things in other algorithms that might be used across the country.

Megan Thompson: All right, Shraddha Chakradhar of STAT, thank you so much for being with us.

Shraddha Chakradhar: Thank you for having me.
