The Future of Medical AI is... Racist?

Check out the referenced source here.

When you ask Siri to flip a coin or Google to generate a random number between 1 and 100, you expect the AI to be unbiased. If Siri always returned tails, or if Google always generated the number 52, we would call into question the legitimacy of the software. But this would never be the case, as the unbiased nature of these systems has been verified for years.

The same cannot be said when AI is introduced into more nuanced and complex tasks, such as those in healthcare. Last October, health services company Optum’s healthcare decision-making tool was found to be inadvertently racially biased. It “routinely let healthier white people into the [high-risk healthcare management] programmes ahead of less healthy black people.” Nor were the margins tight: “black patients were found to have 26.3% more chronic health conditions than equally ranked white patients.”

But why did this happen? Did the computer scientists who coded the program have a vendetta against minorities? Thankfully, no. Though the algorithm excluded race from its calculations, it did take healthcare costs into account. Because of structural inequalities in the US healthcare system, black patients spend about $1,800 less per year on healthcare than white patients with the same chronic conditions, so the system assumed they were healthier and would not need admittance into the program.
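To make the failure mode concrete, here is a minimal, hypothetical Python sketch of cost-as-proxy scoring. The scoring rule, patient records, and numbers are all invented for illustration; this is not Optum’s actual model or data.

```python
# Toy illustration of how a cost-based proxy can encode racial bias.
# Everything here is invented; it is not Optum's model or data.

def risk_score(annual_cost_usd):
    """Hypothetical score treating healthcare spending as a proxy for need."""
    return annual_cost_usd / 1000  # higher spend -> scored as "sicker"

# Two patients with an identical chronic-condition burden. Structural
# inequality means patient B's recorded spending is ~$1,800 lower per year.
patient_a = {"chronic_conditions": 5, "annual_cost_usd": 9000}
patient_b = {"chronic_conditions": 5, "annual_cost_usd": 7200}

print(risk_score(patient_a["annual_cost_usd"]))  # 9.0
print(risk_score(patient_b["annual_cost_usd"]))  # 7.2
# Patient B scores lower despite identical health needs, so B is less
# likely to be referred into the high-risk care management program.
```

Race never appears in the code, yet the output is racially skewed, because the proxy variable already carries the inequality.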

Inadvertent or not, the Optum case was far from the only one. “In a recent article in The New England Journal of Medicine a group of Harvard University researchers . . . reviewed the use of race correction in 13 clinical algorithms used in the US. They unearthed numerous examples of implicit racial bias that made non-white Americans less likely to receive appropriate care.”
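One widely discussed example of race correction is the eGFR estimate of kidney function. The sketch below follows the published MDRD study equation, which multiplies the result by a fixed coefficient for Black patients; it is included here as background illustration, not as a detail drawn from the article itself.

```python
# The MDRD eGFR equation, with its much-criticized race coefficient.
# Coefficients follow the published MDRD study equation.

def egfr_mdrd(serum_creatinine_mg_dl, age_years, is_female, is_black):
    """Estimated glomerular filtration rate (mL/min/1.73 m^2)."""
    egfr = 175 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if is_female:
        egfr *= 0.742
    if is_black:
        egfr *= 1.212  # the race "correction": same labs, higher reported eGFR
    return egfr

# Identical labs, different race: the Black patient's kidney function is
# reported ~21% higher, which can delay specialist referral or transplant
# eligibility that is triggered by low eGFR thresholds.
print(egfr_mdrd(1.4, 60, is_female=False, is_black=False))  # ~52
print(egfr_mdrd(1.4, 60, is_female=False, is_black=True))   # ~63
```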

So how do we fix this? If the problem continues unchecked, inadvertently racist healthcare software may become permanently rooted in the healthcare system, perpetuating the systemic inequalities that minorities already face. One solution is to train these tools on a diverse set of data. As Theator co-founder and CEO Tamir Wolf explains, “The data has to be . . . from thousands of hospitals . . . from tens of thousands of surgeons. It has to be with cases that went well and with cases that had errors and complications – you can’t weed these out because you need all of these examples in order to really understand situations and decision making, so that you can ultimately provide decision support that is really meaningful.”
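Diverse data helps most when it is paired with routine auditing. Below is a hypothetical Python sketch of the kind of check that exposed the Optum bias: comparing the actual condition burden of equally ranked patients across groups. The records and field names are invented for illustration.

```python
# Audit sketch: at each risk-score level, compare actual health burden
# across racial groups. All records below are made up for illustration.

from collections import defaultdict
from statistics import mean

patients = [
    # (risk_decile, race, n_chronic_conditions)
    (9, "white", 3.1), (9, "black", 4.0),
    (9, "white", 3.4), (9, "black", 4.2),
    (8, "white", 2.8), (8, "black", 3.6),
]

burden = defaultdict(list)
for decile, race, conditions in patients:
    burden[(decile, race)].append(conditions)

for decile in sorted({d for d, _ in burden}, reverse=True):
    w = mean(burden[(decile, "white")])
    b = mean(burden[(decile, "black")])
    # An unbiased score gives equally ranked patients a similar condition
    # burden; a persistent gap flags a biased proxy in the model.
    print(f"decile {decile}: white={w:.1f}, black={b:.1f}, gap={(b - w) / w:+.0%}")
```

An audit like this is cheap to run and model-agnostic: it never needs to inspect the algorithm’s internals, only its rankings.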

Though ambitious, this kind of careful review and testing is necessary. When a single accidental bias can mean thousands of minority patients unjustly receiving delayed care, or being denied it outright, it is our moral and human duty to struggle against it, regardless of the cost.
