Two recent studies found that medical tools used to monitor COVID-19 symptoms (pulse oximeters and forehead thermometers) give less accurate readings for people with dark skin. These studies are concrete examples of a reality that Black technologists, patient advocates, and even health insurers have been stressing for years: Racial bias is endemic in tech development. In the case of health tech, that bias can be life-threatening.
Some stories about tech bias might appear trivial if you don’t dig deeper. A soap dispenser with a sensor that doesn’t recognize dark skin might seem like a minor inconvenience until you realize it uses the same infrared technology that fails to detect fevers in Black patients. A beauty pageant with an AI judge that “did not like people with dark skin” was the butt of jokes. But facial recognition technology used by law enforcement that is so poor at distinguishing Black faces that it falsely matched images of Black members of Congress with mugshots? That’s terrifying.

The biases encoded in these technologies can also lead to misdiagnosis and delayed treatment. One recent study found that an AI model designed to diagnose diseases and recommend treatment based on chest X-rays was less accurate for patients who are Black, Latino, female, or poor. And AI models trained to detect skin cancer were developed primarily on images of white patients, without a single data set from Africa.
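To make that chest X-ray finding concrete: disparities like these are typically surfaced by computing a model’s accuracy separately for each demographic group and comparing the results. Below is a minimal sketch of such a subgroup audit in Python; the dataframe, column names, and group labels are hypothetical illustrations, not data from the studies cited here.

```python
# Minimal sketch of a subgroup accuracy audit. All data, column names,
# and group labels below are hypothetical, not drawn from any study.
import pandas as pd

# Hypothetical model outputs: true diagnosis, model prediction, and a
# demographic attribute for each patient.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 0, 0, 0, 1, 1, 0],
    "group":  ["A", "A", "B", "B", "A", "B", "B", "A", "A", "B"],
})

# Accuracy computed separately for each group; a persistent gap between
# groups is the signal that a model underperforms for some patients.
per_group_accuracy = (
    results.assign(correct=results["y_true"] == results["y_pred"])
           .groupby("group")["correct"]
           .mean()
)
print(per_group_accuracy)
```

An audit like this only detects a disparity; fixing it requires tracing the gap back to the training data and the development process itself.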
Proponents of health technology believe that taking humans out of the equation will eliminate bias, and that artificial intelligence can succeed where human partiality fails. But if we acknowledge that everyone is susceptible to bias, programmers and engineers included, why should we expect the tech they develop to be free of it? Medical devices designed to improve health have made it from the development stage to hospital beds without anyone recognizing that the technology doesn’t work properly for potentially millions of patients. That’s more than an administrative failure; it’s a sign of deeply embedded bias and a lack of inclusivity at every level of the health tech field. In a podcast interview earlier this year, radiologist and data scientist Dr. Judy Gichoya emphasized that coding skills are not enough to overcome the bias inherent in health tech: “As we start to think about the ethical implications of these human-machine collaborations, we’ll need different minds from those we have right now in the workforce.”
Efforts to push bias out of tech are well underway. A grassroots organization is supporting up-and-coming Black AI researchers. The Mayo Clinic recently launched a platform to test AI models for bias. Pending federal legislation lays the groundwork to more closely regulate and monitor the impact of machine learning tech. In medicine, biased tech is the difference between catching cancer early and receiving a terminal diagnosis, between being mistakenly sent home from the hospital and receiving lifesaving treatment. We can’t take for granted that freedom from bias is the default; it has to be intentionally built into technology from inception to implementation.