Algorithms trained to interpret medical images can recognize the patient’s race — but how?

What’s new: Researchers from Emory University, MIT, Purdue University, and other institutions found that deep learning systems trained to interpret x-rays and CT scans were also able to identify their subjects as Asian, Black, or White.

What they found: Researchers trained various implementations of ResNet, DenseNet, and EfficientNet on nine medical imaging datasets in which examples were labeled Asian, Black, or White as reported by the patient. In tests, the models reliably recognized the patient’s race, although their performance varied somewhat depending on the type of scan, training dataset, and other variables.

  • The models were pretrained on ImageNet and fine-tuned on commonly used datasets of chest, limb, breast, and spinal scans (a minimal sketch of this setup follows the list below).
  • The ResNet identified the patient’s race most accurately: 80 to 97 percent of the time.
  • The authors tried to determine how the models learned to differentiate races. Factors like body mass, tissue density, age, and sex had little bearing, they found. The models were able to guess the patient’s race even when the images had been blurred.
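
To make the setup concrete, here is a minimal sketch of that kind of fine-tuning pipeline in PyTorch. The dataset path (chest_xrays/train), the choice of ResNet-34, and the hyperparameters are illustrative assumptions, not the authors’ actual configuration.

```python
# Hypothetical sketch: fine-tune an ImageNet-pretrained ResNet to predict
# self-reported race from chest x-rays. Paths, labels, and hyperparameters
# are illustrative, not taken from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

CLASSES = ["Asian", "Black", "White"]  # self-reported race labels

# Standard ImageNet preprocessing; grayscale x-rays are replicated to 3 channels.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes an ImageFolder-style layout: chest_xrays/train/<label>/*.png
train_set = datasets.ImageFolder("chest_xrays/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the classifier head.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Simple fine-tuning loop over the race labels.
model.train()
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

To mimic the blurring experiment, one could insert transforms.GaussianBlur(kernel_size=15) into the test-time preprocessing and check whether accuracy holds up on the degraded images.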

Behind the news: Racial bias has been documented in some medical AI systems.

  • In 2019, researchers found that an algorithm widely used by health care providers to guide treatment recommended extra care for Black patients half as often as it did for White patients.
  • Several studies have found that convolutional neural networks trained to detect skin cancer are less accurate on people with darker complexions.
  • Most ophthalmology datasets are made up of data from Chinese, European, and North American patients, which could make models trained on them to recognize eye diseases less reliable for groups that aren’t well represented in those regions.

Why it matters: The fact that diagnostic models recognize race in medical scans is startling. The mystery of how they do it only adds fuel to worries that AI could magnify existing racial disparities in health care.

We’re thinking: Neural networks can learn in ways that aren’t intuitive to humans. Finding out how medical imaging algorithms learn to identify race could help develop less biased systems — and unlock other mysteries of machine learning.
