Medical AI models rely on shortcuts, can cause misdiagnosis
New York, June 1
Artificial Intelligence (AI) models, like humans, have a tendency to look for shortcuts. In the case of AI-assisted disease detection, these shortcuts could lead to diagnostic errors if the models are deployed in clinical settings, researchers warn.
A team from the University of Washington in the US examined multiple models recently put forward as potential tools for accurately detecting Covid-19 from chest radiography, commonly known as chest X-rays.
The findings, published in the journal Nature Machine Intelligence, showed that, rather than learning genuine medical pathology, these models rely on shortcut learning to draw spurious associations between medically irrelevant factors and disease status.
As a result, the models ignored clinically significant indicators and relied instead on characteristics such as text markers or patient positioning that were specific to each dataset to predict whether someone had Covid-19.
“A physician would generally expect a finding of Covid-19 from an X-ray to be based on specific patterns in the image that reflect disease processes,” said co-lead author Alex DeGrave, from UW’s Medical Scientist Training Programme.
“But rather than relying on those patterns, a system using shortcut learning might, for example, judge that someone is elderly and thus infer that they are more likely to have the disease because it is more common in older patients.
“The shortcut is not wrong per se, but the association is unexpected and not transparent. And that could lead to an inappropriate diagnosis,” DeGrave said.
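To make the idea concrete, here is a toy sketch, entirely hypothetical and not drawn from the study's actual models or data, of how a classifier can latch onto a dataset-specific artefact (such as a text marker) that happens to track the label in its training data but carries no medical meaning elsewhere:

```python
# Hypothetical illustration of shortcut learning (not the study's models).
# Feature 0 stands in for genuine pathology; feature 1 stands in for a
# dataset-specific artefact (e.g. a text marker) that correlates with the
# label at the training hospital but not at an external hospital.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
labels = rng.integers(0, 2, n)

# Training hospital: pathology is a weak, noisy signal, while the
# artefact almost perfectly tracks the label (worst-case confounding).
pathology = labels + rng.normal(0, 1.5, n)   # noisy true signal
artefact = labels + rng.normal(0, 0.1, n)    # spurious, near-perfect proxy
X_train = np.column_stack([pathology, artefact])

# External hospital: the same pathology signal, but the artefact
# is now unrelated to disease status.
labels_ext = rng.integers(0, 2, n)
pathology_ext = labels_ext + rng.normal(0, 1.5, n)
artefact_ext = rng.normal(0, 0.1, n)         # no longer tracks labels
X_ext = np.column_stack([pathology_ext, artefact_ext])

model = LogisticRegression().fit(X_train, labels)
print("in-distribution accuracy:", model.score(X_train, labels))  # near 1.0
print("external accuracy:", model.score(X_ext, labels_ext))       # near chance
print("weights (pathology, artefact):", model.coef_)              # artefact dominates
```

In this sketch the model scores almost perfectly on data from its own "hospital" yet falls to near-chance accuracy elsewhere, mirroring the failure to generalise that the researchers describe.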
Shortcut learning is less robust than learning genuine medical pathology and usually means the model will not generalise well outside of its original setting, the researchers said.
This lack of robustness, combined with the typical opacity of AI decision-making, can leave such models prone to a condition known as “worst-case confounding,” owing to the scarcity of training data available for such a new disease.
This scenario increased the likelihood that the models would rely on shortcuts rather than learning the underlying pathology of the disease from the training data, the researchers noted.
–IANS