
AI set to make medical scan reports twice as easy to understand for patients

University of Sheffield study shows AI can simplify radiology reports without compromising accuracy


Artificial intelligence could soon help patients better understand complex medical scan results, making them far easier to comprehend without losing clinical accuracy, according to a major new study by the University of Sheffield.

The research found that when radiology reports for X-rays, CT scans, and MRIs were rewritten using advanced AI systems such as ChatGPT, patients found them almost twice as easy to understand compared with the original versions. Analysis showed that the reading level dropped from “university level” to one more closely aligned with the comprehension of an 11–13-year-old school pupil.
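Reading-level comparisons like the one described above are typically made with standard readability formulas such as Flesch–Kincaid (the article does not state which metric the study used, so this is an assumption). A minimal sketch in Python, using an illustrative jargon-heavy report sentence and a hypothetical simplified rewrite:

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level: higher means harder to read.
    Roughly, 13+ corresponds to university-level text and 6-8 to
    the reading level of an 11-13-year-old."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Illustrative (invented) report text, not from the study:
jargon = "Mild cardiomegaly with bibasilar atelectasis is demonstrated."
simple = ("Your heart looks slightly enlarged. "
          "Small parts of your lungs are not fully open.")

# The jargon version scores well above university level;
# the simplified version scores near primary-school level.
print(flesch_kincaid_grade(jargon), flesch_kincaid_grade(simple))
```

The syllable counter is a rough heuristic; production readability tools (e.g. the `textstat` package) use dictionary-backed syllable counts, but the grade-level ordering between jargon and plain-language text comes out the same.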

The findings suggest that AI-assisted explanations could become a standard companion to medical reports, improving transparency and trust across healthcare systems, including the National Health Service (NHS).

Researchers reviewed 38 studies published between 2022 and 2025, covering more than 12,000 radiology reports simplified using AI. These rewritten reports were evaluated by patients, members of the public, and clinicians to assess both patient understanding and clinical accuracy.

Traditionally, radiology reports are written for doctors rather than patients. However, initiatives promoting patient-centred care, such as the NHS App, along with policies mandating greater transparency of medical records, have expanded patient access to these reports.

Lead author of the study, Dr Samer Alabed, Senior Clinical Research Fellow at the University of Sheffield and Honorary Consultant Cardio Radiologist at Sheffield Teaching Hospitals NHS Foundation Trust, said:

"The fundamental issue with these reports is that they are not written with patients in mind. They are often filled with technical jargon and abbreviations that can be easily misunderstood, leading to unnecessary anxiety, false reassurance, and confusion. Patients with lower health literacy or English as a second language are particularly disadvantaged. Clinicians frequently have to use valuable appointment time explaining report terminology instead of focusing on care and treatment. Even small time savings per patient could add up to significant benefits across the NHS."

While doctors reviewing these AI-simplified reports found that the vast majority were accurate and complete, around one percent contained errors, such as incorrect diagnoses. This shows that, while highly promising, the approach still requires careful oversight.

Of the 38 studies reviewed, none were conducted in the UK or within NHS settings, a gap that Dr Alabed says the research team now aims to address.

"This research has highlighted several key priorities. The most important is the need for real-world testing in NHS clinical workflows to properly assess safety, efficiency, and patient outcomes," he said.

"This includes human-oversight models, where clinicians review and approve AI-generated explanations before they are shared with patients. Our long-term goal is not to replace clinicians but to support clearer, kinder, and more equitable communication in healthcare," he added.

The research underscores the University of Sheffield’s ambition to translate ideas into real-world impact, reflecting a commitment to independent thinking and shared innovation in healthcare.
