Study cautions radiologists not to over-rely on AI tools for diagnosis

Researchers also found that physicians were more likely to trust an AI explanation if it pinpointed a specific area on an X-ray.

While artificial intelligence (AI) is revolutionising medicine, radiologists may over-rely on its advice when it highlights a specific part of an X-ray, according to a new study.

A team of US researchers recruited 220 physicians across multiple sites in the country and asked them to review chest X-rays alongside AI-generated advice.

Participants included radiologists as well as internal and emergency medicine physicians, who could accept, modify, or reject the AI's suggestions.

The study, published in the journal Radiology, explored how the type of AI advice, either local or global, and its accuracy affected a diagnosis.

A local explanation is when the AI highlights specific areas of interest in an X-ray, while a global explanation is when the AI provides images from similar past cases to show how it made the suggestion.

“We found that local explanations improved diagnostic accuracy and reduced interpretation time when the AI’s advice was correct,” Dr Paul H Yi, one of the study’s co-authors and director of intelligent imaging informatics at St Jude Children’s Research Hospital, told Euronews Health.

When the AI provided accurate advice, physicians given local explanations reached a diagnostic accuracy of 92.8 per cent, compared with 85.3 per cent for those given global explanations.

However, when the AI’s diagnosis was incorrect, diagnostic accuracy dropped to 23.6 per cent with local explanations and 26.1 per cent with global explanations.

“These findings emphasise the importance of carefully designing AI tools. Thoughtful explanation design is not just an add-on; it’s a pivotal factor in ensuring AI enhances clinical practice rather than introducing unintended risks,” Yi said.

‘Type of AI explanation’ can impact trust

An unexpected finding was how quickly physicians, both radiologists and non-radiologists, trusted local explanations, even when the AI was incorrect. 

“This reveals a subtle but critical insight: the type of AI explanation can shape trust and decision-making in ways users may not even realise,” he added.

He has several suggestions for mitigating this risk of “automation bias” – the human tendency to over-rely on automation.

He said physicians learn through years of training and repetition to follow a pattern or checklist.

“The idea is that it creates a routine. It minimises variation which can cause unexpected mistakes to happen,” he said.

However, introducing AI tools adds a new factor and can derail this routine. 

“We have to stick to our checklists and make sure we adhere to them. But I envision a future where our checklists are actually going to change to incorporate AI,” Yi said, adding that human-computer interaction should also be studied alongside factors such as stress and tiredness.
