expert reaction to observational study looking at detection rate of precancerous growths in colonoscopies by health professionals who perform them before and after the routine introduction of AI

An observational study published in The Lancet Gastroenterology & Hepatology looks at the risk of endoscopist deskilling after exposure to AI in colonoscopy.

 

Dr Catherine Menon, Principal Lecturer at the University of Hertfordshire’s Department of Computer Science, said:

“Although de-skilling resulting from AI use has been raised as a theoretical risk in previous studies, this study is the first to present real-world data that might indicate de-skilling arising from the use of AI in diagnostic colonoscopies. This would have implications for other areas of medicine, as any de-skilling effect would likely be observed more generally. There may be a risk that health professionals who become accustomed to using AI support will perform more poorly than they originally did if the AI support suddenly becomes unavailable, for example due to cyber-attacks or compromised IT systems.

“Although AI in medicine offers the potential for significant benefits such as improved diagnostic rates, this study suggests there are risks that come from over-reliance on it. All technology, including AI, can be compromised, and it is therefore important that health professionals retain their original diagnostic skills. If not, we risk poorer patient outcomes than before the AI was introduced.”

 

Prof Venet Osmani, Professor of Clinical AI and Machine Learning, Queen Mary University of London, said:

“There are several reasons to be cautious about concluding that AI alone is causing a deskilling effect in clinicians. The study’s findings might be influenced by other factors.

“For example, the number of colonoscopies performed nearly doubled after the AI tool was introduced, going from 795 to 1382. It’s possible that this sharp increase in workload, rather than the AI itself, could have led to a lower detection rate. A more intense schedule might mean doctors have less time or are more fatigued, which could affect their performance.

“Furthermore, the introduction of a new technology like AI often comes with other changes, such as new clinical workflows or a shift in how resources are used. These organisational changes, which the study did not measure, could also be affecting detection rates.

“Finally, the study suggests a drop in skill over just three months. This is a very short period, especially for a clinician with over 27 years of experience. It raises the question of whether a true loss of skill could happen so quickly, or if the doctors were simply changing their habits in a way that affected their performance when the AI was not available.”

 

Prof Allan Tucker, Professor of Artificial Intelligence in the Department of Computer Science, Brunel University of London, said:

“It looks like a solid piece of work that highlights what many AI researchers fear: automation bias. It is only one study, and its limitations are explicitly highlighted in the paper.

“In terms of limitations, the authors only looked at one AI system; there are many different systems and technologies, some of which may be better at supporting or explaining decisions than others. The healthcare professionals selected were clearly experienced and interested in taking part in the study; other less tech-savvy or less experienced professionals may behave differently. It is also worth noting that some major changes to the endoscopy department were made in the middle of the study, and the authors make it clear that randomised crossover trials are needed to make more robust claims.

“There have been other reported examples of automation bias [1] which highlight some of the risks in healthcare more generally.

“This is not unique to AI systems and is a risk with the introduction of any new technology, but the risk involved with AI systems is potentially more extreme. AI aims to imitate human decision-making, and this can place much more pressure on a human’s own decision-making than other technologies do. For example, they could feel under pressure to agree with the new technology. Imagine if a mistake is made and the human expert has to defend their over-ruling of an AI decision; they could see it as less risky to simply agree with the AI.

“The paper is particularly interesting because it indicates that AI still spots more cancers overall. The ethical question then is whether we trust AI over humans. Often, we expect there to be a human overseeing all AI decision-making, but if the human experts are putting less effort into their own decisions as a result of introducing AI systems, this could be problematic.

“One side of the argument would be ‘who cares, if more cancer is identified?’

“The other side may counter ‘but if the AI is biased and making its own mistakes then it could be making them at a massive scale if left unsupervised’.”

[1] https://academic.oup.com/jamia/article-abstract/24/2/423/2631492?redirectedFrom=fulltext

 

 

‘Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study’ by Budzyń et al. was published in The Lancet Gastroenterology & Hepatology at 23:30 UK time on Tuesday 12 August 2025.

 

DOI: 10.1016/S2468-1253(25)00133-5

 

 

Declared interests

Dr Catherine Menon: None

Prof Venet Osmani: None

Prof Allan Tucker: “My only other commitments beyond academia are advising the MHRA on the use of AI in healthcare (currently at no cost).”
