Overreliance on AI in medicine and beyond could erode critical skills: a new study finds that doctors spotted about 20% fewer abnormalities once their digital aids were taken away, echoing dangers already seen in aviation and the workplace.
- The Study’s Shocking Revelation: Endoscopists exposed to AI assistance during colonoscopies saw their independent adenoma detection rates fall from 28.4% to 22.4%, a roughly 20% relative decline, highlighting a potential overdependence on technology that could compromise patient safety.
- Broader Implications Across Industries: From aviation disasters like Air France Flight 447 to workplace productivity pitfalls, emerging research warns that AI boosts efficiency but risks atrophying human judgment and skills if not managed carefully.
- Path Forward with Caution: Experts urge institutions to balance AI adoption with training to maintain core abilities, ensuring technology enhances rather than replaces human expertise in high-stakes fields.
In the rapidly evolving world of healthcare, artificial intelligence promises to revolutionize diagnostics and procedures, potentially saving countless lives through enhanced accuracy and efficiency. Yet a groundbreaking study published this month in The Lancet Gastroenterology & Hepatology uncovers a troubling downside: doctors who rely on AI tools may inadvertently dull their own skills. Conducted by Dr. Marcin Romańczyk, a gastroenterologist at H-T. Medical Center in Tychy, Poland, the research examined endoscopists performing colonoscopies and found that after they were introduced to AI assistance, their ability to spot abnormalities on their own dropped significantly, by about 20%. This raises alarms about overreliance on AI, not just in medicine but across critical sectors where human lives hang in the balance.
The study was a retrospective, observational analysis of four endoscopy centers in Poland participating in the ACCEPT (Artificial Intelligence in Colonoscopy for Cancer Prevention) trial. These centers implemented AI tools for polyp detection at the end of 2021, with colonoscopies assigned to be performed with or without AI according to a randomization based on the examination date. Researchers compared colonoscopy quality over two phases: the three months before and the three months after AI introduction. Focusing on diagnostic colonoscopies, they excluded cases involving intensive anticoagulant use, pregnancy, a history of colorectal resection, or inflammatory bowel disease. The primary metric was the adenoma detection rate (ADR) in non-AI-assisted procedures, analyzed via multivariable logistic regression to identify influencing factors.
The findings were stark. Between September 8, 2021, and March 9, 2022, 1,443 patients underwent non-AI colonoscopies: 795 before AI exposure and 648 after. The median patient age was 61 years, with 58.7% female and 41.3% male. Pre-AI, the ADR stood at 28.4% (226 of 795 cases), but it fell to 22.4% (145 of 648) post-exposure, an absolute difference of -6.0 percentage points (95% CI -10.5 to -1.6; p=0.0089). Multivariable analysis pinpointed AI exposure as a key independent factor (odds ratio 0.69, 95% CI 0.53–0.89), alongside patient sex (male vs. female: 1.78, 1.38–2.30) and age (≥60 vs. <60: 3.60, 2.74–4.72). Dr. Romańczyk expressed surprise at these results, speculating that endoscopists grew accustomed to the AI’s green boxes highlighting polyps, producing a “Google Maps effect”: much like drivers who once navigated with paper maps but now depend on GPS, they may have lost some of their sense of direction.
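For readers who want to sanity-check the headline figures, the short sketch below is not part of the study; it simply re-derives the detection rates, their absolute difference, and a crude, unadjusted odds ratio from the counts quoted above. The paper’s 0.69 odds ratio comes from a multivariable model adjusted for patient sex and age, so the crude value computed here lands slightly higher.

```python
# Illustrative arithmetic only: re-deriving the quoted figures from the published counts.
# The study's odds ratio (0.69) is adjusted via multivariable logistic regression,
# so the crude (unadjusted) value computed here differs slightly.

pre_detected, pre_total = 226, 795    # adenomas detected / non-AI colonoscopies before AI exposure
post_detected, post_total = 145, 648  # adenomas detected / non-AI colonoscopies after AI exposure

adr_pre = pre_detected / pre_total      # ~0.284 -> 28.4%
adr_post = post_detected / post_total   # ~0.224 -> 22.4%
abs_diff = adr_post - adr_pre           # ~ -0.060 -> -6.0 percentage points

# Crude odds ratio of adenoma detection after vs. before AI exposure
odds_pre = pre_detected / (pre_total - pre_detected)
odds_post = post_detected / (post_total - post_detected)
crude_odds_ratio = odds_post / odds_pre  # ~0.73 (the paper's adjusted figure is 0.69)

print(f"ADR before AI exposure: {adr_pre:.1%}")
print(f"ADR after AI exposure:  {adr_post:.1%}")
print(f"Absolute difference:    {abs_diff:+.1%}")
print(f"Crude odds ratio:       {crude_odds_ratio:.2f}")
```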
This isn’t an isolated concern in medicine. Romańczyk draws parallels to traditional medical training, where doctors learned from books, mentors, and hands-on observation. “We were taught medicine from books and from our mentors. We were observing them. They were telling us what to do,” he explained. “And now there’s some artificial object suggesting what we should do, where we should look, and actually we don’t know how to behave in that particular case.” The study didn’t collect data on the exact reasons for the decline, as the outcome was unexpected, but it suggests a behavioral shift: specialists might have become so tuned to AI cues that without them, they overlooked subtle abnormalities.
Zooming out, the proliferation of AI in workplaces carries lofty promises. Goldman Sachs predicted last year that the technology could boost productivity by up to 25%, transforming industries from healthcare to finance. However, emerging research highlights pitfalls. A study from Microsoft and Carnegie Mellon University earlier this year surveyed knowledge workers and found that while AI sped up tasks, it reduced critical engagement, leading to atrophied judgment skills. Romańczyk’s work adds to this narrative, questioning whether humans can integrate AI without compromising their innate abilities.
The dangers of overreliance on automation are vividly illustrated in aviation, where similar issues have led to tragedies. The 2009 crash of Air France Flight 447, which plummeted into the Atlantic Ocean killing all 228 aboard, stemmed from autopilot disconnection amid sensor failures. Pilots, overly dependent on automated systems, failed to manually correct the aircraft’s path. As William Voss, president of the Flight Safety Foundation, noted at the time, “We are seeing a situation where we have pilots that can’t understand what the airplane is doing unless a computer interprets it for them.” The problem wasn’t unique to Air France or Airbus; it was an industry-wide training challenge, underscoring the need for humans to retain manual skills even as automation advances.
Lynn Wu, an associate professor at the University of Pennsylvania’s Wharton School, emphasizes learning from such histories. “What is important is that we learn from this history of aviation and the prior generation of automation, that AI absolutely can boost performance,” she told Fortune. “But at the same time, we have to maintain those critical skills, such that when AI is not working, we know how to take over.” Institutions bear the responsibility to implement checks and balances, ensuring AI adoption includes robust training to preserve human expertise.
Dr. Romańczyk doesn’t advocate abandoning AI; instead, he sees it as an inevitable part of life. “AI will be, or is, part of our life, whether we like it or not,” he said. “We are not trying to say that AI is bad and [to stop using] it. Rather, we are saying we should all try to investigate what’s happening inside our brains, how we are affected by it? How can we actually effectively use it?” Wu echoes this, warning that AI trains on human data—if our skills degrade, so will AI’s effectiveness. “Once we become really bad at it, AI will also become really bad,” she cautioned. “We have to be better in order for AI to be better.”
As AI integrates deeper into high-stakes fields, this study serves as a wake-up call. Balancing technological innovation with human skill preservation isn’t just prudent—it’s essential for safety and progress. By fostering awareness and adaptive training, professionals can harness AI’s power without letting it erode the very expertise that makes them indispensable.