Highlights of the Research
– Clinicians demonstrated a high level of critical engagement, successfully identifying and disregarding incorrect AI recommendations, suggesting that AI serves as a tool rather than a replacement for clinical judgment.
– The AI system most significantly influenced clinical behavior when it recommended against switching from intravenous to oral antibiotics, reinforcing conservative decision-making and risk-aversion in infectious disease management.
– Despite the emphasis on Explainable AI (XAI) in academic research, clinicians in this study accessed the AI-generated explanations in only 9% of cases, prioritizing rapid decision-making over reviewing the underlying logic.
– Technical usability was rated favorably (System Usability Scale score: 72.3), yet behavioral inertia and existing hospital infrastructure remain the most significant barriers to real-world implementation.
Background: The Clinical Imperative of Antimicrobial Stewardship
Antimicrobial resistance (AMR) is a silent pandemic that threatens the foundations of modern medicine. One of the primary pillars of antimicrobial stewardship is the timely transition from intravenous (IV) to oral (PO) antibiotic therapy. This transition—often referred to as IV-to-oral switch (IVOS)—is critical for reducing hospital-acquired infections, lowering healthcare costs, and improving patient comfort and mobility. However, in clinical practice, IVOS is frequently delayed due to clinician uncertainty, lack of standardized monitoring, and varying levels of experience.
Artificial Intelligence (AI) and clinical decision support systems (CDSSs) offer a potential solution by analyzing vast amounts of patient data to identify the optimal moment for switching. Yet, the translation of AI from experimental models to bedside tools is fraught with challenges. In the field of infectious diseases, where there is often no clear ‘ground truth’ and decisions are influenced by complex cultural and behavioral factors, understanding how AI influences the human clinician is as important as the accuracy of the algorithm itself.
Study Design: A Multimethod Evaluation of AI in Practice
This study, conducted by Bolton et al. and published in The Lancet Digital Health, employed a randomized, multimethod approach to evaluate the impact of an AI-driven CDSS on clinician decision-making. The research involved 42 healthcare professionals from 23 hospitals across the UK, including consultants and training-grade doctors from a range of specialties, notably infectious diseases.
The Three-Phase Methodology
The study was structured into three distinct parts to capture a holistic view of AI integration:
1. Semistructured Interviews: Researchers explored participants’ baseline experiences with antibiotic prescribing, their perceptions of AI, and the existing technological landscape in their hospitals.
2. Clinical Vignette Experiment: Using a custom web application, participants evaluated 12 patient cases. They were randomized to either the Standard of Care (SOC) group, which received typical clinical information, or the AI-CDSS group, which received the SOC data plus AI recommendations and explanations. For each case, participants decided whether to switch the patient to oral antibiotics or continue IV therapy.
3. Usability and Acceptance Questionnaires: Post-experiment, participants completed the System Usability Scale (SUS) and the Technology Acceptance Model (TAM) questionnaire to quantify their perceptions of the tool’s utility and ease of use (a brief sketch of how SUS scores are computed follows this list).
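For readers curious how a SUS figure such as the 72.3 reported in this study is derived, the following is a minimal Python sketch of the standard scoring rule: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the raw sum is multiplied by 2.5 to give a 0-100 score. The responses shown are illustrative placeholders, not data from the study.

    def sus_score(responses):
        """Compute a System Usability Scale score (0-100) from ten
        Likert responses coded 1 (strongly disagree) to 5 (strongly agree)."""
        if len(responses) != 10:
            raise ValueError("SUS requires exactly ten item responses")
        raw = 0
        for item_number, response in enumerate(responses, start=1):
            # Odd-numbered items are positively worded; even-numbered items are negatively worded.
            raw += (response - 1) if item_number % 2 == 1 else (5 - response)
        return raw * 2.5  # scale the 0-40 raw sum to a 0-100 score

    # Illustrative responses only -- not the study's questionnaire data.
    print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0

A score of 68 is conventionally treated as the average benchmark, so the 72.3 reported here sits modestly above it.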
Key Findings: The Conservative Influence of AI
The findings provide a nuanced perspective on the ‘human-in-the-loop’ model of AI in healthcare. Notably, the study found no significant difference in the time clinicians took to complete the vignettes, regardless of whether they had AI support. This suggests that the AI tool did not add a cognitive burden, but neither did it speed up decision-making in this simulated environment.
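The summary does not state which test underpins the timing comparison, so the sketch below is only an illustration of how vignette completion times for the two arms might be compared with a nonparametric two-sample test (Mann-Whitney U); the data are synthetic and the variable names are assumptions, not the authors’ analysis.

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)
    # Hypothetical per-vignette completion times in seconds for each study arm.
    soc_times = rng.lognormal(mean=4.0, sigma=0.3, size=60)
    ai_cdss_times = rng.lognormal(mean=4.0, sigma=0.3, size=60)

    # Two-sided nonparametric comparison of the two timing distributions.
    statistic, p_value = mannwhitneyu(soc_times, ai_cdss_times, alternative="two-sided")
    print(f"U = {statistic:.1f}, p = {p_value:.3f}")  # a large p is consistent with no detectable difference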
Decision Diversity and Influence
One of the most significant results was the direction of AI influence. When the AI CDSS provided a recommendation that differed from the standard-of-care consensus, it was most influential when it advised against switching to oral antibiotics. The statistical analysis showed a significant shift toward conservative management (not switching) when the AI recommended against the switch (logistic regression odds ratio 0.13 [95% CI 0.03-0.50]; p=0.0031). Conversely, when the AI recommended switching in complex cases where clinicians were hesitant, its influence was less pronounced.
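For readers who want to see how such an odds ratio is typically produced, the sketch below fits a logistic regression on synthetic data with statsmodels and exponentiates the coefficient and its 95% CI. The variable names, the single-predictor model, and the data are illustrative assumptions (the study’s own model may, for example, have accounted for repeated decisions per clinician); this is not a reproduction of the authors’ analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 400
    # Synthetic flag: 1 = the AI recommended against switching, 0 = otherwise.
    ai_advises_no_switch = rng.integers(0, 2, size=n)
    # Simulated clinician decision to switch to oral therapy (1 = switch), made
    # less likely when the AI advises against switching -- an illustrative effect only.
    log_odds = 0.5 - 2.0 * ai_advises_no_switch
    switch_decision = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

    X = sm.add_constant(pd.DataFrame({"ai_advises_no_switch": ai_advises_no_switch}))
    fit = sm.Logit(switch_decision, X).fit(disp=False)

    odds_ratios = np.exp(fit.params)      # exponentiated coefficients are odds ratios
    ci = np.exp(fit.conf_int())           # 95% CI on the odds-ratio scale
    print(odds_ratios["ai_advises_no_switch"], ci.loc["ai_advises_no_switch"].values)

An odds ratio well below 1, as reported in the study, means the odds of choosing to switch fall when the AI advises against it.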
This ‘conservative bias’ suggests that clinicians may use AI as a safety net to justify more cautious clinical paths rather than as a catalyst for more aggressive stewardship. Crucially, clinicians were not blindly following the AI; they were able to identify and ignore recommendations they deemed incorrect, maintaining their role as the final decision-maker.
The Explainability Gap
In the realm of AI research, ‘Explainable AI’ (XAI) is often touted as a prerequisite for clinical trust. However, the study revealed a surprising disconnect: clinicians accessed the AI’s explanations only 9% of the time. This suggests that at the point of care, clinicians care more about the ‘what’ (the recommendation) and whether the system as a whole can be trusted (its evidence base) than about the ‘how’ (the underlying logic of the algorithm). For a busy clinician, the perceived reliability of the system appears more valuable than a case-by-case breakdown of its reasoning.
Expert Commentary: Interpreting the Behavioral Shift
The results of this study highlight a critical aspect of medical technology: implementation science is as vital as data science. The fact that clinicians were more likely to be influenced by a ‘do not switch’ recommendation reflects the inherent risk-aversion in infectious disease management. An inappropriate switch to oral therapy can lead to clinical relapse, whereas staying on IV therapy for an extra 24 hours is often seen as the ‘safer’ error, despite its impact on stewardship goals.
Furthermore, the low engagement with XAI features indicates that we may need to rethink how we present AI insights. If clinicians are not reading the explanations, we must ensure that the trust in the system is built through rigorous, transparent clinical trials and evidence-based validation rather than just complex visual dashboards. The ‘black box’ may be acceptable to clinicians if the ‘box’ has been proven to work in real-world clinical outcomes.
Study limitations include the use of clinical vignettes rather than real-time bedside decisions, which may not fully capture the pressures of a live hospital environment. Additionally, the sample size, while diverse across many hospitals, remains small for a definitive assessment of all prescribing behaviors.
Conclusion: The Path to Integration
This study demonstrates that AI-driven decision support is positively received and technically feasible in the UK healthcare setting. It shows that AI can influence prescribing behavior, particularly by reinforcing clinical caution. However, for AI to truly revolutionize antimicrobial stewardship, systems must move beyond simple recommendations and address the behavioral and infrastructural barriers that currently limit their use.
Future research must focus on prospective, real-world trials that measure patient outcomes—such as length of stay, readmission rates, and infection resolution—rather than just decision-making in a vacuum. As AI enters the clinical workspace, its success will depend on its ability to integrate seamlessly into existing workflows and earn the trust of clinicians through consistent, evidence-backed performance.
Funding and References
This research was funded by the UK Research and Innovation Centre for Doctoral Training in AI for Healthcare and the National Institute for Health and Care Research (NIHR) Health Protection Research Unit in Healthcare Associated Infections and Antimicrobial Resistance at Imperial College London.
Reference:
Bolton WJ, Wilson R, Gilchrist M, Georgiou P, Holmes A, Rawson TM. The impact of artificial intelligence-driven decision support on uncertain antimicrobial prescribing: a randomised, multimethod study. Lancet Digit Health. 2025 Nov;7(11):100912. doi: 10.1016/j.landig.2025.100912. Epub 2025 Dec 9. PMID: 41372053.
