Highlights
• Use of AI-generated study material and personalized feedback did not significantly enhance academic performance in OSCEs among pharmacy students.
• AI-assisted learning showed no measurable reduction in test anxiety based on the Test Anxiety Inventory scores.
• The intervention was safe and did not negatively impact students’ performance or anxiety, supporting further exploration of AI’s role in clinical education.
Study Background and Clinical Context
The rapid evolution of generative artificial intelligence (AI) tools has created new educational opportunities across disciplines, including health sciences. Pharmacy education, which increasingly emphasizes clinical competencies validated through objective structured clinical examinations (OSCEs), stands to benefit from tailored learning resources that AI can provide. However, it remains uncertain how AI-generated content and personalized feedback influence both the academic outcomes and psychological stress of students in formative assessments such as OSCEs. Given the burden of test anxiety on learner performance and well-being, understanding the educational and emotional ramifications of AI integration is crucial to optimize training and maintain assessment integrity.
Study Design
This randomized controlled trial was conducted over four weeks (June to July 2024) among sixth-semester PharmD students. Ninety-two students were randomized in equal numbers to an intervention group or a control group, and 44 students in each arm completed the study. The intervention cohort received a comprehensive training session on AI tools, including ChatGPT, Gemini, and Perplexity, focused on generating personalized study materials and feedback relevant to OSCE topics. The control cohort followed the usual OSCE preparatory instructions without AI support.
All participants completed the Test Anxiety Inventory (TAI), which measures test-related stress, immediately before the formative OSCE. The primary endpoint was academic performance, measured as the OSCE score out of 30; the secondary endpoint was the between-group comparison of TAI scores.
Key Findings
Of the 92 randomized students, 88 (40 male, 48 female) completed the OSCE and TAI assessments, a completion rate of 96%. The overall mean OSCE score across participants was 13.26 (± 5.05) out of 30. Mean scores did not differ significantly between the intervention group [12.98 (± 5.15)] and the control group [13.54 (± 5.00)] (p = 0.550), indicating that AI-assisted preparation did not translate into improved clinical examination performance.
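For readers who want to sanity-check this headline comparison, the minimal sketch below recomputes a between-group test from the published summary statistics alone. It assumes an independent-samples t-test with equal variances, implemented with SciPy's ttest_ind_from_stats; the trial's actual analysis and unrounded data may differ, so the output only approximates the reported p = 0.550.

```python
# Minimal sketch: recompute the OSCE between-group comparison from the summary
# statistics reported above. Assumes an independent-samples t-test with equal
# variances; the original trial's exact test and unrounded data may differ.
from scipy import stats

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=12.98, std1=5.15, nobs1=44,  # intervention arm (AI-assisted)
    mean2=13.54, std2=5.00, nobs2=44,  # control arm (usual instruction)
    equal_var=True,
)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p well above 0.05: no significant difference
```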
Similarly, total TAI scores showed no significant difference between the AI-supported and usual-instruction groups (p = 0.917). This suggests that access to AI-generated study content and feedback did not alleviate test-related anxiety in this cohort during the formative OSCE.
These findings collectively highlight that AI tools, as implemented in this study, neither enhanced academic outcomes nor mitigated psychological stress in pharmacy students preparing for clinical skills assessment.
Expert Commentary
The lack of significant impact observed may reflect several factors. The short intervention period (four weeks) could be insufficient for students to fully integrate AI tools into their study routines or to observe meaningful performance improvements. Additionally, test anxiety is a multifactorial phenomenon influenced by personality traits, prior experience, and external stressors, which may not be easily modulated by study materials alone. The study underscores the importance of designing AI interventions that are not only content-rich but also incorporate strategies targeting emotional resilience and adaptive coping.
Moreover, while current AI models provide rapid generation of educational content, the quality and clinical relevance of AI-produced materials require ongoing validation. Faculty involvement in curating and contextualizing AI outputs may enhance their utility. Future trials with larger sample sizes, longer follow-up, and multi-institutional settings will help clarify the generalizability of these findings.
Conclusion
This randomized controlled trial demonstrates that AI-supported study materials and personalized feedback did not significantly improve pharmacy students' OSCE performance or reduce test anxiety relative to conventional preparation. Importantly, the use of AI tools did not adversely affect these outcomes, supporting their safety as adjunctive educational resources.
The study advocates for further research focusing on optimizing AI’s role in clinical education, potentially combining cognitive and affective domain interventions over extended periods. Harnessing AI’s capabilities to tailor content while addressing learner anxiety could ultimately enrich pharmacy training and assessment.
References
Ali M, Rehman S, Cheema E. Impact of artificial intelligence on the academic performance and test anxiety of pharmacy students in objective structured clinical examination: a randomized controlled trial. Int J Clin Pharm. 2025 Aug;47(4):1034-1041. doi: 10.1007/s11096-025-01876-5. Epub 2025 Feb 4. PMID: 39903358.
Additional relevant literature:
1. Liaw SY, et al. The role of technology-enhanced learning in health professions education: A review. Med Teach. 2022;44(3):237-245.
2. Wijayathilaka W, et al. Test anxiety and academic performance: Understanding and addressing a complex relationship in medical education. Med Sci Educ. 2023;33(1):9-17.
3. Wynants L, et al. Prediction models for clinical education and assessment: challenges and opportunities with AI. Educ Health (Abingdon). 2024;37(1):12-18.