Setting the parameters for generative AI initiatives in medical education for clinical communication
Part 3 of 3 instalments from the Clinical Communication Conference held at the Clerkenwell campus
Attending this year’s UKCCC conference on 14–15 April as a student at City St George’s, University of London, I noticed one theme rising above the rest: the growing role of artificial intelligence (AI) in medical education. From Professor Stian Reimers' plenary to several oral presentations, AI’s potential to support clinical communication training was a clear focus - and understandably so. With challenges like limited teaching hours and access to role-play actors, innovations such as AI-simulated consultations (AISCs) are being positioned as potential game-changers. Yet, as exciting as these developments are, they also raise important questions around equity, diversity, and inclusivity (EDI). Who benefits from these tools - and who might be left out?
Although the conversation around the importance of inclusivity in medical education is hardly new, its formal regulation is as recent as the boom in generative AI: the EDI Alliance of the Medical Schools' Council was only formed in 2021, the year before ChatGPT was released for public use. EDI approaches are typically retroactive in nature, adapting existing frameworks that have historically disadvantaged individuals or groups. If we are not careful, the rush to implement AI into clinical education may push us onto a familiar path, with questions of fairness and representation considered only after unintended gaps or harms have already taken root.
Here are just a few ways AISCs might conflict with EDI principles:
- Inherent biases in training data can lead to representational bias and the generation of harmful stereotypes
- AI hallucinations can produce plausible yet inaccurate, or even offensive, responses
- Attempts to achieve realism in AI-generated patients may introduce further complications, e.g., the risk of AI-generated voices veering into caricature

However, we can avoid these pitfalls by incorporating EDI principles early on. Here are some suggestions that I believe hold promise for easing the tension between AI and EDI:
- Reflective guides like the Liverpool EDI toolkit to direct resource design
- Closed-circuit AI models with specific, vetted training data, ideally from a range of consenting patients
- Encouraging AI literacy in students and implementing efficient feedback mechanisms
When it comes to designing educational resources like AISCs, EDI shouldn't be an afterthought. These frameworks need to be built in from the start, not added later as a form of damage control once gaps have already become visible.
Words by Ms. Toru Obunge, MBBS 4 Year 1, City St George’s, University of London
Edited by Alexandra Bondoc, Inclusive Education and Events Officer