%0 Journal Article
%@ 2369-3762
%I JMIR Publications
%V 11
%N
%P e63353
%T Assessing ChatGPT’s Capability as a New Age Standardized Patient: Qualitative Study
%A Cross,Joseph
%A Kayalackakom,Tarron
%A Robinson,Raymond E
%A Vaughans,Andrea
%A Sebastian,Roopa
%A Hood,Ricardo
%A Lewis,Courtney
%A Devaraju,Sumanth
%A Honnavar,Prasanna
%A Naik,Sheetal
%A Joseph,Jillwin
%A Anand,Nikhilesh
%A Mohammed,Abdalla
%A Johnson,Asjah
%A Cohen,Eliran
%A Adeniji,Teniola
%A Nnenna Nnaji,Aisling
%A George,Julia Elizabeth
%K medical education
%K standardized patient
%K AI
%K ChatGPT
%K virtual patient
%K assessment
%K standardized patients
%K LLM
%K effectiveness
%K medical school
%K qualitative
%K flexibility
%K diagnostic
%D 2025
%7 20.5.2025
%9
%J JMIR Med Educ
%G English
%X Background: Standardized patients (SPs) have been crucial in medical education, offering realistic patient interactions to students. Despite their benefits, SP training is resource-intensive, and access can be limited. Advances in artificial intelligence (AI), particularly with large language models such as ChatGPT, present new opportunities for virtual SPs, potentially addressing these limitations. Objectives: This study aims to assess medical students’ perceptions and experiences of using ChatGPT as an SP and to evaluate ChatGPT’s effectiveness in performing as a virtual SP in a medical school setting. Methods: This qualitative study, approved by the American University of Antigua Institutional Review Board, involved 9 students (5 females and 4 males, aged 22‐48 years) from the American University of Antigua College of Medicine. Students were observed during a live role-play, interacting with ChatGPT as an SP using a predetermined prompt. A structured 15-question survey was administered before and after the interaction. Thematic analysis was conducted on the transcribed and coded responses, with inductive category formation. Results: Thematic analysis identified key preinteraction themes, including technology limitations (eg, prompt engineering difficulties), learning efficacy (eg, potential for personalized learning and reduced interview stress), verisimilitude (eg, absence of visual cues), and trust (eg, concerns about AI accuracy). Postinteraction, students noted improvements in prompt engineering, some alignment issues (eg, limited responses on sensitive topics), maintained learning efficacy (eg, convenience and repetition), and continued verisimilitude challenges (eg, lack of empathy and nonverbal cues). No significant trust issues were reported postinteraction. Despite some limitations, students found ChatGPT to be a valuable supplement to traditional SPs, enhancing practice flexibility and diagnostic skills. Conclusions: ChatGPT can effectively augment traditional SPs in medical education, offering accessible, flexible practice opportunities. However, it cannot fully replace human SPs due to limitations in verisimilitude and prompt engineering challenges. Integrating prompt engineering into medical curricula and continued advancements in AI are recommended to enhance the use of virtual SPs.
%R 10.2196/63353
%U https://mededu.jmir.org/2025/1/e63353
%U https://doi.org/10.2196/63353