%0 Journal Article
%@ 2369-3762
%I JMIR Publications
%V 11
%N
%P e66186
%T Effect of Immersive Virtual Reality Teamwork Training on Safety Behaviors During Surgical Cases: Nonrandomized Intervention Versus Controlled Pilot Study
%A Mazur,Lukasz
%A Butler,Logan
%A Mitchell,Cody
%A Lashani,Shaian
%A Buchanan,Shawna
%A Fenison,Christi
%A Adapa,Karthik
%A Tan,Xianming
%A An,Selina
%A Ra,Jin
%K Teamwork Evaluation of Non-Technical Skills
%K TENTS
%K Team Strategies and Tools to Enhance Performance and Patient Safety
%K TeamSTEPPS
%K immersive virtual reality
%K virtual reality
%K VR
%K safety behavior
%K surgical error
%K operating room
%K OR
%K training intervention
%K training
%K pilot study
%K nontechnical skills
%K surgery
%K surgical
%K patient safety
%K medical training
%K medical education
%D 2025
%7 1.5.2025
%9
%J JMIR Med Educ
%G English
%X Background: Approximately 4000 preventable surgical errors occur per year in US operating rooms, many due to suboptimal teamwork and safety behaviors. Such errors can result in temporary or permanent harm to patients, including physical injury, emotional distress, or even death, and can also adversely affect care providers, often referred to as the “second victim.” Objective: Given the persistence of adverse events in operating rooms, the objective of this study was to quantify the effect of an innovative, immersive virtual reality (VR)–based educational intervention on (1) safety behaviors of surgeons in the operating room and (2) sense-making regarding the overall training experience. Methods: This mixed methods pre- versus postintervention pilot study was conducted in a large academic medical center with 55 operating rooms. Safety behaviors were observed and quantified using the validated Teamwork Evaluation of Non-Technical Skills (TENTS) instrument during surgical cases at baseline (101 observations; 83 surgeons) and after the immersive VR-based intervention (24 observations within each group: intervention [VR training; 10 surgeons] and control [no VR training; 10 surgeons]). The intervention consisted of a 45-minute immersive VR-based training session incorporating pre- and postdebriefings based on Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) principles to improve safety behaviors. A 2-tailed, 2-sample t test, with adjustment for the multiplicity of tests, was used to test for significant differences in observed safety behaviors between the groups. Debriefing data were analyzed using the phenomenological method to gain insight into how participants interpreted the training. Results: Preintervention, all safety behaviors averaged slightly above “acceptable” scores, with an overall average of 2.2 (range 2-2.3; 0-3 scale). The 10 surgeons who underwent the intervention showed statistically significant (P<.05) improvements in 90% (18/20) of safety behaviors compared with the 10 surgeons who did not receive the intervention (overall average 2.5, range 2.3-2.7 vs overall average 2.1, range 1.9-2.2). Our qualitative analysis, based on 492 quotes from participants, suggests that the observed behavioral changes resulted from the immersive experience and sense-making of key TeamSTEPPS training concepts. Conclusions: A VR-based immersive training intervention focused on TeamSTEPPS principles appears effective in improving safety behaviors in the operating room, as quantified via observations using the TENTS instrument. Further research with larger, more diverse samples is needed to confirm the generalizability of these findings. International Registered Report Identifier (IRRID): RR2-10.2196/40445.
%R 10.2196/66186
%U https://mededu.jmir.org/2025/1/e66186
%U https://doi.org/10.2196/66186
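The Methods of the record above describe 2-tailed, 2-sample t tests over 20 safety behaviors with a multiplicity adjustment. A minimal sketch of that analysis in Python, assuming per-behavior TENTS score arrays and a Holm correction (the abstract does not name the adjustment method), might look like this:

```python
# Sketch of the per-behavior group comparison: one 2-tailed, 2-sample t test
# per safety behavior, followed by a multiplicity adjustment. The data below
# are simulated stand-ins, and Holm is an assumed choice of correction.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Hypothetical TENTS scores (0-3 scale): rows = observations, columns = 20 behaviors.
intervention = rng.normal(2.5, 0.3, size=(24, 20))  # VR-trained surgeons
control = rng.normal(2.1, 0.3, size=(24, 20))       # no VR training

# One 2-tailed, 2-sample t test per behavior.
pvals = np.array([
    stats.ttest_ind(intervention[:, j], control[:, j]).pvalue
    for j in range(20)
])

# Adjust for the multiplicity of the 20 tests (Holm step-down procedure).
reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(f"behaviors significant after adjustment: {reject.sum()}/20")
```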
%0 Journal Article
%@ 2369-3762
%I JMIR Publications
%V 11
%N
%P e63602
%T Leveraging Datathons to Teach AI in Undergraduate Medical Education: Case Study
%A Yao,Michael Steven
%A Huang,Lawrence
%A Leventhal,Emily
%A Sun,Clara
%A Stephen,Steve J
%A Liou,Lathan
%K data science education
%K datathon
%K machine learning
%K artificial intelligence
%K undergraduate medical education
%D 2025
%7 16.4.2025
%9
%J JMIR Med Educ
%G English
%X Background: As artificial intelligence and machine learning become increasingly influential in clinical practice, it is critical for future physicians to understand how such novel technologies will impact the delivery of patient care. Objective: We describe 2 trainee-led, multi-institutional datathons as an effective means of teaching key data science and machine learning skills to medical trainees. We offer key insights on the practical implementation of such datathons and analyze experiences gained and lessons learned for future datathon initiatives. Methods: We detail 2 recent datathons organized by MDplus, a national trainee-led nonprofit organization. To assess the efficacy of the datathon as an educational experience, an opt-in postdatathon survey was sent to all registered participants. Survey responses were deidentified and anonymized before downstream analysis to assess the quality of datathon experiences and identify areas for future work. Results: Our digital datathons, held between 2023 and 2024, were attended by approximately 200 medical trainees across the United States. A diverse array of medical specialty interests was represented among participants, with 43% (21/49) of survey respondents expressing an interest in internal medicine, 35% (17/49) in surgery, and 22% (11/49) in radiology. Participants' skills in leveraging Python to analyze medical datasets improved after the datathon, and survey respondents reported enjoying the experience. Conclusions: The datathon proved to be an effective and economical means of providing medical trainees the opportunity to collaborate on data-driven projects in health care. Participants agreed that the datathons improved their ability to generate clinically meaningful insights from data. Our results suggest that datathons can serve as valuable educational experiences that help medical trainees become more skilled in leveraging data science and artificial intelligence for patient care.
%R 10.2196/63602
%U https://mededu.jmir.org/2025/1/e63602
%U https://doi.org/10.2196/63602
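The survey results in the record above are simple proportion tabulations (eg, 21/49 = 43% interested in internal medicine). A minimal pandas sketch, using an entirely hypothetical response table and column names, reproduces the reported percentages:

```python
# Sketch of the postdatathon survey tabulation; the DataFrame layout and
# column names are hypothetical, and counts mirror the reported figures.
import pandas as pd

survey = pd.DataFrame({
    "respondent_id": range(1, 50),  # 49 survey respondents
    "specialty_interest": (
        ["internal medicine"] * 21 + ["surgery"] * 17 + ["radiology"] * 11
    ),
})

counts = survey["specialty_interest"].value_counts()
for specialty, n in counts.items():
    print(f"{specialty}: {n}/{len(survey)} ({n / len(survey):.0%})")
```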
%0 Journal Article
%@ 2369-3762
%I JMIR Publications
%V 10
%N
%P e48518
%T Opportunities to Improve Communication With Residency Applicants: Cross-Sectional Study of Obstetrics and Gynecology Residency Program Websites
%A Devlin,Paulina M
%A Akingbola,Oluwabukola
%A Stonehocker,Jody
%A Fitzgerald,James T
%A Winkel,Abigail Ford
%A Hammoud,Maya M
%A Morgan,Helen K
%K obstetrics and gynecology
%K residency program
%K residency application
%K website
%K program signals
%K communication best practices
%D 2024
%7 21.10.2024
%9
%J JMIR Med Educ
%G English
%X Background: As part of the residency application process in the United States, many medical specialties now offer applicants the opportunity to send program signals indicating high interest to a limited number of residency programs. To decide which residency programs to apply to, and which to signal, applicants need accurate information about which programs align with their future training goals. Most applicants use a program’s website to review program characteristics and criteria, so describing the current state of residency program websites can inform programs of best practices. Objective: This study aims to characterize the information available on obstetrics and gynecology residency program websites and to determine whether the information available differs between types of residency programs. Methods: This was a cross-sectional observational study of the website content of all US obstetrics and gynecology residency programs. The authorship group identified factors that would be useful to residency applicants: program demographics and learner trajectories; application criteria, including standardized testing metrics, residency statistics, and benefits; and diversity, equity, and inclusion mission statements and values. Two authors examined all available websites from November 2021 through March 2022. Data analysis consisted of descriptive statistics and one-way ANOVA, with P<.05 considered significant. Results: Among 290 programs, 283 (97.6%) had websites; 238 (82.1%) listed the medical schools of current residents; 158 (54.5%) described residency alumni trajectories; 107 (36.9%) included guidance on preferred United States Medical Licensing Examination Step 1 scores; 53 (18.3%) included guidance on Comprehensive Osteopathic Medical Licensing Examination Level 1 scores; 185 (63.8%) included guidance for international applicants; 132 (45.5%) included a program-specific mission statement; 84 (29%) included a diversity, equity, and inclusion statement; and 167 (57.6%) included program-specific media or links to program social media. University-based programs were more likely than community-based university-affiliated and community-based programs to include several types of information, including the medical schools of current residents (113/123, 91.9%, university-based; 85/111, 76.6%, community-based university-affiliated; 40/56, 71.4%, community-based; P<.001); alumni trajectories (90/123, 73.2%; 51/111, 45.9%; 17/56, 30.4%; P<.001); United States Medical Licensing Examination Step 1 score guidance (58/123, 47.2%; 36/111, 32.4%; 13/56, 23.2%; P=.004); and diversity, equity, and inclusion statements (57/123, 46.3%; 19/111, 17.1%; 8/56, 14.3%; P<.001). Conclusions: There are opportunities to improve the quantity and quality of information on residency program websites. From this work, we propose best practices for the information residency websites should include to enable applicants to make informed decisions.
%R 10.2196/48518
%U https://mededu.jmir.org/2024/1/e48518
%U https://doi.org/10.2196/48518
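The record above compares the prevalence of website features across 3 program types using one-way ANOVA, with P<.05 considered significant. A minimal sketch of one such comparison, assuming binary presence/absence indicators per program (the vectors reproduce the reported alumni-trajectory counts but are otherwise constructed data):

```python
# Sketch of a between-program-type comparison: one-way ANOVA on a binary
# website feature (1 = present). Indicator vectors reconstruct the reported
# alumni-trajectory counts; the analysis details are otherwise assumptions.
import numpy as np
from scipy import stats

def indicator(n_yes: int, n_total: int) -> np.ndarray:
    """Binary vector with n_yes ones among n_total programs."""
    return np.array([1] * n_yes + [0] * (n_total - n_yes))

university = indicator(90, 123)            # 73.2% described alumni trajectories
community_affiliated = indicator(51, 111)  # 45.9%
community = indicator(17, 56)              # 30.4%

f_stat, p_value = stats.f_oneway(university, community_affiliated, community)
print(f"F = {f_stat:.2f}, P = {p_value:.2g}, significant: {p_value < 0.05}")
```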
%0 Journal Article
%@ 2369-3762
%I JMIR Publications
%V 10
%N
%P e43705
%T Impact of the COVID-19 Pandemic on Medical Grand Rounds Attendance: Comparison of In-Person and Remote Conferences
%A Monahan,Ken
%A Gould,Edward
%A Rice,Todd
%A Wright,Patty
%A Vasilevskis,Eduard
%A Harrell,Frank
%A Drago,Monique
%A Mitchell,Sarah
%+ Vanderbilt University Medical Center, 1215 21st Avenue, Medical Center East - 5th Floor, Nashville, TN, 37232, United States, 1 6153222318, ken.monahan@vumc.org
%K continuing medical education
%K COVID-19
%K distance education
%K professional development
%K virtual learning
%D 2024
%7 3.1.2024
%9 Short Paper
%J JMIR Med Educ
%G English
%X Background: Many academic medical centers transitioned from in-person to remote conferences due to the COVID-19 pandemic, but the impact on faculty attendance is unknown. Objective: This study aims to evaluate changes in attendance at medical grand rounds (MGR) following the transition from an in-person to a remote format and as a function of the COVID-19 census at Vanderbilt Medical Center. Methods: We obtained faculty attendee characteristics from Department of Medicine records. Attendance was recorded using an SMS text message–based system. The daily COVID-19 census was recorded independently by hospital administration. The main attendance metric was the proportion of eligible faculty who attended each MGR. Comparisons were made for the entire cohort and for individual faculty. Results: The observation period was from March 2019 to June 2021 and included 101 MGR conferences with more than 600 eligible faculty. Overall attendance did not differ between the in-person and remote formats (12,536/25,808, 48.6% vs 16,727/32,680, 51.2%; P=.44) and did not change significantly during a surge in the COVID-19 census. Individual faculty members’ attendance rates varied widely. Absolute differences between formats were less than –20% or greater than 20% for one-third (160/476, 33.6%) of faculty. Pulmonary or critical care faculty attendance increased during the remote format compared with the in-person format (1450/2616, 55.4% vs 1004/2045, 49.1%; P<.001). A cloud-based digital archive of MGR lectures was accessed by <1% of faculty per conference. Conclusions: Overall faculty attendance at MGR did not change following the transition to a remote format, regardless of the COVID-19 census, but individual attendance habits fluctuated in a bidirectional manner. Incentivizing use of the digital archive may represent an opportunity to increase faculty consumption of MGR.
%M 38029287
%R 10.2196/43705
%U https://mededu.jmir.org/2024/1/e43705
%U https://doi.org/10.2196/43705
%U http://www.ncbi.nlm.nih.gov/pubmed/38029287
%0 Journal Article
%@ 2369-3762
%I JMIR Publications
%V 9
%N
%P e47191
%T Assessing the Performance of ChatGPT in Medical Biochemistry Using Clinical Case Vignettes: Observational Study
%A Surapaneni,Krishna Mohan
%+ Panimalar Medical College Hospital & Research Institute, Varadharajapuram, Poonamallee, Chennai, 600123, India, 91 9789099989, krishnamohan.surapaneni@gmail.com
%K ChatGPT
%K artificial intelligence
%K medical education
%K medical biochemistry
%K biochemistry
%K chatbot
%K case study
%K case scenario
%K medical exam
%K medical examination
%K computer generated
%D 2023
%7 7.11.2023
%9 Short Paper
%J JMIR Med Educ
%G English
%X Background: ChatGPT has recently gained global attention owing to its high performance in generating a wide range of information and retrieving data almost instantaneously. ChatGPT has also been tested on the United States Medical Licensing Examination (USMLE) and passed it successfully. Thus, its usability in medical education is now a key discussion worldwide. Objective: The objective of this study is to evaluate the performance of ChatGPT in medical biochemistry using clinical case vignettes. Methods: The performance of ChatGPT was evaluated using 10 randomly selected clinical case vignettes in medical biochemistry, which were entered into ChatGPT along with their response options. The responses for each clinical case were tested twice. The answers generated by ChatGPT were saved and checked against our reference material. Results: ChatGPT generated correct answers for 4 questions on the first attempt. For the other cases, the responses differed between the first and second attempts. On the second attempt, ChatGPT provided correct answers for 6 questions and incorrect answers for 4 of the 10 cases. Unexpectedly, for case 3, different answers were obtained across multiple attempts. We attribute this to the complexity of the case, which required balancing several critical medical considerations related to amino acid metabolism. Conclusions: According to the findings of our study, ChatGPT may not be considered an accurate information provider for improving learning and assessment in medical education. However, our study was limited by a small sample size (10 clinical case vignettes) and the use of the publicly available version of ChatGPT (version 3.5). Although artificial intelligence (AI) has the capability to transform medical education, we emphasize that data produced by such AI systems must be validated for correctness and dependability before being implemented in practice.
%M 37934568
%R 10.2196/47191
%U https://mededu.jmir.org/2023/1/e47191
%U https://doi.org/10.2196/47191
%U http://www.ncbi.nlm.nih.gov/pubmed/37934568
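The Methods of the final record amount to a repeat-and-compare consistency check: each vignette was submitted twice and both responses were graded against reference material. A minimal sketch of that protocol follows; the study used the public ChatGPT (version 3.5) web interface, so the OpenAI API client, model name, and placeholder vignette text below are stand-in assumptions:

```python
# Sketch of the repeat-and-compare protocol: query each case vignette twice
# and flag unstable answers. The API call is a stand-in for the ChatGPT web
# interface used in the study; vignette text and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(vignette: str) -> str:
    """Return the model's answer for one clinical case vignette."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": vignette}],
    )
    return response.choices[0].message.content.strip()

vignettes = ["Case 1: ... (A) ... (B) ... (C) ... (D) ..."]  # placeholder text

for i, case in enumerate(vignettes, start=1):
    first, second = ask(case), ask(case)
    # Crude string equality; the study compared graded answer choices instead.
    print(f"case {i}: consistent across attempts = {first == second}")
```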