TY - JOUR
AU - Mazur, Lukasz
AU - Butler, Logan
AU - Mitchell, Cody
AU - Lashani, Shaian
AU - Buchanan, Shawna
AU - Fenison, Christi
AU - Adapa, Karthik
AU - Tan, Xianming
AU - An, Selina
AU - Ra, Jin
PY - 2025/5/1
TI - Effect of Immersive Virtual Reality Teamwork Training on Safety Behaviors During Surgical Cases: Nonrandomized Intervention Versus Controlled Pilot Study
JO - JMIR Med Educ
SP - e66186
VL - 11
KW - Teamwork Evaluation of Non-Technical Skills
KW - TENTS
KW - Team Strategies and Tools to Enhance Performance and Patient Safety
KW - TeamSTEPPS
KW - immersive virtual reality
KW - virtual reality
KW - VR
KW - safety behavior
KW - surgical error
KW - operating room
KW - OR
KW - training intervention
KW - training
KW - pilot study
KW - nontechnical skills
KW - surgery
KW - surgical
KW - patient safety
KW - medical training
KW - medical education
N2 - Background: Approximately 4000 preventable surgical errors occur per year in US operating rooms, many due to suboptimal teamwork and safety behaviors. Such errors can result in temporary or permanent harm to patients, including physical injury, emotional distress, or even death, and can also adversely affect care providers, often referred to as the "second victim." Objective: Given the persistence of adverse events in operating rooms, the objective of this study was to quantify the effect of an innovative, immersive virtual reality (VR)-based educational intervention on (1) the safety behaviors of surgeons in the operating room and (2) sense-making regarding the overall training experience. Methods: This mixed methods pre- versus postintervention pilot study was conducted in a large academic medical center with 55 operating rooms. Safety behaviors were observed and quantified using the validated Teamwork Evaluation of Non-Technical Skills (TENTS) instrument during surgical cases at baseline (101 observations; 83 surgeons) and after the immersive VR-based intervention (24 observations in each group: intervention group [VR training; 10 surgeons] and control group [no VR training; 10 surgeons]). The intervention consisted of a 45-minute immersive VR-based training session incorporating pre- and postdebriefings based on Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS) principles to improve safety behaviors. A 2-tailed, 2-sample t test with adjustment for multiple testing was used to test for significant differences in observed safety behaviors between groups. Debriefing data were analyzed using phenomenological analysis to gain insight into how participants interpreted the training. Results: Preintervention, all safety behaviors averaged slightly above "acceptable" scores, with an overall average of 2.2 (range 2-2.3; 0-3 scale). The 10 surgeons who underwent the intervention showed statistically significant (P<.05) improvements in 90% (18/20) of safety behaviors compared with the 10 surgeons who did not receive the intervention (overall average 2.5, range 2.3-2.7 vs overall average 2.1, range 1.9-2.2). Our qualitative analysis, based on 492 participant quotes, suggests that the observed behavioral changes resulted from the immersive experience and sense-making of key TeamSTEPPS training concepts. Conclusions: An immersive VR-based training intervention focused on TeamSTEPPS principles appears effective in improving safety behaviors in the operating room, as quantified through observations using the TENTS instrument. Further research with larger, more diverse samples is needed to confirm the generalizability of these findings. International Registered Report Identifier (IRRID): RR2-10.2196/40445.
UR - https://mededu.jmir.org/2025/1/e66186
UR - http://dx.doi.org/10.2196/66186
ID - info:doi/10.2196/66186
ER -
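For readers who want to reproduce the statistical approach described in the abstract above, here is a minimal Python sketch of per-behavior 2-tailed, 2-sample t tests with a multiplicity adjustment. The Holm method, the synthetic TENTS scores, and the group means are assumptions for illustration only; the paper does not specify which correction it applied.

```python
# Minimal sketch: 2-tailed, 2-sample t tests across 20 safety behaviors,
# with a multiplicity adjustment. Holm correction and synthetic scores
# are assumptions; the paper does not name its adjustment method.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_behaviors, n_obs = 20, 24  # 24 observations per group, as reported

# Synthetic TENTS scores on the 0-3 scale (illustrative only)
intervention = rng.normal(2.5, 0.3, size=(n_behaviors, n_obs)).clip(0, 3)
control = rng.normal(2.1, 0.3, size=(n_behaviors, n_obs)).clip(0, 3)

p_values = [ttest_ind(i, c).pvalue for i, c in zip(intervention, control)]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(f"{reject.sum()}/{n_behaviors} behaviors significant after adjustment")
```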
TY - JOUR
AU - Yao, Steven Michael
AU - Huang, Lawrence
AU - Leventhal, Emily
AU - Sun, Clara
AU - Stephen, J. Steve
AU - Liou, Lathan
PY - 2025/4/16
TI - Leveraging Datathons to Teach AI in Undergraduate Medical Education: Case Study
JO - JMIR Med Educ
SP - e63602
VL - 11
KW - data science education
KW - datathon
KW - machine learning
KW - artificial intelligence
KW - undergraduate medical education
N2 - Background: As artificial intelligence (AI) and machine learning become increasingly influential in clinical practice, it is critical for future physicians to understand how such novel technologies will impact the delivery of patient care. Objective: We describe 2 trainee-led, multi-institutional datathons as an effective means of teaching key data science and machine learning skills to medical trainees. We offer key insights on the practical implementation of such datathons and analyze experiences gained and lessons learned for future datathon initiatives. Methods: We detail 2 recent datathons organized by MDplus, a national trainee-led nonprofit organization. To assess the efficacy of the datathons as educational experiences, an opt-in postdatathon survey was sent to all registered participants. Survey responses were deidentified and anonymized before downstream analysis to assess the quality of datathon experiences and identify areas for future work. Results: Our digital datathons, held between 2023 and 2024, were attended by approximately 200 medical trainees across the United States. A diverse array of medical specialty interests was represented among participants, with 43% (21/49) of survey respondents expressing an interest in internal medicine, 35% (17/49) in surgery, and 22% (11/49) in radiology. Participants' skills in leveraging Python to analyze medical datasets improved after the datathon, and survey respondents enjoyed participating. Conclusions: The datathons proved to be an effective and cost-efficient means of providing medical trainees the opportunity to collaborate on data-driven projects in health care. Participants agreed that datathons improved their ability to generate clinically meaningful insights from data. Our results suggest that datathons can serve as valuable and effective educational experiences for medical trainees to become better skilled in leveraging data science and AI for patient care.
UR - https://mededu.jmir.org/2025/1/e63602
UR - http://dx.doi.org/10.2196/63602
ID - info:doi/10.2196/63602
ER -
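As a small illustration of the survey analysis described above, the sketch below tabulates the reported specialty-interest counts with pandas. The counts come directly from the abstract; the DataFrame layout is an assumption for illustration.

```python
# Tabulate specialty interest among the 49 datathon survey respondents.
# Counts are from the abstract; the table structure is assumed.
import pandas as pd

survey = pd.DataFrame({
    "specialty": ["internal medicine", "surgery", "radiology"],
    "respondents": [21, 17, 11],
})
survey["percent"] = (100 * survey["respondents"] / 49).round(0)
print(survey)
```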
TY - JOUR
AU - Devlin, M. Paulina
AU - Akingbola, Oluwabukola
AU - Stonehocker, Jody
AU - Fitzgerald, T. James
AU - Winkel, Ford Abigail
AU - Hammoud, M. Maya
AU - Morgan, K. Helen
PY - 2024/10/21
TI - Opportunities to Improve Communication With Residency Applicants: Cross-Sectional Study of Obstetrics and Gynecology Residency Program Websites
JO - JMIR Med Educ
SP - e48518
VL - 10
KW - obstetrics and gynecology
KW - residency program
KW - residency application
KW - website
KW - program signals
KW - communication best practices
N2 - Background: As part of the residency application process in the United States, many medical specialties now offer applicants the opportunity to send program signals that indicate high interest to a limited number of residency programs. To determine which residency programs to apply to, and which programs to send signals to, applicants need accurate information on which programs align with their future training goals. Most applicants use a program's website to review program characteristics and criteria, so describing the current state of residency program websites can inform programs of best practices. Objective: This study aims to characterize the information available on obstetrics and gynecology residency program websites and to determine whether the information available differs between types of residency programs. Methods: This was a cross-sectional observational study of the website content of all US obstetrics and gynecology residency programs. The authorship group identified factors that would be useful to residency applicants: program demographics and learner trajectories; application criteria, including standardized testing metrics, residency statistics, and benefits; and diversity, equity, and inclusion mission statements and values. Two authors examined all available websites from November 2021 through March 2022. Data analysis consisted of descriptive statistics and one-way ANOVA, with P<.05 considered significant. Results: Among 290 programs, 283 (97.6%) had websites; 238 (82.1%) listed the medical schools of current residents; 158 (54.5%) described residency alumni trajectories; 107 (36.9%) included guidance on preferred United States Medical Licensing Examination Step 1 scores; 53 (18.3%) included guidance on Comprehensive Osteopathic Medical Licensing Examination Level 1 scores; 185 (63.8%) included guidance for international applicants; 132 (45.5%) included a program-specific mission statement; 84 (29%) included a diversity, equity, and inclusion statement; and 167 (57.6%) included program-specific media or links to program social media on their websites. University-based programs were more likely to include a variety of information compared with community-based university-affiliated and community-based programs, including medical schools of current residents (113/123, 91.9%, university-based; 85/111, 76.6%, community-based university-affiliated; 40/56, 71.4%, community-based; P<.001); alumni trajectories (90/123, 73.2%, university-based; 51/111, 45.9%, community-based university-affiliated; 17/56, 30.4%, community-based; P<.001); United States Medical Licensing Examination Step 1 score guidance (58/123, 47.2%, university-based; 36/111, 32.4%, community-based university-affiliated; 13/56, 23.2%, community-based; P=.004); and diversity, equity, and inclusion statements (57/123, 46.3%, university-based; 19/111, 17.1%, community-based university-affiliated; 8/56, 14.3%, community-based; P<.001). Conclusions: There are opportunities to improve the quantity and quality of information on residency program websites. From this work, we propose best practices for the information residency websites should include to enable applicants to make informed decisions.
UR - https://mededu.jmir.org/2024/1/e48518
UR - http://dx.doi.org/10.2196/48518
ID - info:doi/10.2196/48518
ER -
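The group comparison in the abstract above can be sketched as a one-way ANOVA on a binary feature-present indicator across the 3 program types, since the authors report using ANOVA with P<.05. In the sketch below, the 0/1 vectors are reconstructed from the reported counts for "medical schools of current residents"; applying ANOVA to binary indicators mirrors what the paper reports, not a general recommendation.

```python
# One-way ANOVA on a binary "feature present" indicator across the
# 3 program types, reconstructed from counts reported in the abstract.
import numpy as np
from scipy.stats import f_oneway

def indicator(present: int, total: int) -> np.ndarray:
    """Build a 0/1 vector with `present` ones out of `total` programs."""
    return np.r_[np.ones(present), np.zeros(total - present)]

university = indicator(113, 123)   # university-based
affiliated = indicator(85, 111)    # community-based university-affiliated
community = indicator(40, 56)      # community-based

stat, p = f_oneway(university, affiliated, community)
print(f"F={stat:.2f}, P={p:.4f}")  # expected to agree with reported P<.001
```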
TY - JOUR
AU - Monahan, Ken
AU - Gould, Edward
AU - Rice, Todd
AU - Wright, Patty
AU - Vasilevskis, Eduard
AU - Harrell, Frank
AU - Drago, Monique
AU - Mitchell, Sarah
PY - 2024/1/3
TI - Impact of the COVID-19 Pandemic on Medical Grand Rounds Attendance: Comparison of In-Person and Remote Conferences
JO - JMIR Med Educ
SP - e43705
VL - 10
KW - continuing medical education
KW - COVID-19
KW - distance education
KW - professional development
KW - virtual learning
N2 - Background: Many academic medical centers transitioned from in-person to remote conferences due to the COVID-19 pandemic, but the impact on faculty attendance is unknown. Objective: This study aims to evaluate changes in attendance at medical grand rounds (MGR) following the transition from an in-person to a remote format, and as a function of the COVID-19 census at Vanderbilt Medical Center. Methods: Faculty attendee characteristics were obtained from Department of Medicine records. Attendance was recorded using an SMS text message-based system. The daily COVID-19 census was recorded independently by hospital administration. The main attendance metric was the proportion of eligible faculty who attended each MGR. Comparisons were made for the entire cohort and for individual faculty. Results: The observation period ran from March 2019 to June 2021 and included 101 MGR conferences with more than 600 eligible faculty. Overall attendance was unchanged between the in-person and remote formats (12,536/25,808, 48.6% vs 16,727/32,680, 51.2%; P=.44) and did not change significantly during a surge in the COVID-19 census. Individual faculty members' attendance rates varied widely. Absolute differences between formats were less than −20% or greater than 20% for one-third (160/476, 33.6%) of faculty. Attendance by pulmonary and critical care faculty increased during the remote format compared with the in-person format (1450/2616, 55.4% vs 1004/2045, 49.1%; P<.001). A cloud-based digital archive of MGR lectures was accessed by <1% of faculty per conference. Conclusions: Overall faculty attendance at MGR did not change following the transition to a remote format, regardless of the COVID-19 census, but individual attendance habits fluctuated in a bidirectional manner. Incentivizing use of the digital archive may represent an opportunity to increase faculty consumption of MGR content.
UR - https://mededu.jmir.org/2024/1/e43705
UR - http://dx.doi.org/10.2196/43705
UR - http://www.ncbi.nlm.nih.gov/pubmed/38029287
ID - info:doi/10.2196/43705
ER -
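For context on the attendance comparison above, the sketch below runs a naive two-proportion z-test on the quoted counts. The paper reports P=.44, presumably from an analysis that accounts for repeated observations of the same faculty across conferences; this pooled test ignores that clustering and is shown only to illustrate the raw proportions, not to reproduce the published P value.

```python
# Naive two-proportion comparison of overall MGR attendance
# (12,536/25,808 in person vs 16,727/32,680 remote). This pooled
# z-test ignores clustering by faculty member, so it will not
# reproduce the paper's P=.44; it only illustrates the raw rates.
from statsmodels.stats.proportion import proportions_ztest

attended = [12536, 16727]   # in-person, remote
eligible = [25808, 32680]
stat, p = proportions_ztest(attended, eligible)
print(f"in person {attended[0]/eligible[0]:.1%} vs remote "
      f"{attended[1]/eligible[1]:.1%}; naive z-test P={p:.3g}")
```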
TY - JOUR
AU - Surapaneni, Mohan Krishna
PY - 2023/11/7
TI - Assessing the Performance of ChatGPT in Medical Biochemistry Using Clinical Case Vignettes: Observational Study
JO - JMIR Med Educ
SP - e47191
VL - 9
KW - ChatGPT
KW - artificial intelligence
KW - medical education
KW - medical biochemistry
KW - biochemistry
KW - chatbot
KW - case study
KW - case scenario
KW - medical exam
KW - medical examination
KW - computer generated
N2 - Background: ChatGPT has recently gained global attention owing to its high performance in generating a wide range of information and retrieving data instantaneously. ChatGPT has also been tested on the United States Medical Licensing Examination (USMLE) and passed it successfully. Thus, its usability in medical education is now one of the key discussions worldwide. Objective: The objective of this study is to evaluate the performance of ChatGPT in medical biochemistry using clinical case vignettes. Methods: The performance of ChatGPT was evaluated using 10 clinical case vignettes in medical biochemistry. Clinical case vignettes were randomly selected and input into ChatGPT along with the response options. Each clinical case was tested twice. The answers generated by ChatGPT were saved and checked against our reference material. Results: ChatGPT generated correct answers for 4 questions on the first attempt. For the other cases, the responses generated by ChatGPT differed between the first and second attempts. In the second attempt, ChatGPT provided correct answers for 6 questions and incorrect answers for 4 questions out of the 10 cases. Notably, for case 3, different answers were obtained across multiple attempts, which we attribute to the complexity of the case: it required addressing various critical medical aspects of amino acid metabolism in a balanced approach. Conclusions: According to the findings of our study, ChatGPT may not be considered an accurate information provider for application in medical education to improve learning and assessment. However, our study was limited by a small sample size (10 clinical case vignettes) and the use of the publicly available version of ChatGPT (version 3.5). Although artificial intelligence (AI) has the capability to transform medical education, we emphasize that the data produced by such AI systems must be validated for correctness and dependability before being implemented in practice.
UR - https://mededu.jmir.org/2023/1/e47191
UR - http://dx.doi.org/10.2196/47191
UR - http://www.ncbi.nlm.nih.gov/pubmed/37934568
ID - info:doi/10.2196/47191
ER -
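A hedged sketch of the evaluation protocol in the abstract above: submit each vignette twice and compare the answers against a reference key. The study used the public ChatGPT (GPT-3.5) web interface; this approximation uses the OpenAI API with the gpt-3.5-turbo model, and the vignette text, answer key, and containment-based scoring are placeholders, not the authors' actual materials.

```python
# Approximate the study's protocol: query each case vignette twice,
# then check answer consistency and correctness against a reference key.
# Vignettes/keys are placeholders; scoring by substring match is crude.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

vignettes = {"case_1": ("<vignette text with options A-D>", "B")}  # placeholder

for case, (prompt, reference) in vignettes.items():
    answers = []
    for attempt in range(2):  # the study queried each case twice
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        answers.append(resp.choices[0].message.content.strip())
    consistent = answers[0] == answers[1]
    correct = [reference in a for a in answers]
    print(case, "consistent:", consistent, "correct per attempt:", correct)
```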