%0 Journal Article
%@ 2369-3762
%I JMIR Publications
%V 10
%P e50174
%T ChatGPT in Medical Education: A Precursor for Automation Bias?
%A Nguyen, Tina
%+ The University of Texas Medical Branch, 301 University Blvd, Galveston, TX, 77551, United States, 1 4097721118, nguy.t921@gmail.com
%K ChatGPT
%K artificial intelligence
%K AI
%K medical students
%K residents
%K medical school curriculum
%K medical education
%K automation bias
%K large language models
%K LLMs
%K bias
%D 2024
%7 17.1.2024
%9 Editorial
%J JMIR Med Educ
%G English
%X Artificial intelligence (AI) in health care has the promise of providing accurate and efficient results. However, AI can also be a black box, where the logic behind its results is nonrational. There are concerns about these questionable results being used in patient care. As physicians have the duty to provide care based on their clinical judgment in addition to their patients’ values and preferences, it is crucial that physicians validate the results from AI. Yet, some physicians exhibit a phenomenon known as automation bias, in which the user assumes that AI is always right. This is a dangerous mindset, as users exhibiting automation bias will not validate the results, given their trust in AI systems. Several factors affect a user’s susceptibility to automation bias, such as inexperience or being born in the digital age. In this editorial, I argue that these factors, together with a lack of AI education in the medical school curriculum, cause automation bias. I also explore the harms of automation bias and why prospective physicians need to be vigilant when using AI. Furthermore, it is important to consider what attitudes are being taught to students when introducing ChatGPT, which could be some students’ first time using AI, prior to their use of AI in the clinical setting.
Therefore, in an attempt to avoid the problem of automation bias in the long term, in addition to incorporating AI education into the curriculum, which is necessary, the use of ChatGPT in medical education should be limited to certain tasks. Otherwise, having no constraints on what ChatGPT should be used for could lead to automation bias.
%M 38231545
%R 10.2196/50174
%U https://mededu.jmir.org/2024/1/e50174
%U https://doi.org/10.2196/50174
%U http://www.ncbi.nlm.nih.gov/pubmed/38231545
%0 Journal Article
%@ 2369-3762
%I JMIR Publications
%V 9
%P e51494
%T Can AI Mitigate Bias in Writing Letters of Recommendation?
%A Leung, Tiffany I
%A Sagar, Ankita
%A Shroff, Swati
%A Henry, Tracey L
%+ JMIR Publications, 130 Queens Quay East, Unit 1100, Toronto, ON, M5A 0P6, Canada, 1 416 583 2040, tiffany.leung@jmir.org
%K sponsorship
%K implicit bias
%K gender bias
%K bias
%K letters of recommendation
%K artificial intelligence
%K large language models
%K medical education
%K career advancement
%K tenure and promotion
%K promotion
%K leadership
%D 2023
%7 23.8.2023
%9 Editorial
%J JMIR Med Educ
%G English
%X Letters of recommendation play a significant role in higher education and career progression, particularly for women and underrepresented groups in medicine and science. Already, there is evidence to suggest that written letters of recommendation contain language that expresses implicit, or unconscious, biases, and that these biases occur for all recommenders regardless of the recommender’s sex. Given that all individuals have implicit biases that may influence language use, there may be opportunities to apply contemporary technologies, such as large language models or other forms of generative artificial intelligence (AI), to augment and potentially reduce implicit biases in the written language of letters of recommendation.
In this editorial, we provide a brief overview of existing literature on the manifestations of implicit bias in letters of recommendation, with a focus on academia and medical education. We then highlight potential opportunities and drawbacks of applying this emerging technology to augment the focused, professional task of writing letters of recommendation. We also offer best practices for integrating these tools into the routine writing of letters of recommendation and conclude with our outlook for the future of generative AI applications in supporting this task.
%M 37610808
%R 10.2196/51494
%U https://mededu.jmir.org/2023/1/e51494
%U https://doi.org/10.2196/51494
%U http://www.ncbi.nlm.nih.gov/pubmed/37610808
%0 Journal Article
%@ 2369-3762
%I JMIR Publications
%V 9
%P e50945
%T The Role of Large Language Models in Medical Education: Applications and Implications
%A Safranek, Conrad W
%A Sidamon-Eristoff, Anne Elizabeth
%A Gilson, Aidan
%A Chartash, David
%+ Section for Biomedical Informatics and Data Science, Yale University School of Medicine, 9th Fl, 100 College St, New Haven, CT, 06510, United States, 1 317 440 0354, david.chartash@yale.edu
%K large language models
%K ChatGPT
%K medical education
%K LLM
%K artificial intelligence in health care
%K AI
%K autoethnography
%D 2023
%7 14.8.2023
%9 Editorial
%J JMIR Med Educ
%G English
%X Large language models (LLMs) such as ChatGPT have sparked extensive discourse within the medical education community, spurring both excitement and apprehension. Written from the perspective of medical students, this editorial offers insights gleaned through immersive interactions with ChatGPT, contextualized by ongoing research into the imminent role of LLMs in health care. Three distinct positive use cases for ChatGPT were identified: facilitating differential diagnosis brainstorming, providing interactive practice cases, and aiding in multiple-choice question review.
These use cases can effectively help students learn foundational medical knowledge during the preclinical curriculum while reinforcing the learning of core Entrustable Professional Activities. Simultaneously, we highlight key limitations of LLMs in medical education, including their insufficient ability to teach the integration of contextual and external information, comprehend sensory and nonverbal cues, cultivate rapport and interpersonal interaction, and align with overarching medical education and patient care goals. Through interacting with LLMs to augment learning during medical school, students can gain an understanding of their strengths and weaknesses. This understanding will be pivotal as we navigate a health care landscape increasingly intertwined with LLMs and artificial intelligence.
%M 37578830
%R 10.2196/50945
%U https://mededu.jmir.org/2023/1/e50945
%U https://doi.org/10.2196/50945
%U http://www.ncbi.nlm.nih.gov/pubmed/37578830
%0 Journal Article
%@ 2369-3762
%I JMIR Publications
%V 9
%P e46885
%T The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers
%A Eysenbach, Gunther
%+ JMIR Publications, 130 Queens Quay East, Suite 1100-1102, Toronto, ON, M5A 0P6, Canada, 1 416 786 6970, geysenba@gmail.com
%K artificial intelligence
%K AI
%K ChatGPT
%K generative language model
%K medical education
%K interview
%K future of education
%D 2023
%7 6.3.2023
%9 Editorial
%J JMIR Med Educ
%G English
%X ChatGPT is a generative language model tool launched by OpenAI on November 30, 2022, enabling the public to converse with a machine on a broad range of topics. In January 2023, ChatGPT reached over 100 million users, making it the fastest-growing consumer application to date. This interview with ChatGPT is part 2 of a larger interview with ChatGPT.
It provides a snapshot of the current capabilities of ChatGPT and illustrates the vast potential for medical education, research, and practice, but it also hints at current problems and limitations. In this conversation with Gunther Eysenbach, the founder and publisher of JMIR Publications, ChatGPT generated some ideas on how to use chatbots in medical education. It also illustrated its capabilities to generate a virtual patient simulation and quizzes for medical students; critiqued a simulated doctor-patient communication and attempted to summarize a research article (which turned out to be fabricated); commented on methods to detect machine-generated text to ensure academic integrity; generated a curriculum for health professionals to learn about artificial intelligence (AI); and helped to draft a call for papers for a new theme issue to be launched in JMIR Medical Education on ChatGPT. The conversation also highlighted the importance of proper “prompting.” Although the language generator does make occasional mistakes, it admits these when challenged. The well-known disturbing tendency of large language models to hallucinate became evident when ChatGPT fabricated references. The interview provides a glimpse into the capabilities and limitations of ChatGPT and the future of AI-supported medical education. Due to the impact of this new technology on medical education, JMIR Medical Education is launching a call for papers for a new e-collection and theme issue. The initial draft of the call for papers was entirely machine-generated by ChatGPT but will be edited by the human guest editors of the theme issue.
%M 36863937
%R 10.2196/46885
%U https://mededu.jmir.org/2023/1/e46885
%U https://doi.org/10.2196/46885
%U http://www.ncbi.nlm.nih.gov/pubmed/36863937