Published in Vol 12 (2026)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/77127.
Artificial Intelligence in Medical Education: Transformative Potential, Current Applications, and Future Implications

1One Health Research Group, Universidad de Las Américas, Via Nayon S/N, Quito, Ecuador

2Escuela de Comunicación, Universidad Latina de Costa Rica, San Jose, Costa Rica

Corresponding Author:

Esteban Ortiz-Prado, MSc, MPH, MD, PhD


Artificial intelligence (AI) is increasingly influencing medical education by enabling adaptive learning, AI-assisted assessment, and scalable instructional tools. Natural language processing, machine learning, and generative large language models offer innovative ways to support teaching and learning, yet their integration raises ethical, pedagogical, and infrastructural challenges. This viewpoint article aims to examine the current applications, benefits, and challenges of AI in medical education and propose strategies for responsible and effective integration. AI tools such as chatbots, virtual patients, and intelligent tutoring systems enhance personalized and immersive learning. Automated grading and predictive analytics support efficient evaluations, while AI-assisted writing tools streamline content creation. Despite these advances, concerns persist around data privacy, algorithmic bias, unequal access, and diminished critical thinking. Key solutions include AI literacy training, data oversight, equitable infrastructure, and curriculum reform. The FACETS framework offers 6 dimensions (ie, form, application, context, instructional mode, technology, and the SAMR [substitution, augmentation, modification, redefinition] model) to evaluate AI integration effectively. AI offers substantial opportunities to transform medical education, but its adoption must be ethical, equitable, and pedagogically grounded. Strategic frameworks such as FACETS, combined with institutional governance and cross-sector collaboration, are essential to guide implementation so that AI enhances learning outcomes while preserving the humanistic foundations of medical practice.

JMIR Med Educ 2026;12:e77127

doi:10.2196/77127


Artificial intelligence (AI) emerged in the mid-20th century, particularly after the Dartmouth Conference in 1956, as an interdisciplinary field integrating computer science, mathematics, logic, and cognitive science. Its goal was to simulate human cognitive processes and endow computer systems with human-like abilities, such as reasoning, learning, and decision-making [1]; although the field is far older, the first published study on AI in medical education did not appear until 1992. In recent decades, AI applications have expanded significantly, finding roles in fields such as programming, statistics, preschool education, and university disciplines such as medicine and medical education. However, it was not until the launch of ChatGPT (OpenAI) in 2022 that AI gained widespread popularity, taking on a prominent role in many domains of knowledge. This growth has been driven by technological advances and increasing expectations fueled by globalization and the vast flow of information in today’s interconnected world [2,3].

In the field of sciences, including medicine, biomedicine, and related disciplines, the use of AI extends far beyond the traditional teaching-learning process. It now plays a fundamental role in the entire professional life cycle of medical practitioners. This includes preparation, continuous education, and the acquisition of knowledge. The rapid growth of scientific literature and the constant influx of new information push physicians to seek knowledge from diverse sources, aiming to stay as current as possible. These sources go beyond traditional books or scientific articles, incorporating videos, interactive platforms, virtual classes, and other innovative tools that facilitate learning and professional development [1,2].

AI supports both learning and teaching, leveraging technologies such as natural language processing (NLP), machine learning (ML), and generative pretrained transformer architectures. These technologies enable the creation of innovative educational strategies, revolutionary teaching methods, curriculum design, content development, and the evaluation of various academic processes. This makes AI an indispensable tool for medical education, facilitating both the transmission of knowledge and the continuous improvement of educational systems [4].

The dynamics have changed so rapidly that teachers, educators, students, and trainees are using AI, yet very few have been adequately trained for this purpose. Scientific literature is growing at an unprecedented pace, and while the integration of AI into medical education offers significant benefits, it also presents substantial challenges. One of the most profound critiques involves ethical considerations, the risk of undermining academic integrity, and concerns about students’ reliance on AI for assignments, raising critical questions among educators. As Pineda-de-Alcázar [5] suggests, these technological advances raise fundamental questions about the nature of communication between humans and machines, as AI moves closer to interacting with us in ways that mimic human thought and emotional complexity. Within this context, we explore the applicability and evidence for using AI in biomedical teaching practices.

Against this backdrop, this manuscript addresses the following research question: How are distinct AI modalities being deployed in medical education across learning, assessment, administration, and academic content creation, and what ethically grounded strategies best support their responsible integration? Accordingly, our objective is to examine the applications, benefits, and challenges of AI in medical education across these domains and propose strategies for responsible and effective integration.


Overview

AI is rapidly transforming medical education beyond the use of conversational agents such as ChatGPT. Its current impact can be categorized into four main areas: interactive learning tools, intelligent assessment systems, administrative and logistical support, and content creation.

Interactive Learning Tools

AI-driven virtual assistants and chatbots—such as ChatGPT—facilitate dialogue-based learning, allowing students to practice history taking and enhance communication skills through simulated patient encounters [6]. Moreover, immersive technologies, including virtual reality, intelligent tutoring systems (ITS), medical robotics, and augmented reality, provide hands-on training experiences. These tools promote the development of clinical reasoning, surgical technique, and medical imaging interpretation while delivering personalized feedback and adaptive learning pathways [7,8].

Intelligent Assessment Systems

AI enhances evaluation processes by enabling automated grading and reducing evaluator bias. NLP can analyze narrative feedback to identify patterns related to competency, professionalism, and potential learner risk [9]. Innovations such as virtual Objective Structured Clinical Examinations (OSCEs) and automated assessments of clinical case presentations streamline performance evaluations [10-13]. In addition, ML contributes to the creation and validation of examination items, although human oversight remains essential to maintain content integrity [14,15].

Administrative and Logistical Support

In educational administration settings, AI automates the documentation of clinical experiences. For example, NLP applications can map trainee documentation to core competencies with high accuracy (92%‐97%), significantly reducing clerical workload [16-19]. Furthermore, predictive modeling aids in optimizing residency selection and procedural tracking, enhancing the efficiency of academic programs [20,21].
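
The systems reported above use trained NLP models; as a deliberately simplified illustration of the underlying mapping task, the sketch below matches clinical-note text against competency keywords. The competency labels and keyword lists are hypothetical assumptions for illustration, not those of any cited system.

```python
# Illustrative competency keyword map (hypothetical labels and terms;
# the production systems cited in the text use trained NLP models,
# not keyword rules).
COMPETENCY_KEYWORDS = {
    "patient_care": ["history", "physical exam", "treatment plan"],
    "communication": ["counseled", "explained", "informed consent"],
    "procedures": ["sutured", "intubated", "central line"],
}

def map_note_to_competencies(note: str) -> set:
    """Return the competencies whose keywords appear in a clinical note."""
    text = note.lower()
    return {
        competency
        for competency, keywords in COMPETENCY_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    }

note = "Obtained history, performed physical exam, and counseled the patient."
print(map_note_to_competencies(note))  # matches patient_care and communication
```

Even a toy version makes the clerical-workload point concrete: the mapping runs in milliseconds per note, whereas manual tagging of the same documentation is a recurring faculty burden.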

Content Creation and Academic Writing

AI also supports academic writing by assisting in the drafting of structured clinical notes, research manuscripts, and literature summaries. These tools enhance clarity, reduce writing time, and facilitate effective scholarly communication [22,23] (Table 1).

Table 1. Synthesis of artificial intelligence (AI) applications in medical education.

| Category and AI application | Description |
| --- | --- |
| Virtual assistants and chatbots | |
| Interactive dialogue systems | Chatbots enable dialogue-based learning, supporting communication and history-taking practice. |
| Generative conversational AI | Tools such as ChatGPT simulate realistic patient interactions to develop clinical communication skills. |
| Simulated medical cases | NLP^a-driven simulations help assess and improve students’ clinical reasoning and decision-making. |
| Learning tools, simulation, and VR^b | |
| Virtual surgical assistants | AI evaluates surgical performance in simulations, offering feedback for skill refinement. |
| Virtual patient avatars | 2D/3D avatars simulate clinical scenarios to train students in consultations and emergency care. |
| Virtual patient simulators and ITS^c | Adaptive platforms offer real-time feedback to strengthen reasoning and procedural skills. |
| Medical robots | Used for pharmacologic simulations and clinical knowledge assessments in competency-based training. |
| Robot-assisted surgical training | Enables safe, simulated practice of complex surgical procedures. |
| Augmented reality in simulations | Augmented reality tools provide interactive training for interpreting imaging (eg, x-rays, CT^d scans). |
| AI-integrated electronic learning platforms | Combine AI tutoring with online and in-person education for personalized learning paths. |
| NLP for literature processing | NLP tools assist in summarizing and interpreting large volumes of scientific literature. |
| Intelligent assessment systems | |
| Unbiased candidate selection | AI reduces demographic bias in residency selection processes. |
| Test creation | AI generates multiple-choice questions and clinical scenarios for examinations and curriculum. |
| Automated grading | Systems provide instant feedback on assessments while requiring human oversight. |
| Content creation and writing support | |
| AI-assisted academic writing | Tools assist in drafting structured notes and summaries to enhance academic communication. |
| AI in scientific research | Supports manuscript drafting and editing, with attention to avoiding fabricated content. |

^a NLP: natural language processing.

^b VR: virtual reality.

^c ITS: intelligent tutoring system.

^d CT: computed tomography.


AI technologies are reshaping medical education by enabling personalized learning, enhancing clinical training, and supporting academic content development. While these advancements offer considerable promise, they also raise significant risks and ethical challenges. Responsible integration of AI requires careful attention to its implications, along with institutional commitment to adapting curricula and policies.

AI-driven technologies provide highly personalized learning experiences that can improve academic performance and clinical competence. One major advantage is the ability to tailor educational content and assessments to the individual learner’s needs, fostering more efficient knowledge acquisition and deeper conceptual understanding [24]. Furthermore, AI-powered simulations—including virtual surgical assistants, patient avatars, and procedural simulators—offer safe, immersive environments where students can refine their diagnostic and technical skills without risk to patients [25].

Exposure to AI also prepares students for its growing role in clinical practice. Familiarity with decision support tools enhances future physicians’ readiness to integrate AI into diagnosis, treatment planning, and patient care [26]. From an instructional perspective, generative AI accelerates the development of educational content by facilitating the creation of multiple-choice questions, clinical vignettes, and tailored teaching modules [23,27,28]. Notably, AI-generated resources have the potential to reduce educational disparities by enhancing access to high-quality materials in underserved regions [23,27].
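
Generated items still require structural checks before classroom use. The sketch below is a minimal, hypothetical illustration of such a check; the item schema (stem, options, answer) is an assumed convention for this example, not a standard format, and real item banks additionally need content review by faculty.

```python
def validate_mcq(item: dict) -> list:
    """Return a list of structural problems with a generated MCQ (empty if OK).

    The stem/options/answer schema is illustrative, not a standard.
    Content accuracy still requires expert human review.
    """
    problems = []
    if not item.get("stem", "").strip():
        problems.append("missing stem")
    options = item.get("options", [])
    if len(options) < 4:
        problems.append("fewer than 4 options")
    if len(set(options)) != len(options):
        problems.append("duplicate options")
    if item.get("answer") not in options:
        problems.append("keyed answer not among options")
    return problems

item = {
    "stem": "Which vitamin deficiency causes scurvy?",
    "options": ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"],
    "answer": "Vitamin C",
}
print(validate_mcq(item))  # → []
```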


Risks and Concerns

Despite its benefits, the incorporation of AI into medical training poses several concerns. One is the potential impact on career decisions. Some students may be deterred from entering fields such as radiology due to perceived reductions in future demand, as AI systems become increasingly proficient at image interpretation [29]. Another concern is the limited integration of AI-related content in existing medical curricula, leaving students unprepared to engage with AI-driven health care systems [30]. Additionally, there is the risk of overreliance on AI tools, which may undermine the development of critical hands-on clinical skills, diagnostic reasoning, and human-centered decision-making [31].

Ethical Considerations

The ethical use of AI in medical education is a growing focus of scholarly and institutional discourse. A key consideration is the preservation of human judgment. While AI can support decision-making, it must not displace the clinician’s responsibility for critical thinking, empathy, and ethical deliberation [32]. Moreover, transparency and accountability are essential. Medical students must be taught to recognize biases, uphold data privacy, and ensure fairness in AI-mediated outcomes. Equitable access is also critical; AI integration must address the digital divide to ensure that learners across different regions and socioeconomic backgrounds benefit equally [29,33,34].

In medical education, AI use may compromise several ethical principles: academic integrity (fabrication or inaccuracy of references), justice (potential biases), nonmaleficence (overreliance on algorithmic outputs by students), and confidentiality (risks of training data extraction). These concerns are supported by empirical findings. For example, in a set of 115 references generated with ChatGPT-3.5 (OpenAI), 47% (n/N) were fabricated, 46% (n/N) inaccurate, and 7% (n/N) authentic and correct [35]; in another analysis of 636 citations, 55% (n/N) were fabricated with GPT-3.5 and 18% with GPT-4 (OpenAI), alongside errors in “real” citations [36]. Editorial practice has likewise documented high similarity indices with plagiarism risk and failures to detect misuse in medical-scientific articles—including AI-generated images with biological errors and the publication of stock large language model (LLM) phrases (eg, “I do not have access to real-time information...”) in papers that passed peer review [37]. Beyond GenAI, computer vision used for instructional purposes (eg, medical imaging or intraoperative video) exhibits performance degradation due to dataset shift, dependence on spurious signals across hospitals or equipment, and vulnerability to adversarial attacks, necessitating external validation, audits, and technical safeguards when incorporated into educational activities [38,39]. From a privacy standpoint, LLMs have demonstrated memorization of training data and the potential for verbatim disclosure, reinforcing the need to avoid uploading protected health information (PHI) and to adopt institutional controls (data policies, privacy-preserving environments, and prior review before use in classrooms or OSCEs) [40,41].
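
The warning about uploading PHI implies, at minimum, a deidentification pass before any text reaches an external model. The sketch below is a deliberately minimal illustration of that idea; the regex patterns are illustrative assumptions and nowhere near sufficient for real PHI, which requires validated deidentification tools and human review within institutional controls.

```python
import re

# Illustrative patterns only; validated deidentification pipelines and
# human review are required before any real clinical text leaves an
# institutional environment.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),         # US SSN-like IDs
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),  # simple dates
    (re.compile(r"\bMRN[:\s]*\d+\b", re.I), "[MRN]"),        # record numbers
]

def scrub(text: str) -> str:
    """Replace pattern matches with placeholder tokens."""
    for pattern, token in PHI_PATTERNS:
        text = pattern.sub(token, text)
    return text

print(scrub("Pt MRN: 483920 seen 3/14/2024, SSN 123-45-6789."))
# → Pt [MRN] seen [DATE], SSN [SSN].
```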

In alignment with the United Nations Educational, Scientific and Cultural Organization (UNESCO) Recommendation on the Ethics of Artificial Intelligence (2021) and the World Health Organization Guidance on Ethics and Governance of AI for Health (2021), we translate principles into practice through five tightly scoped, actionable axes [42,43]: (1) governance and accountability: establish an oversight committee, maintain a public registry of AI systems, and require predeployment Algorithmic Impact Assessments and Data Protection Impact Assessments; (2) human oversight in high-stakes uses: mandate dual human+AI scoring, structured debriefs, and explicit override criteria grounded in clinical or educational judgment; (3) transparency: disclose in syllabi which tools are used and for what purposes and limits, provide model or data cards, and require student disclosure of AI assistance; (4) privacy-preserving data governance: prohibit uploading PHI to public models; enforce data minimization and deidentification, role-based access controls, secure logging, and retention limits; and prefer on-prem and virtual private cloud solutions when processing student data; and (5) equity, safety, and robustness: conduct subgroup bias testing and disparate impact monitoring, pursue cross-site validation where feasible, and set predefined error thresholds with rollback plans. To operationalize these pillars, we propose a staged pathway that begins with pilots in controlled sandbox environments with Algorithmic Impact Assessments or Data Protection Impact Assessments and a clear evaluation plan, then scales gradually with bias and robustness audits and cross-site performance monitoring, and culminates in institutional integration with periodic audits, update protocols, and public reporting. Together, these steps aim to improve learning while safeguarding autonomy, justice, and trust.
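
The registry and predeployment assessments called for in axis 1 can be made concrete in code. The sketch below is a hypothetical illustration: the record fields, system names, and the clearance rule are our assumptions for this example, not an established standard.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an institutional AI-system registry (fields illustrative)."""
    name: str
    purpose: str
    risk_level: str                 # eg, "low" or "high-stakes"
    aia_completed: bool = False     # Algorithmic Impact Assessment
    dpia_completed: bool = False    # Data Protection Impact Assessment
    last_audit: Optional[date] = None

    def cleared_for_deployment(self) -> bool:
        # Predeployment rule from axis 1: both assessments must be complete.
        return self.aia_completed and self.dpia_completed

registry = [
    AISystemRecord("OSCE auto-grader", "formative grading", "high-stakes",
                   aia_completed=True, dpia_completed=True,
                   last_audit=date(2025, 9, 1)),
    AISystemRecord("Study chatbot", "self-directed practice", "low"),
]
print([r.name for r in registry if r.cleared_for_deployment()])
# → ['OSCE auto-grader']
```

Keeping such a registry queryable supports the public-reporting and periodic-audit steps of the staged pathway described above.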


AI in Curricular Development

To ensure responsible and effective use of AI, medical education should incorporate foundational and applied competencies. Core knowledge areas should include data science, ML, algorithm design, statistics, and basic coding principles [44,45].

Curricular modules should also address clinical applications of AI in domains such as surgical training, radiology interpretation, ophthalmologic analysis, and hematologic diagnostics [46-49]. Ethical and legal education remains vital, with an emphasis on transparency, privacy, and professional accountability [34,50]. Notably, one publication has even proposed establishing a dedicated specialty in Medical Data Sciences to address the growing demand for AI expertise in clinical settings [51].

Teaching Strategies

Educators are actively exploring pedagogical strategies to integrate AI into undergraduate medical training. AI-enhanced curriculum design, such as the use of chatbots and generative platforms, enables the creation of dynamic learning experiences tailored to student progression and clinical reasoning levels [28]. Similarly, adaptive assessment systems powered by AI can identify individual student weaknesses and provide targeted feedback, thereby optimizing skill development and educational outcomes [23,27].
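
One way such adaptive systems target feedback is by ranking topics by per-item accuracy. The sketch below is a minimal illustration under stated assumptions: the 0.6 cutoff and the topic names are illustrative placeholders, not validated standards.

```python
def weakest_topics(responses: dict, threshold: float = 0.6) -> list:
    """Return topics whose accuracy falls below `threshold`, weakest first.

    `responses` maps a topic to per-item correctness (True/False); the
    0.6 cutoff is an illustrative placeholder, not a validated standard.
    """
    accuracy = {t: sum(r) / len(r) for t, r in responses.items() if r}
    return sorted((t for t, a in accuracy.items() if a < threshold),
                  key=lambda t: accuracy[t])

responses = {
    "cardiology": [True, True, False, True],    # 0.75
    "renal": [False, False, True, False],       # 0.25
    "pharmacology": [True, False, False, True], # 0.50
}
print(weakest_topics(responses))  # → ['renal', 'pharmacology']
```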


The integration of AI into medical education can be meaningfully interpreted through Oberg’s theory of culture shock, which comprises four progressive phases: honeymoon, frustration, adaptation, and acceptance [52]. This framework provides a valuable lens to examine the emotional, cognitive, and institutional transitions associated with AI adoption in medical education.

Adoption Trajectory

The Honeymoon Phase: Enthusiasm and Idealism

The initial response to AI is often marked by enthusiasm and idealism. During this phase, perspective articles and expert commentaries emphasize AI’s transformative potential, exploring novel applications and offering conceptual frameworks for implementation. However, these early contributions frequently understate practical limitations, ethical dilemmas, and the systemic challenges associated with integration [52].

The Frustration Phase: Uncertainty and Resistance

As implementation progresses, institutions often encounter a phase of skepticism and resistance. Concerns may stem from limited technical knowledge, fears of professional displacement, or mistrust in AI’s reliability. Faculty unfamiliar with digital tools may experience anxiety or a sense of obsolescence. Despite its discomfort, this phase is critical, as it catalyzes essential discussions on ethics, equity, and the risks of uncritical adoption [52].

The Adaptation Phase: Pragmatic Implementation

Over time, a more balanced perspective emerges, with institutions adopting AI tools in measured and context-sensitive ways. In this phase, AI is viewed not as a panacea but as a complement to existing pedagogical practices. Implementation strategies begin to emphasize design thinking, iterative refinement, and context-specific use cases. Nonetheless, many approaches remain limited in scope and lack a longitudinal vision [52].

The Acceptance Phase: Thoughtful Integration and Leadership

In the final phase, AI becomes a normalized and integrated component of the educational ecosystem. Educators demonstrate fluency in AI applications and embed them into curricula, assessments, and research activities with ethical awareness and pedagogical intent. Institutions at this stage offer scalable models characterized by innovation, critical engagement, and a commitment to preserving the humanistic core of medical education [52].

Persistent Challenges in AI Integration

Despite increasing interest and promising use cases, several challenges continue to hinder the effective integration of AI in medical education. Ethical and legal concerns include risks to data privacy, opaque accountability for AI-supported decisions, and the potential erosion of humanistic principles in teaching and care; AI use must remain aligned with the ethical foundations of medical practice and must respect the clinician-patient relationship [53]. Unreliable or low-quality outputs from generative models, such as ChatGPT and similar tools, may perpetuate outdated, biased, or inaccurate information when these systems are trained on flawed data; inaccuracies in AI-generated educational content can compromise learner safety and clinical competence, underscoring the need for rigorous data curation and active human oversight [1,54]. Scalability and equity barriers arise from limited digital infrastructure and high implementation costs, particularly in low- and middle-income countries, where disparities restrict equitable access to AI-driven tools and may reinforce existing educational inequities [54]. Finally, curricular displacement is a concern: overdependence on AI may inadvertently marginalize foundational clinical reasoning, communication, and hands-on skills if these technologies are introduced without explicit pedagogical safeguards and thoughtful alignment with existing curricula [51].

Proposed Solutions for Effective and Ethical AI Adoption

To harness the benefits while mitigating potential risks, a comprehensive strategy is essential—one that encompasses ethical considerations, pedagogical advancements, and technological inclusivity. Several complementary approaches can facilitate the responsible adoption of AI in medical education. Strengthening ethical frameworks is critical to ensure transparency, fairness, and respect for patient-centered values throughout AI development and deployment [53]. Enhancing data quality and human oversight is equally important: training AI systems on peer-reviewed data and subjecting outputs to expert review can reduce misinformation and support safe implementation [1,54]. Promoting equitable access involves designing adaptable, low-cost solutions tailored to diverse educational contexts and resource environments, particularly in underserved regions [54]. Reforming curricula to include AI literacy prepares future physicians to engage critically with algorithmic tools, with core competencies that encompass data science, algorithmic reasoning, ethical discernment, and digital fluency [51]. Supporting longitudinal research on the long-term educational and clinical impacts of AI integration will provide the evidence base needed for informed decision-making and continuous improvement [54].

These strategies are further detailed in Table 2, which aligns specific challenges with corresponding solutions to guide effective AI integration in medical education.

Table 2. Strategic challenges and solutions for the integration of artificial intelligence (AI) in medical education.

| Challenge | Proposed solution |
| --- | --- |
| Ethical and legal risks | Strengthen ethical oversight and align AI use with core humanistic values [53] |
| Unreliable AI-generated content | Ensure high-quality data sources and implement human oversight [54] |
| Limited scalability and access | Develop accessible, adaptable tools for global use [54] |
| Curriculum imbalance | Integrate AI literacy into core training without displacing clinical fundamentals [45,51] |
| Need for evidence-based integration | Support rigorous research on educational impact and long-term outcomes [54] |

FACETS as a Unifying Lens for Implementation

In the current context, recent studies typically address isolated components of AI in medical education, such as specific teaching technologies or environments. However, to advance this field, a more cohesive framework is required that facilitates replication, innovation, and a comprehensive understanding of the educational role of AI [2]. To this end, the FACETS framework has been proposed as a guiding model for future research and implementation in medical education settings. By examining six key dimensions—form, application, context, instructional mode, technology, and the SAMR (substitution, augmentation, modification, redefinition) model—this framework provides a structured approach for assessing how AI tools align with educational objectives (Figure 1). This multifaceted analysis ensures that AI implementations are pedagogically sound and contextually appropriate [2].

The tangible applicability of FACETS can be appreciated by retrospectively mapping published interventions (Table 3). Holderried et al [55] evaluated a GPT-4–based simulated patient that provided immediate, structured feedback for history taking among third-year medical students; they observed more than 99% medically plausible exchanges and near-perfect agreement with a human rater (κ=0.832), although 8 of 45 feedback categories showed lower concordance, underscoring the need for human oversight. Similarly, Luordo et al [56] used GPT-4 to grade OSCE reports from 96 students, finding high concordance with human graders (intraclass correlation coefficient=0.77 for single measures; 0.91 for average measures), systematically stricter AI scoring (AI mean 8.88, SD 2.96 vs experts’ mean 12.39, SD 3.22; mean difference −3.51 points), and substantially shorter grading times (~24 min vs 2‐4 h). Taken together, although the original studies did not explicitly use FACETS, this mapping supports the utility of the FACETS framework as an analytic lens to describe, compare, and guide future implementations of AI in medical education, particularly in undergraduate contexts.
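
Cohen’s κ, the chance-corrected agreement statistic reported by Holderried et al [55], can be computed directly from two raters’ labels. The sketch below is self-contained; the rating data are illustrative, not the study’s.

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement under independent raters with these marginals.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Illustrative labels for 10 feedback items scored by an AI and a human rater.
ai    = ["good", "good", "poor", "good", "poor",
         "good", "good", "poor", "good", "good"]
human = ["good", "good", "poor", "good", "good",
         "good", "good", "poor", "good", "good"]
print(round(cohens_kappa(ai, human), 3))  # → 0.737
```

Note that κ penalizes raw agreement for what two raters would match on by chance alone, which is why it is preferred over simple percentage agreement in these evaluations.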

Beyond the specific cases mapped earlier, FACETS can also be used to anticipate and assess emerging applications of AI in medical education—ranging from ITS and personalized learning platforms to chatbots (eg, ChatGPT) and intraoperative video analysis—by aligning each use case with pedagogical intent, context, and level of transformation [2].

Figure 1. FACETS framework for integrating AI into medical education. This schematic depicts the FACETS framework with six dimensions—form, application, context, instructional mode, technology, and SAMR—arrayed around a central objective: aligning interventions with learning outcomes and contextual fit for the integration of AI in medical education. AI: artificial intelligence; AR: augmented reality; CV: computer vision; GPT: generative pretrained transformer; HITL: human-in-the-loop; ITS: intelligent tutoring system; LLM: large language model; ML: machine learning; NLP: natural language processing; OSCE: Objective Structured Clinical Examination; PHI: protected health information; SAMR: substitution, augmentation, modification, redefinition; VR: virtual reality.
Table 3. Form, application, context, instructional mode, technology, SAMR^c (FACETS) mapping of 2 real-world artificial intelligence (AI) interventions in medical education.

| Dimension (FACETS) | Case 1: prospective study; 106 conversations (1894 question-answer pairs) [55] | Case 2: observational study; 96 students; AI versus human graders [56] |
| --- | --- | --- |
| Form | LLM^a chatbot simulating a patient with structured feedback | Automated scoring system for OSCE^b reports (LLM) |
| Application | History-taking practice plus evaluation of coverage of critical items | Formative or summative grading of reports using an institutional checklist |
| Context | Third-year students, European medical school, individual practice | Hospital teaching unit; 96 students in a real OSCE |
| Instructional mode | AI-guided practice with comparison against a human rater (for debrief) | AI-assisted assessment with expert benchmarking (human-in-the-loop) |
| Technology | GPT-4; analysis of response plausibility and category-based feedback quality | Batch GPT-4 pipeline; comparison with human graders; time logging |
| SAMR (level of transformation) | Augmentation → modification (replaces a standardized patient and adds rubric-aligned feedback or analytics) | Augmentation → modification (from manual scoring to automated, with traceability and speed) |

^a LLM: large language model.

^b OSCE: Objective Structured Clinical Examination.

^c SAMR: substitution, augmentation, modification, redefinition.

Emerging Applications and Integrative Synthesis for Decision-Making

As AI becomes more deeply embedded in health care delivery, its incorporation into medical education continues to represent a critical yet uneven frontier. Although AI tools have shown promise in diagnosis, therapeutic planning, and operational efficiency, educational adoption has lagged—both in scale and in the rigor of implementation and evaluation. Key barriers include heterogeneous technological infrastructure—particularly in resource-limited settings—and limited AI literacy among students and faculty [57,58]. These challenges are compounded by the fact that many medical students, despite their interest in AI’s potential, express anxiety about downstream labor market implications [59,60].

Beyond the case-based exemplars mapped with FACETS, diverse emerging applications in medical education share several salient features: they prioritize deliberate practice with adaptive feedback and competency-aligned performance traceability; they operate predominantly at the augmentation and modification levels of the SAMR model (with few achieving true redefinition); and they rely on high-quality data (interaction logs, rubrics, audio-video) that demand psychometric validation, bias and drift monitoring, and robust privacy safeguards. The evidentiary base remains heterogeneous, with single-site pilots and intermediate metrics predominating over longitudinal outcomes. The human factor remains essential—explicit instructional design, structured debriefing, and human-in-the-loop approaches are needed to calibrate criteria and prevent overreliance. Absent targeted faculty development and institutional processes (including data governance, audits, and reporting standards), these solutions risk amplifying equity gaps.

To operationalize these principles and translate them into programmatic, monitorable curricular decisions, it is helpful to note that these common features are especially evident in intelligent tutoring systems (ITSs), clinical chatbots, AI-assisted assessment, personalized learning platforms, virtual reality and AI simulators, anatomy tools, and intraoperative video analytics (Table 4). Building on that foundation, cross-cutting applications that enable and govern implementation across workstreams assume particular importance: (1) multimodal adaptive analytics, which fuse text, audio-video, and interaction traces to personalize pacing and content while flagging early risk at the learner or course level [61]; (2) learning analytics dashboards, which integrate competencies, assessment evidence, and progression signals to guide just-in-time feedback, remediation, and coaching for students and instructors [62]; and (3) educational clinical decision support, implemented in deidentified, guideline-constrained sandbox environments to teach evidence retrieval, uncertainty appraisal, and human-AI teaming in diagnostic and therapeutic planning [63,64] (Table 4).
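
The early-risk flagging that such dashboards perform can be reduced to a simple rolling-average rule for illustration. The sketch below is a minimal assumption-laden stand-in: the window size and cutoff are illustrative placeholders, and real dashboards combine many signals (competencies, attendance, assessment traces) rather than a single score stream.

```python
def at_risk(scores: list, window: int = 3, cutoff: float = 65.0) -> bool:
    """Flag a learner whose recent rolling-average score dips below `cutoff`.

    Window size and cutoff are illustrative placeholders; production
    dashboards fuse many signals, not just one score series.
    """
    if len(scores) < window:
        return False  # not enough evidence yet
    recent = scores[-window:]
    return sum(recent) / window < cutoff

print(at_risk([80, 75, 70, 62, 58]))  # recent avg ≈ 63.3 < 65 → True
```

The design choice worth noting is the "not enough evidence" branch: flagging learners on sparse data is one of the equity risks the surrounding text warns about.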

Effective adoption of these cross-cutting layers requires the very institutional capacities and safeguards they presuppose. Curricula must evolve to incorporate AI training that fosters both technical competence and ethical responsibility [2]. This entails not only understanding AI tools but also recognizing their limitations, biases, and societal implications. Cross-sector collaboration among educational institutions, technology developers, and researchers is crucial to establishing universal digital literacy standards that equip learners and faculty to critically evaluate AI-generated content and apply such tools responsibly in clinical and educational settings [4].

Table 4. Emerging artificial intelligence (AI) applications in medical education.
Application: Description
Intelligent tutoring systems: AI-powered platforms enhancing decision-making and clinical reasoning [65]
AI-assisted learner assessment: Automated grading using NLPa and semantic analysis for case summaries [66]
Chatbots (eg, ChatGPT): Teaching clinical management, USMLEb prep, and communication skills [67,68]
Personalized learning platforms: Tailored learning paths and feedback for individualized student development [69]
Surgical simulations (VRc/AI): Virtual reality tools for surgical skill training and evaluation [70]
Enhanced anatomy education: AI-assisted tools for deeper learning and retention in anatomy [71]
AI in admissions: Tools supporting application reviews and personal statement development [72]
AI-generated art: Improving patient education through visual storytelling [73]
Intraoperative video analysis: Teaching competency-based assessments via machine learning [74]
Multimodal adaptive analytics: Fuses text, audio-video, and interaction data to personalize pacing and flag early risk [61]
Learning analytics dashboards: Aggregates competencies and assessment traces for just-in-time feedback and monitoring [62]
AI-supported CDSd for education: Guideline-constrained sandbox CDS to teach evidence retrieval, uncertainty, and human-AI teaming [63]

aNLP: natural language processing.

bUSMLE: United States Medical Licensing Examination.

cVR: virtual reality.

dCDS: clinical decision support.

AI-powered platforms are expanding personalized learning by tailoring educational content to individual needs. Technologies such as virtual patients, augmented reality simulations, and mobile platforms offer dynamic, interactive experiences that increase engagement and broaden access to high-quality education, particularly in resource-limited regions. Nonetheless, the risk of depersonalizing medical practice—together with substantial concerns about data privacy—underscores the need for rigorous ethical and regulatory frameworks that engage all stakeholders in the educational process [75].

Given AI’s expanding role in health care, prohibiting its use in education is neither practical nor beneficial. Institutions should instead establish comprehensive guidelines to ensure that AI-generated content remains reliable, relevant, and ethically sound. A structured, responsible approach—grounded in robust educational infrastructure, interdisciplinary collaboration, and consistent regulatory oversight—has the potential to improve learning outcomes and, ultimately, strengthen patient care.

As a synthesis of the manuscript’s arguments, we recommend a concise, actionable set of practices: (1) report every AI pilot with a FACETS-aligned template to ensure transparency and comparability; (2) maintain human oversight for high-impact uses (eg, dual scoring with override), reserving automation-first approaches for low-risk contexts with active monitoring; (3) conduct predeployment audits for bias, drift, and adversarial robustness and, where appropriate, cross-site validation before scaling; (4) operate privacy-preserving workflows (no PHI in public LLMs, institutional sandboxes, data minimization or deidentification, role-based access, secure logging and retention); (5) deploy learning analytics dashboards tied to competency frameworks with ongoing psychometric monitoring; and (6) evaluate equity and feasibility using implementation science designs, including in resource-limited settings with disparate impact monitoring. This roadmap is summarized in Figure 2.
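Recommendation (1), FACETS-aligned reporting, lends itself to a simple structured template. The sketch below encodes the six FACETS dimensions as record fields; the field comments, example values, and validation rule are illustrative assumptions about how an institution might operationalize such a template, not a prescribed schema:

```python
from dataclasses import dataclass, asdict

# The four SAMR levels named in the FACETS framework.
SAMR_LEVELS = {"substitution", "augmentation", "modification", "redefinition"}

@dataclass
class FacetsReport:
    """One record per AI pilot; fields follow the six FACETS dimensions."""
    form: str                # eg, chatbot, ITS, VR simulator
    application: str         # eg, OSCE feedback, USMLE preparation
    context: str             # eg, preclinical, clerkship, residency
    instructional_mode: str  # eg, self-study, small group, simulation
    technology: str          # eg, LLM, NLP scoring, computer vision
    samr_level: str          # one of SAMR_LEVELS

    def __post_init__(self):
        if self.samr_level not in SAMR_LEVELS:
            raise ValueError(f"unknown SAMR level: {self.samr_level}")

# Hypothetical pilot record, eg for a history-taking chatbot.
report = FacetsReport(
    form="chatbot",
    application="history-taking practice with automated feedback",
    context="preclinical",
    instructional_mode="self-study",
    technology="LLM",
    samr_level="augmentation",
)
assert asdict(report)["samr_level"] == "augmentation"
```

Exporting such records in a common format across institutions is what would make pilots comparable, which is the point of recommendation (1).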

Figure 2. Schematic overview of AI applications in medical education. This figure synthesizes a practical pathway for AI uses in medical education. From left to right, it maps current applications—interactive learning, assessment systems, administration or logistics, and content or writing—to the benefits they enable; the risks and ethical considerations they entail; and the curriculum and teaching strategies required for responsible use. The rightmost column summarizes solutions and future directions: bias audits and data oversight; robust psychometrics and fairness checks; multisite robustness testing; implementation science (including LMIC feasibility and equity); comparative studies of HITL versus automation-first approaches; and reporting standards aligned with FACETS to support transparency and comparability. The bottom banner highlights cross-cutting safeguards—human-in-the-loop processes, bias monitoring, external validation, and data governance—that should accompany all deployments. AI: artificial intelligence; AR: augmented reality; FACETS: form, application, context, instructional mode, technology, and SAMR (substitution-augmentation-modification-redefinition); HITL: human-in-the-loop; ITS: intelligent tutoring system; LLM: large language model; LMIC: low- and middle-income country; MCQ: multiple-choice question; NLP: natural language processing; OSCE: Objective Structured Clinical Examination; PHI: protected health information; VR: virtual reality.

AI is redefining the landscape of medical education by offering innovative tools that enhance learning, assessment, and administrative processes. AI-driven platforms, such as ITSs, virtual simulations, and generative models, provide personalized and interactive educational experiences, improving knowledge acquisition and clinical skills while streamlining administrative tasks to increase efficiency and reduce educator workload. These advancements hold immense potential to revolutionize medical education further. By fostering interdisciplinary collaboration, investing in robust infrastructure, and prioritizing ethical considerations, stakeholders can harness AI's capabilities to enhance learning outcomes and ultimately improve patient care.

At the same time, integrating AI into medical education presents several challenges. Ethical concerns such as data privacy, algorithmic bias, and the potential erosion of humanistic care must be carefully addressed. Technological disparities, particularly in low-resource settings, also hinder equitable AI adoption. To meet these challenges, medical curricula must evolve to include AI training that promotes both technical competence and ethical awareness, encompassing knowledge of AI tools, their limitations, and their societal implications. Cross-sector collaboration among educational institutions, technology developers, and researchers is essential to establishing universal digital literacy standards. These standards should equip students and educators to critically assess AI-generated content and use these tools responsibly in both educational and clinical settings.

Looking ahead, priorities include longitudinal studies linking AI-enabled instruction to validated learning outcomes and real-world performance; rigorous psychometric validation of AI-assisted assessments with routine bias audits; multisite robustness testing—addressing dataset shift and adversarial risk—especially for vision and simulation; implementation science work in diverse, including low-resource, settings on feasibility, cost-effectiveness, and equity; comparative evaluations of human-in-the-loop versus automation-first designs and their impact on critical thinking and professional identity; and FACETS-aligned reporting standards to strengthen transparency, reproducibility, and cross-study comparability. With ethically grounded design and cross-sector collaboration, AI can enhance learning while preserving the humanistic foundations of medical practice.

Data Availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during this study.

Conflicts of Interest

None declared.

  1. Mir MM, Mir GM, Raina NT, et al. Application of artificial intelligence in medical education: current scenario and future perspectives. J Adv Med Educ Prof. Jul 2023;11(3):133-140. [CrossRef] [Medline]
  2. Gordon M, Daniel M, Ajiboye A, et al. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med Teach. Apr 2024;46(4):446-470. [CrossRef] [Medline]
  3. Rahman MA, Victoros E, Ernest J, Davis R, Shanjana Y, Islam MR. Impact of artificial intelligence (AI) technology in healthcare sector: a critical evaluation of both sides of the coin. Clin Pathol. 2024;17:2632010X241226887. [CrossRef] [Medline]
  4. Haleem A, Javaid M, Qadri MA, Suman R. Understanding the role of digital technologies in education: a review. Sustain Oper Comput. 2022;3:275-285. [CrossRef]
  5. Pineda-de-Alcázar MY. Inteligencia Artificial y Modelos de Comunicación [Article in Spanish]. Razón y Palabra. Dec 14, 2017;21(4_99):332-346. URL: https://revistarazonypalabra.org/index.php/ryp/article/view/1033 [Accessed 2026-01-30]
  6. Hale J, Alexander S, Wright ST, Gilliland K. Generative AI in undergraduate medical education: a rapid review. J Med Educ Curric Dev. Jan 2024;11:23821205241266697. [CrossRef]
  7. Riddle EW, Kewalramani D, Narayan M, Jones DB. Surgical simulation: virtual reality to artificial intelligence. Curr Probl Surg. Nov 2024;61(11):101625. [CrossRef] [Medline]
  8. Shahrezaei A, Sohani M, Taherkhani S, Zarghami SY. The impact of surgical simulation and training technologies on general surgery education. BMC Med Educ. Nov 13, 2024;24(1):1297. [CrossRef] [Medline]
  9. Booth GJ, Ross B, Cronin WA, et al. Competency-based assessments: leveraging artificial intelligence to predict subcompetency content. Acad Med. Apr 1, 2023;98(4):497-504. [CrossRef] [Medline]
  10. King AJ, Kahn JM, Brant EB, Cooper GF, Mowery DL. Initial development of an automated platform for assessing trainee performance on case presentations. ATS Sch. Dec 2022;3(4):548-560. [CrossRef] [Medline]
  11. Hamdy H, Sreedharan J, Rotgans JI, et al. Virtual clinical encounter examination (VICEE): a novel approach for assessing medical students' non-psychomotor clinical competency. Med Teach. Oct 2021;43(10):1203-1209. [CrossRef] [Medline]
  12. Pereira DSM, Falcão F, Nunes A, Santos N, Costa P, Pêgo JM. Designing and building OSCEBot ® for virtual OSCE - performance evaluation. Med Educ Online. Dec 2023;28(1):2228550. [CrossRef] [Medline]
  13. Merritt C, Glisson M, Dewan M, Klein M, Zackoff M. Implementation and evaluation of an artificial intelligence driven simulation to improve resident communication with primary care providers. Acad Pediatr. Apr 2022;22(3):503-505. [CrossRef] [Medline]
  14. Agarwal M, Sharma P, Goswami A. Analysing the applicability of ChatGPT, Bard, and Bing to generate reasoning-based multiple-choice questions in medical physiology. Cureus. Jun 2023;15(6):e40977. [CrossRef] [Medline]
  15. Ayub I, Hamann D, Hamann CR, Davis MJ. Exploring the potential and limitations of chat generative pre-trained transformer (ChatGPT) in generating board-style dermatology questions: a qualitative analysis. Cureus. Aug 2023;15(8):e43717. [CrossRef] [Medline]
  16. Bond WF, Zhou J, Bhat S, et al. Automated patient note grading: examining scoring reliability and feasibility. Acad Med. Nov 1, 2023;98(11S):S90-S97. [CrossRef] [Medline]
  17. Cianciolo AT, LaVoie N, Parker J. Machine scoring of medical students’ written clinical reasoning: initial validity evidence. Acad Med. Jul 1, 2021;96(7):1026-1035. [CrossRef] [Medline]
  18. Wang M, Sun Z, Jia M, et al. Intelligent virtual case learning system based on real medical records and natural language processing. BMC Med Inform Decis Mak. Mar 4, 2022;22(1):60. [CrossRef] [Medline]
  19. Woo CW, Evens MW, Freedman R, et al. An intelligent tutoring system that generates a natural language dialogue using dynamic multi-level planning. Artif Intell Med. Sep 2006;38(1):25-46. [CrossRef] [Medline]
  20. Gupta N, Khatri K, Malik Y, et al. Exploring prospects, hurdles, and road ahead for generative artificial intelligence in orthopedic education and training. BMC Med Educ. Dec 28, 2024;24(1):1544. [CrossRef] [Medline]
  21. Grévisse C. LLM-based automatic short answer grading in undergraduate medical education. BMC Med Educ. Sep 27, 2024;24(1):1060. [CrossRef] [Medline]
  22. Khalifa M, Albadawy M. Using artificial intelligence in academic writing and research: an essential productivity tool. Computer Methods and Programs in Biomedicine Update. 2024;5:100145. [CrossRef]
  23. Dwivedi YK, Kshetri N, Hughes L, et al. Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Inf Manage. Aug 2023;71:102642. [CrossRef]
  24. Narayanan S, Ramakrishnan R, Durairaj E, Das A. Artificial intelligence revolutionizing the field of medical education. Cureus. Nov 2023;15(11):e49604. [CrossRef] [Medline]
  25. Spatscheck N, Schaschek M, Winkelmann A. The effects of generative AI’s human-like competencies on clinical decision-making. J Decis Syst. 2024:1-39. [CrossRef]
  26. Bansal M, Jindal A. Artificial intelligence in healthcare: should it be included in the medical curriculum? A students’ perspective. Natl Med J India. 2022;35(1):56-58. [CrossRef] [Medline]
  27. Kamalov F, Santandreu Calonge D, Gurrib I. New era of artificial intelligence in education: towards a sustainable multifaceted revolution. Sustainability. Jan 2023;15(16):12451. [CrossRef]
  28. Arévalo JA, Cordero MQ. ChatGPT: la creación automática de contenidos con Inteligencia Artificial y su impacto en la comunicación académica y educativa [Article in Spanish]. Desiderata. 2023;(22):136-142. URL: https://dialnet.unirioja.es/servlet/articulo?codigo=8965142 [Accessed 2026-02-06]
  29. Alamer A. Medical students’ perspectives on artificial intelligence in radiology: the current understanding and impact on radiology as a future specialty choice. Curr Med Imaging Rev. Jul 2023;19(8):921-930. [CrossRef]
  30. Civaner MM, Uncu Y, Bulut F, Chalil EG, Tatli A. Artificial intelligence in medical education: a cross-sectional needs assessment. BMC Med Educ. Nov 9, 2022;22(1):772. [CrossRef] [Medline]
  31. van de Ridder JMM, Shoja MM, Rajput V. Finding the place of ChatGPT in medical education. Acad Med. Aug 1, 2023;98(8):867. [CrossRef] [Medline]
  32. Izquierdo-Condoy JS, Arias-Intriago M, Tello-De-la-Torre A, Busch F, Ortiz-Prado E. Generative artificial intelligence in medical education: enhancing critical thinking or undermining cognitive autonomy? J Med Internet Res. Nov 3, 2025;27(1):e76340. [CrossRef] [Medline]
  33. Ejaz H, McGrath H, Wong BL, Guise A, Vercauteren T, Shapey J. Artificial intelligence and medical education: a global mixed-methods study of medical students’ perspectives. Digit Health. 2022;8:20552076221089099. [CrossRef] [Medline]
  34. Zhong JY, Fischer NL. Commentary: the desire of medical students to integrate artificial intelligence into medical education: an opinion article. Front Digit Health. 2023;5:1151390. [CrossRef] [Medline]
  35. Bhattacharyya M, Miller VM, Bhattacharyya D, Miller LE. High rates of fabricated and inaccurate references in ChatGPT-generated medical content. Cureus. May 2023;15(5):e39238. [CrossRef] [Medline]
  36. Walters WH, Wilder EI. Fabrication and errors in the bibliographic citations generated by ChatGPT. Sci Rep. Sep 7, 2023;13(1):14045. [CrossRef] [Medline]
  37. Izquierdo-Condoy JS, Vásconez-González J, Ortiz-Prado E. “AI et al.” The perils of overreliance on artificial intelligence by authors in scientific research. Clinical eHealth. Dec 2024;7:133-135. [CrossRef]
  38. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. Nov 2018;15(11):e1002683. [CrossRef] [Medline]
  39. D’Amour A, Heller K, Moldovan D, et al. Underspecification presents challenges for credibility in modern machine learning. J Mach Learn Res. 2022;23(226):1-61. URL: https://www.jmlr.org/papers/v23/20-1335.html [Accessed 2026-01-30]
  40. Hayes J, Swanberg M, Chaudhari H, et al. Measuring memorization in language models via probabilistic extraction. In: Chiruzzo L, Ritter A, Wang L, editors. 2025. Presented at: Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers); Apr 29 to May 4, 2025:9266-9291; Albuquerque, New Mexico. URL: https://aclanthology.org/volumes/2025.naacl-long/ [CrossRef]
  41. Carlini N, Tramer F, Wallace E, et al. Extracting training data from large language models. 2021. Presented at: Proceedings of the 30th USENIX Security Symposium(USENIX Security 21); Aug 11-13, 2021. URL: https://www.usenix.org/system/files/sec21-carlini-extracting.pdf [Accessed 2026-01-30]
  42. Recommendation on the ethics of artificial intelligence. United Nations Educational, Scientific and Cultural Organization; 2022. URL: https://digitallibrary.un.org/record/4062376?v=pdf [Accessed 2026-01-30]
  43. Ethics and governance of artificial intelligence for health. World Health Organization; 2021. URL: https://iris.who.int/server/api/core/bitstreams/f780d926-4ae3-42ce-a6d6-e898a5562621/content [Accessed 2026-02-06]
  44. Nagy M, Radakovich N, Nazha A. Why machine learning should be taught in medical schools. Med Sci Educ. Apr 2022;32(2):529-532. [CrossRef] [Medline]
  45. Ngo B, Nguyen D, vanSonnenberg E. The cases for and against artificial intelligence in the medical school curriculum. Radiol Artif Intell. Sep 2022;4(5):e220074. [CrossRef] [Medline]
  46. Chai SY, Hayat A, Flaherty GT. Integrating artificial intelligence into haematology training and practice: opportunities, threats and proposed solutions. Br J Haematol. Sep 2022;198(5):807-811. [CrossRef] [Medline]
  47. Ward TM, Mascagni P, Madani A, Padoy N, Perretta S, Hashimoto DA. Surgical data science and artificial intelligence for surgical education. J Surg Oncol. Aug 2021;124(2):221-230. [CrossRef] [Medline]
  48. Valikodath NG, Cole E, Ting DSW, et al. Impact of artificial intelligence on medical education in ophthalmology. Transl Vis Sci Technol. Jun 1, 2021;10(7):14. [CrossRef] [Medline]
  49. Tejani AS, Elhalawani H, Moy L, Kohli M, Kahn Jr CE. Artificial intelligence and radiology education. Radiol Artif Intell. Jan 2023;5(1):e220084. [CrossRef] [Medline]
  50. Busch F, Adams LC, Bressem KK. Biomedical ethical aspects towards the implementation of artificial intelligence in medical education. Med Sci Educ. Aug 2023;33(4):1007-1012. [CrossRef] [Medline]
  51. Cussat-Blanc S, Castets-Renard C, Monsarrat P. Doctors in medical data sciences: a new curriculum. Int J Environ Res Public Health. Dec 30, 2022;20(1):675. [CrossRef] [Medline]
  52. Zhou Y, Jindal-Snape D, Topping K, Todman J. Theoretical models of culture shock and adaptation in international students in higher education. Stud High Educ. Feb 2008;33(1):63-75. [CrossRef]
  53. Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: a narrative review. Heliyon. Feb 29, 2024;10(4):e26297. [CrossRef] [Medline]
  54. Abdelwanis M, Alarafati HK, Tammam MMS, Simsekler MCE. Exploring the risks of automation bias in healthcare artificial intelligence applications: a Bowtie analysis. Journal of Safety Science and Resilience. Dec 2024;5(4):460-469. [CrossRef]
  55. Holderried F, Stegemann-Philipps C, Herrmann-Werner A, et al. A language model-powered simulated patient with automated feedback for history taking: prospective study. JMIR Med Educ. Aug 16, 2024;10:e59213. [CrossRef] [Medline]
  56. Luordo D, Torres Arrese M, Tristán Calvo C, et al. Application of artificial intelligence as an aid for the correction of the objective structured clinical examination (OSCE). Appl Sci. Jan 2025;15(3):1153. [CrossRef]
  57. Izquierdo-Condoy JS, Arias-Intriago M, Nati-Castillo HA, et al. Exploring smartphone use and its applicability in academic training of medical students in Latin America: a multicenter cross-sectional study. BMC Med Educ. Nov 30, 2024;24(1):1401. [CrossRef] [Medline]
  58. Ali H, ul Mustafa A, Aysan AF. Global adoption of generative AI: what matters most? J Econ Technol. Nov 2025;3:166-176. [CrossRef]
  59. Sami A, Tanveer F, Sajwani K, et al. Medical students’ attitudes toward AI in education: perception, effectiveness, and its credibility. BMC Med Educ. Jan 17, 2025;25(1):82. [CrossRef] [Medline]
  60. Busch F, Hoffmann L, Truhn D, et al. Global cross-sectional student survey on AI in medical, dental, and veterinary education and practice at 192 faculties. BMC Med Educ. Sep 28, 2024;24(1):1066. [CrossRef] [Medline]
  61. Guerrero-Sosa JDT, Romero FP, Menéndez-Domínguez VH, Serrano-Guerrero J, Montoro-Montarroso A, Olivas JA. A comprehensive review of multimodal analysis in education. Appl Sci (Basel). Jan 2025;15(11):5896. [CrossRef]
  62. Masiello I, Mohseni Z, Palma F, Nordmark S, Augustsson H, Rundquist R. A current overview of the use of learning analytics dashboards. Educ Sci. Jan 2024;14(1):82. [CrossRef]
  63. Elhaddad M, Hamam S. AI-driven clinical decision support systems: an ongoing pursuit of potential. Cureus. Apr 2024;16(4):e57728. [CrossRef] [Medline]
  64. Soares A, Afshar M, Moesel C, et al. Playing in the clinical decision support sandbox: tools and training for all. JAMIA Open. Jul 2023;6(2):ooad038. [CrossRef] [Medline]
  65. Lamti S, El Malhi M, Sekhsoukh R, Kerzazi N. Intelligent tutoring system for medical students: opportunities and challenges. 2024. Presented at: International Conference on Smart Medical, IoT & Artificial Intelligence (ICSMAI 2024); Apr 18-20, 2024. [CrossRef] [Medline]
  66. Salt J, Harik P, Barone MA. Leveraging natural language processing: toward computer-assisted scoring of patient notes in the USMLE step 2 clinical skills exam. Acad Med. Mar 2019;94(3):314-316. [CrossRef] [Medline]
  67. Karabacak M, Ozkara BB, Margetis K, Wintermark M, Bisdas S. The advent of generative language models in medical education. JMIR Med Educ. Jun 6, 2023;9:e48163. [CrossRef] [Medline]
  68. Koga S. The potential of ChatGPT in medical education: focusing on USMLE preparation. Ann Biomed Eng. Oct 2023;51(10):2123-2124. [CrossRef] [Medline]
  69. Abd-Alrazaq A, AlSaad R, Alhuwail D, et al. Large language models in medical education: opportunities, challenges, and future directions. JMIR Med Educ. Jun 1, 2023;9:e48291. [CrossRef] [Medline]
  70. Rogers MP, DeSantis AJ, Janjua H, Barry TM, Kuo PC. The future surgical training paradigm: virtual reality and machine learning in surgical education. Surgery. May 2021;169(5):1250-1252. [CrossRef] [Medline]
  71. Abdellatif H, Al Mushaiqri M, Albalushi H, Al-Zaabi AA, Roychoudhury S, Das S. Teaching, learning and assessing anatomy with artificial intelligence: the road to a better future. Int J Environ Res Public Health. Oct 31, 2022;19(21):14209. [CrossRef] [Medline]
  72. Hashimoto DA, Johnson KB. The use of artificial intelligence tools to prepare medical school applications. Acad Med. Sep 1, 2023;98(9):978-982. [CrossRef] [Medline]
  73. Huston JC, Kaminski N. A picture worth a thousand words, created with one sentence: using artificial intelligence-created art to enhance medical education. ATS Sch. Jun 2023;4(2):145-151. [CrossRef] [Medline]
  74. Kawka M, Gall TMH, Fang C, Liu R, Jiao LR. Intraoperative video analysis and machine learning models will change the future of surgical training. Intell Surg. Jan 2022;1:13-15. [CrossRef]
  75. Maleki Varnosfaderani S, Forouzanfar M. The role of AI in hospitals and clinics: transforming healthcare in the 21st century. Bioengineering (Basel). Mar 29, 2024;11(4):337. [CrossRef] [Medline]


AI: artificial intelligence
FACETS: form, application, context, instructional mode, technology, and SAMR
ITS: intelligent tutoring system
LLM: large language model
ML: machine learning
NLP: natural language processing
OSCE: Objective Structured Clinical Examination
PHI: protected health information
SAMR: substitution, augmentation, modification, redefinition


Edited by A Hasan Sapci, Alicia Stone, Tiffany Leung; submitted 07.May.2025; peer-reviewed by Sadhasivam Mohanadas, Shamnad Mohamed Shaffi; accepted 14.Dec.2025; published 17.Feb.2026.

Copyright

© Juan S Izquierdo-Condoy, Marlon Arias-Intriago, Laura Montero Corrales, Esteban Ortiz-Prado. Originally published in JMIR Medical Education (https://mededu.jmir.org), 17.Feb.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.