Recent Articles


Artificial intelligence continues to transform health care, offering promising applications in clinical practice and medical education. While large language models (LLMs), as a form of generative artificial intelligence, have shown potential to match or surpass medical students in licensing examinations, their performance varies across languages. Recent studies highlight the complex influence and interdependency of factors such as language and model type on LLMs’ accuracy; yet, cross-language comparisons remain underexplored.

Interprofessional education (IPE) is a key strategy for enhancing collaboration and patient safety. While evidence for student populations is abundant, studies focusing on licensed physical therapists (PTs), occupational therapists (OTs), and speech-language pathologists (SLPs) remain limited. In contemporary rehabilitation practice, continuous IPE is increasingly important to address professional burnout and the growing complexity of patient needs.

As artificial intelligence (AI) develops, the medical education community has begun defining the relevant forms of competency. Many experts emphasize the importance of optimizing AI tools' output or understanding the technical and normative considerations around using AI tools. A recent publication in this journal showed that optimizing instructions for large language models may yield diminishing returns as such tools improve. This suggests the need for a new competency: one that focuses on choosing the appropriate AI tools. I briefly summarize existing competency domains and examples to contextualize the state of AI competency development, highlighting the need for further synthesis. I then introduce a hierarchical framework of competencies that may assist with priority-setting in subsequent competency development work. It consists of cognitive, operational, and meta-AI domains, which correspond, respectively, to the knowledge required for understanding, using, and choosing AI tools. The final section describes potential challenges associated with developing AI competency. These include traditional concerns in competency-based medical education: deciding whether and which competencies meaningfully measure the targets of interest; adjusting the relevant measurements to reflect the necessary temporal and institutional context; and establishing the organizational support needed to encourage measurement of competency. This section also discusses the challenge of developing performance indicators for AI tools across different clinical contexts. Such indicators will be necessary for guiding the choice of AI tools in a given clinical context, but medical educators may lack the skills to develop them. Beyond identifying potential sources for such indicators, the medical education community may shape physicians' norms of practice to push the AI industry toward producing them. The potential for physicians to incur greater medical liability from a poor choice of AI may lead them to demand more nuanced performance indicators from AI suppliers; physicians are also well positioned to do so, since a competitive AI market may give them greater bargaining power.


With the rapid development of artificial intelligence technology, artificial intelligence–generated content (AIGC) is being applied increasingly widely in medical education. Large language models, such as ChatGPT, are a prominent type of AIGC technology. Critical thinking is a core ability in medical education, but the impact of AIGC technology on medical students' critical thinking remains unclear. Medical students are at a crucial stage in cultivating critical thinking, and the intervention of AIGC technology may profoundly affect this process.

Medical history-taking is a core clinical skill; yet, traditional teaching methods face challenges. We developed an artificial intelligence–powered medical history-taking training and evaluation system (AMTES) and established its technical feasibility as an extracurricular resource. Evidence on whether such tools improve learning outcomes when voluntarily embedded in routine curricula remains limited.

Simulation has become an essential pedagogical tool in health professions education, traditionally valued for its ability to approximate clinical practice. Higher simulation fidelity is often assumed to directly enhance learner engagement and improve educational outcomes; however, this assumption oversimplifies a complex relationship. Fidelity is multidimensional, encompassing physical, emotional, and contextual dimensions, as well as qualitative and quantitative considerations, each influencing learners’ perception of realism in distinct ways. Engagement is shaped not only by these dimensions of fidelity but also by intrinsic factors such as motivation, prior experience, stress, and emotional resilience, and by extrinsic factors including instructional design, facilitation, debriefing, and psychological safety. A central mediator in this process is the fiction contract, an implicit agreement that enables learners to suspend disbelief and engage authentically despite inherent limitations in realism. Technological sophistication alone does not necessarily translate into greater educational impact. Rather, fidelity should be intentionally aligned with learning objectives: advanced patient simulators may support procedural training, standardized patients may enhance communication skills, and task trainers may optimize focused psychomotor practice. This viewpoint advocates for a goal-oriented, multimodal approach that redefines high-fidelity simulation not as the pursuit of maximum realism, but as the deliberate alignment of fidelity with pedagogy to optimize learner engagement and educational effectiveness.

As an emerging delivery mode of education, online continuing medical education (CME) increases the accessibility of high-quality medical training for professionals and students in China. Guoyuan (meaning “nationwide” in Chinese) is an online CME platform delivered via a mobile app and operated by the National Telemedicine Center of China since 2018, serving as an illustrative case of mobile online CME implementation.

Radial artery puncture is a common clinical procedure essential for assessing gas exchange, but it is frequently perceived as stressful by inexperienced operators, who fear causing pain to their patients. Despite its practical relevance, formal training in this procedure is inconsistently integrated into medical curricula. This study evaluated whether a structured training program combining theoretical instruction, simulation-based practice, and debriefing could influence students' procedural confidence, decision-making, and patient experience during their first clinical arterial puncture.