Published on 31.Mar.2026 in Vol 12 (2026)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/95205.
Authors’ Reply: Why Medical Education Without Artificial Intelligence Still Matters: A Neuroscience-Informed Perspective


1One Health Research Group, Universidad de Las Américas, Via Nayon S/N, Quito, Pichincha, Ecuador

2Escuela de Comunicación, Latin University of Costa Rica, San José, Costa Rica

Corresponding Author:

Esteban Ortiz-Prado, MSc, MPH, MD, PhD



We thank the author for the pertinent commentary, “Why Medical Education Without Artificial Intelligence Still Matters: A Neuroscience-Informed Perspective” [1]. We believe this letter makes a valuable contribution to the debate on artificial intelligence (AI) in medical education by emphasizing the cognitive consequences of training clinicians in environments where these tools are available yet not always reliable or accessible in everyday practice.

From our perspective, the author’s argument expands on a concern we raised previously: if AI, particularly generative AI, can compromise critical thinking and cognitive autonomy, this erosion may also translate into a reduced capacity to sustain clinical reasoning when the tool fails or is unavailable [2]. In our article, “Artificial Intelligence in Medical Education: Transformative Potential, Current Applications, and Future Implications” [3], we identified uncritical dependence on algorithmic outputs as one of the main limitations of incorporating AI into medical training.

The neurocognitive evidence cited by the author reinforces this argument. Recent studies suggest that the use of generative AI may alter how individuals engage in complex tasks. Although this evidence remains preliminary, it is consistent with what has already been described regarding automation bias and cognitive off-loading. Likewise, accepting automated suggestions without sufficient analysis may increase the risk of error when the system fails or is unavailable [4,5]. Taken together, these findings support a central idea: AI should support reasoning, not replace expert judgment.

In our viewpoint, we addressed these concerns through concrete proposals. Importantly, our discussion was not limited to generative AI but also encompassed other educationally relevant technologies, including natural language processing, intelligent tutoring systems, virtual reality, and augmented reality. In this context, we advocated for AI literacy curricula that would enable students not only to use these tools but also to critically evaluate them, recognize their limitations, identify biases, and assess their outputs against sound clinical reasoning. We also supported governance structures aligned with international ethical frameworks, such as the UNESCO (United Nations Educational, Scientific and Cultural Organization) Recommendation on the Ethics of Artificial Intelligence and the World Health Organization guidance on the ethics and governance of AI in health [3]. We consider human oversight, transparency, bias monitoring, and accountability to be essential safeguards for the responsible integration of AI.

We also agree with the author’s suggestion to include structured exercises in which students must function without AI support, as this may strengthen independent diagnostic reasoning and decision-making under uncertainty. At the same time, this discussion must acknowledge persistent technological inequalities, particularly in low-income countries, where access to advanced AI tools remains uneven [6].

We agree that AI integration in medical education should not displace human intelligence but rather enhance professional competence without undermining clinical autonomy or critical thinking. Preparing future physicians to function appropriately with or without AI will be essential to protect patient safety, respond across diverse contexts, and preserve the human dimension of medicine. We again thank the author for this valuable contribution and concur on the need for further empirical evidence, particularly longitudinal studies, to guide a prudent, ethical, and evidence-based integration of AI into medical education.

Acknowledgments

Language editing assistance was used during the preparation of this manuscript. A generative artificial intelligence (AI) tool (ChatGPT, OpenAI, GPT-5.4 Thinking) was consulted solely to improve grammar, clarity, and sentence structure, as English is not the authors’ first language. The tool was used exclusively for linguistic refinement and formatting support. All scientific concepts, interpretations, arguments, and conclusions presented in this manuscript were independently developed by the authors. The authors take full responsibility for the intellectual content, accuracy, and integrity of the work.

Funding

The authors declared that no financial support was received for this work.

Conflicts of Interest

None declared.

  1. Verdonk C. Why medical education without artificial intelligence still matters: a neuroscience-informed perspective. JMIR Med Educ. 2026;12:e94594. [CrossRef]
  2. Izquierdo-Condoy JS, Arias-Intriago M, Tello-De-la-Torre A, Busch F, Ortiz-Prado E. Generative artificial intelligence in medical education: enhancing critical thinking or undermining cognitive autonomy? J Med Internet Res. Nov 3, 2025;27:e76340. [CrossRef] [Medline]
  3. Izquierdo-Condoy JS, Arias-Intriago M, Montero Corrales L, Ortiz-Prado E. Artificial intelligence in medical education: transformative potential, current applications, and future implications. JMIR Med Educ. Feb 17, 2026;12:e77127. [CrossRef] [Medline]
  4. Abdelwanis M, Alarafati HK, Tammam MMS, Simsekler MCE. Exploring the risks of automation bias in healthcare artificial intelligence applications: a Bowtie analysis. J Safety Sci Resilience. Dec 2024;5(4):460-469. [CrossRef]
  5. Kosmyna N, Hauptmann E, Yuan YT, et al. Your Brain on ChatGPT: accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. Jun 10, 2025. [CrossRef]
  6. Izquierdo-Condoy JS, Arias-Intriago M, Nati-Castillo HA, et al. Exploring smartphone use and its applicability in academic training of medical students in Latin America: a multicenter cross-sectional study. BMC Med Educ. Nov 30, 2024;24(1):1401. [CrossRef] [Medline]


Abbreviations

AI: artificial intelligence
UNESCO: United Nations Educational, Scientific and Cultural Organization


Edited by Tiffany Leung. This is a non–peer-reviewed article; submitted 12.Mar.2026; accepted 14.Mar.2026; published 31.Mar.2026.

Copyright

© Juan S Izquierdo-Condoy, Marlon Arias-Intriago, Laura Montero Corrales, Esteban Ortiz-Prado. Originally published in JMIR Medical Education (https://mededu.jmir.org), 31.Mar.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.