Published on 4.11.2024 in Vol 10 (2024)

This is a member publication of University of Toronto

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/51446.
The Potential of Artificial Intelligence Tools for Reducing Uncertainty in Medicine and Directions for Medical Education

1Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada

2Department of Computer Science, Temerty Centre for AI Research and Education in Medicine, University of Toronto, 27 King's College Circle, Toronto, ON, Canada

3Intermedia.net Inc., Sunnyvale, CA, United States

4Department of Surgery, University of Toronto, Toronto, ON, Canada

5Keenan Chair in Surgery, Division of Neurosurgery, St. Michael's Hospital, Toronto, ON, Canada

6Dalla Lana School of Public Health, Department of Family and Community Medicine, University of Toronto, Toronto, ON, Canada

Corresponding Author:

Soaad Qahhār Hossain, BSc


In the field of medicine, uncertainty is inherent. Physicians are asked to make decisions on a daily basis without complete certainty, whether it is in understanding the patient’s problem, performing the physical examination, interpreting the findings of diagnostic tests, or proposing a management plan. The sources of this uncertainty are widespread, including the lack of knowledge about the patient, individual physician limitations, and the limited predictive power of objective diagnostic tools. This uncertainty poses significant problems in providing competent patient care. Research and teaching are long-standing attempts to reduce uncertainty that have now become inherent to medicine. Despite this, uncertainty remains pervasive. Artificial intelligence (AI) tools, which are being rapidly developed and integrated into practice, may change the way we navigate uncertainty. In their strongest forms, AI tools may improve data collection on diseases and on patient beliefs, values, and preferences, thereby allowing more time for physician-patient communication. By using methods not previously considered, these tools hold the potential to reduce uncertainty in medicine, such as that arising from gaps in clinical information and from provider skill and bias. Despite this possibility, there has been considerable resistance to the implementation of AI tools in medical practice. In this viewpoint article, we examine the impact of AI on medical uncertainty and propose practical approaches to teaching the use of AI tools in medical schools and residency training programs, including AI ethics, real-world skills, and technological aptitude.

JMIR Med Educ 2024;10:e51446

doi:10.2196/51446


In clinical practice, uncertainty refers to a physician’s perceived inability to accurately explain or advise on a patient’s medical problem [1] and may arise at any stage of the patient encounter, be it assessment, investigation, diagnosis, or treatment [2]. Physicians have both a professional and an instinctive desire to be as certain as possible when diagnosing or treating patients [3]. While teaching physicians to tolerate uncertainty is important, there is also a need to overcome the problems in medicine that contribute to uncertainty in the physician’s mind. Several models of uncertainty have been proposed, but for the purposes of our discussion, the distinction between reducible and irreducible uncertainty is most relevant.

Simply put, uncertainties associated with unknowable things are irreducible, and uncertainties associated with knowable things that are currently unknown are reducible [4]. Reducible forms of uncertainty in medicine may stem from the lack of information about the effects of treatment; information overload or complexity; vagueness of terms; or differing beliefs, values, and preferences among providers [5]. Reducible uncertainty can be overcome by obtaining new knowledge. For example, an 85-year-old woman with a headache and a pulsatile temporal artery with a normal erythrocyte sedimentation rate and normal C-reactive protein (CRP) levels may be treated for giant cell arteritis (GCA) with pulse steroids in the emergency department by one provider, but might be deemed unlikely to have the same diagnosis by another. The uncertainty may arise, in this case, from any number of factors, including the lack of a consensus definition of GCA, differing tolerances of risk among providers, and the overall clinical appearance of the patient.

Irreducible forms of uncertainty in medicine, on the other hand, stem from statistical limitations (eg, random error due to natural variation), epistemic problems (such as measurement error, systematic error, model uncertainty, and uncertainty about inducing case probability from class probability), or numerical vagueness [5]. For example, again considering the diagnosis of GCA, both an elevated erythrocyte sedimentation rate and an elevated CRP level (eg, CRP level >10 mg/dL) may help to confirm the diagnosis. However, a CRP level of 12 mg/dL may not represent a true elevation in a patient who is older, obese, or a smoker (all factors that can raise the CRP level). The uncertainty in this case arises from the difficulty of applying broad diagnostic criteria to the specific patient being assessed and from natural variation in the numerical criteria used to make the diagnosis. Even with the development of more specific numerical cutoffs for CRP levels, patients with levels slightly above the threshold may not truly have an increased risk of the disease. These are foundational problems of knowledge creation and are not specific to medicine. As such, irreducible forms of uncertainty cannot be eliminated by obtaining new knowledge.

Artificial intelligence (AI) tools are being increasingly used as adjuncts to improve diagnosis, medical decision-making, and treatment. Here, the distinction between reducible and irreducible uncertainty becomes important; the forms of uncertainty that can be improved by obtaining knowledge (namely, the reducible forms) may see a benefit with the introduction of AI tools in medical practice. AI tools that have been developed can assist physicians with documenting encounters [6], diagnosing skin cancer [7], providing patient information on medical conditions [8], and teaching surgical skills to medical trainees [9].

Despite the promise AI tools may hold to overcome the sources of uncertainty in medicine, the relationship between AI and medical uncertainty has not been explored in the literature. In this viewpoint article, we consider the potential of AI tools to reduce uncertainty in medical practice, when used as adjuncts to clinical reasoning. In addition, we offer practical approaches to teaching the use of AI in medical schools and residency programs to increase the uptake of these tools in practice.


The Impact of AI Tools on Reducible Uncertainty

The potential impact of AI tools on reducible uncertainty in medical practice is vast. We will, however, focus our discussion on three sources of reducible uncertainty: (1) lack of clinical information, (2) provider competence, and (3) provider bias.

Clinical Information

A significant source of uncertainty in medical practice is the lack of availability of relevant information to make a decision. This may include the lack of studies about the disease process or the lack of information about the particular patient. AI tools are currently being employed to gather scientific data through large database management and integration with biobanks [10] and to integrate these biological and clinical variables into prediction outputs [11]. This accelerated pace of data gathering may substantially advance our understanding of disease. In addition to the scientific community’s limited understanding of disease, the lack of information about the particular patient can also create uncertainty. Consider a patient who presents for psychiatric assessment with suicidal ideation. A detailed history is required to arrive at the underlying cause of distress, but eliciting this information is time-consuming and is affected greatly by patient rapport. AI tools, including scribes [12], can help to address such time constraints by reducing time spent on documentation and administrative tasks. AI may also provide feedback on a physician’s skills in providing patient-centered care, facilitating improvement in this domain [13]. In addition, patients may be more willing to provide socially negative information to AI programs than to physicians, assisting with the collection of data used in clinical decision-making [14].

Provider Competence

Provider skill, knowledge, and experience may also lead to diagnostic uncertainty. AI tools are currently being developed as clinical decision support aids. For example, deep neural networks have been able to classify skin cancer from dermoscopic images with accuracy similar to that of board-certified dermatologists [15]. Similarly, AI tools have been developed to support the triage process in emergency departments with 27% greater accuracy than that of the average nurse [16]. These tools have tremendous potential to reduce human error and to contribute to personal learning and process improvement.
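
For readers curious about the mechanics, the sketch below illustrates the general transfer-learning recipe behind such classifiers: a convolutional network pretrained on general images whose final layer is retrained to distinguish lesion classes. It is a schematic under stated assumptions, not the published method (Esteva et al [15] used a different architecture and far more data); it uses random tensors in place of dermoscopic images and skips pretrained weights to stay self-contained.

```python
import torch
from torch import nn
from torchvision import models

# In practice, weights=models.ResNet18_Weights.DEFAULT would load ImageNet
# pretraining; weights=None keeps this sketch offline.
net = models.resnet18(weights=None)
net.fc = nn.Linear(net.fc.in_features, 2)  # new head: benign vs malignant

# Freeze the feature extractor; train only the new classification head.
for name, param in net.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)  # stand-ins for dermoscopic images
labels = torch.randint(0, 2, (8,))    # stand-ins for biopsy-proven labels

net.train()
for _ in range(3):  # a few illustrative gradient steps
    optimizer.zero_grad()
    loss = loss_fn(net(images), labels)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.3f}")
```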

Provider Bias

Differing beliefs, values, and preferences among physicians also contribute to reducible uncertainty in diagnosis and treatment. Medical decision-making is ideally a balance of the best available evidence and clinical gestalt, the latter being influenced by unconscious biases. For example, in cases where the incidence of disease is lower in a particular population, a reliance on heuristics may result in underdiagnosis. Take, for example, the diagnosis of cutaneous malignant melanoma, which has a lower incidence rate in people with darker skin color compared to non-Hispanic individuals with lighter skin color [17]. Research has shown that patients with darker skin types are more likely to present with later-stage cancers [18], resulting in higher mortality rates [19]. AI could assist with addressing such disagreements by providing recommendations irrespective of the decision-making agent’s personal perspectives, beliefs, or biases. In fact, a recent study demonstrated that ChatGPT could predict dermatoses in people with lighter and darker skin color with similar levels of accuracy [20], despite established clinical disparities in the diagnosis of skin conditions between these groups [21].


The Impact of AI Tools on Irreducible Uncertainty

Despite the potential of AI tools to affect the reducible forms of uncertainty, there are irreducible forms of uncertainty in medicine that these tools will not resolve. Here, we discuss two irreducible forms of uncertainty, (1) the application of class-to-case probability and (2) model uncertainty, and how AI tools will impact them.

Class-to-Case Probability

The distinction between class probability and case probability as a source of uncertainty was first described by the Austrian economist and philosopher Ludwig von Mises [22], who described class probability as a general understanding of risk for a particular group of people and case probability as the specific understanding of risk for an individual. For example, based on large population studies, we understand that by the age of 80 years, 14% of smokers will develop lung cancer [23]. However, for a particular 65-year-old smoker who comes into the office, whether they will develop lung cancer remains uncertain. Their risk depends on several immeasurable factors, including their comorbidities, environmental exposures, and genetic profile. Regardless, medical decision-making routinely involves an abstraction of class probability to case probability, and we reasonably accept this patient’s risk of lung cancer to be 14%.
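
To make the abstraction explicit, it can be restated in symbols (a worked sketch; the 14% figure is from the cohort study cited above [23], and the function f is a placeholder for the immeasurable factors listed):

```latex
% Class probability: estimated from a cohort; a property of the group.
P(\text{lung cancer by age 80} \mid \text{smoker}) \approx 0.14

% Case probability: a property of this one patient; it depends on factors
% we cannot fully measure, so it has no knowable numerical value.
P(\text{lung cancer} \mid \text{this smoker}) = f(\text{comorbidities}, \text{exposures}, \text{genetics}, \ldots)

% Medical decision-making abstracts the case to the class:
P(\text{lung cancer} \mid \text{this smoker}) := P(\text{lung cancer by age 80} \mid \text{smoker}) = 0.14
```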

What is the impact of AI on this problem? AI tools are capable of analyzing large datasets and identifying patterns that may improve the overall accuracy of risk estimates for groups of patients (class probabilities). These tools are also being increasingly used to advance the field of precision medicine [24]. For example, genotype-guided treatment is an area of active research in precision medicine: genomic profiling can be used to provide targeted therapy for patients with lung or breast cancer [25]. By integrating massive amounts of individual data (genetics, lifestyle, and environmental exposure), AI may be able to better predict how a specific patient might respond to treatment, improving our understanding of case probability. These tools can also learn continuously and may refine their predictions regarding disease risk and prognosis as more information becomes available. Wearable devices, for example, which collect continuous, multidimensional data during daily activities, have captured subtle changes in cognition and functional capacity long before the onset of dementia [26]. Despite these advances, decision-making in medicine will continue to rely on the abstraction of class probability to case probability, as the future outcome of a particular case can never be predicted with complete certainty. This is an epistemological source of uncertainty that AI may be able to mitigate, but never eliminate.
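
As a concrete (and entirely synthetic) sketch of this idea, the code below trains a simple risk model on a simulated cohort whose base rate is near the 14% class probability quoted above, then produces an individualized estimate for one hypothetical patient. The feature names and coefficients are illustrative assumptions, not a validated clinical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic cohort of smokers: pack-years, age, and a standardized
# polygenic risk score. None of this is real clinical data.
X = np.column_stack([
    rng.normal(30, 10, n),  # pack-years
    rng.normal(65, 8, n),   # age (years)
    rng.normal(0, 1, n),    # polygenic risk score
])

# Simulated outcomes with a base rate near 14%.
logits = -1.9 + 0.03 * (X[:, 0] - 30) + 0.04 * (X[:, 1] - 65) + 0.5 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
print(f"class probability (cohort base rate): {y.mean():.2f}")

# Individualized estimate for one hypothetical 65-year-old heavy smoker
# with a high genetic risk score.
patient = np.array([[45.0, 65.0, 1.2]])
print(f"case estimate for this patient: {model.predict_proba(patient)[0, 1]:.2f}")
```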

Model Uncertainty

Model uncertainty is another irreducible form of uncertainty in medicine. This form of uncertainty arises from the fact that models of disease are approximations of complex systems and involve simplifications of reality and assumptions. As a result, these models may not explain all presentations of a disease. For example, depression has been modeled as a consequence of serotonin depletion [27]; however, this model is imperfect and does not explain why some people who meet the criteria for depression do not respond to selective serotonin reuptake inhibitors. One advantage AI tools offer is that they are data-driven rather than assumption-driven. Deep learning techniques allow AI tools to learn from raw data rather than predefined model parameters. For example, AI tools used in the COVID-19 pandemic were able to identify unusual cases of pneumonia before public health authorities recognized the threat [28]. Unlike traditional models, which are fixed once developed, AI models can also learn and adapt over time as new data become available. This means that AI tools can develop models that change over time, at rates much faster than traditional scientific models can be revised.
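
The incremental-updating property described above can be illustrated with an online learner that refines its parameters batch by batch instead of being frozen at development time. This is a minimal sketch on synthetic data, assuming weekly batches of cases with two generic features:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def new_batch(n=200):
    """Stand-in for a new batch of cases with two generic clinical features."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)
    return X, y

# Logistic regression trained online: it keeps updating as data arrive.
model = SGDClassifier(loss="log_loss", random_state=0)

X0, y0 = new_batch()
model.partial_fit(X0, y0, classes=[0, 1])  # initial fit on the first batch

for week in range(1, 5):
    X, y = new_batch()
    model.partial_fit(X, y)  # refine parameters without retraining from scratch
    print(f"week {week}: accuracy on newest batch = {model.score(X, y):.2f}")
```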

While these tools may reduce model uncertainty, one limitation of AI tools when used to develop models of disease is the lack of explainability or algorithmic transparency. It is not always easy or possible to understand how and why an AI system arrives at its decision [29]. Currently, the tools, methods, policies, and frameworks required for explainable AI have not been well developed [11]. This lack of transparency may increase model uncertainty, due to a lack of physician trust in the model’s decision. While explainable AI is foreseeable with time and technological advances, on an epistemological level, AI cannot overcome the fundamental limitation that models are approximations of reality with inherent error. Therefore, model uncertainty will persist as a challenge to medical decision-making despite advances in AI.


Directions for Medical Education

Overview

Despite the promise AI tools hold in reducing uncertainty, there has been considerable resistance to their implementation in medical practice. Several reasons for this reluctance exist, including a lack of transparency, cost, privacy issues, reputation concerns, and legal liability [30]. Some medical professionals also perceive AI systems as threats to medical professional identity (recognition and capability) [31]. In addition, patients worry that these tools may not be able to account for their unique preferences the way a physician might [32]. These concerns are valid and will need to be addressed before AI tools reach widespread implementation. Indeed, our understanding of AI ethics and privacy issues has greatly improved in the last 5 years [33]. Despite these barriers, AI tools have already made their way into medical practice. In fact, learners are increasingly voicing an interest in training on how to use AI technologies in medical practice [34]. Consequently, a shift in perspective is required in medical education to teach learners how to practically use these tools and to understand their benefits and limitations.

Novel teaching approaches are needed to train medical learners to use AI tools practically and responsibly. For educators involved in designing medical curricula, three objectives should be considered: (1) improving students’ understanding of capabilities, limitations, and ethics of AI use; (2) increasing practical skills in the use of AI; and (3) increasing technological aptitude needed for producing AI systems.

Teaching the Capabilities, Limitations, and Ethics of AI Use

Students should receive formal education on the capabilities and limitations of AI tools. This includes the scope of technologies presently available in various fields of medicine, in addition to those that are newly emerging, including AI scribes, triage assistants, and patient-facing chatbots. Learners should also be made aware of the limitations of these technologies, including the potential for error due to assumptions of class-to-case probability and model uncertainty. Generative forms of AI, including ChatGPT, can lack context and generalizability and are consequently at risk of spreading misinformation [35]. One such phenomenon, known as “hallucination,” refers to the generation of information that appears statistically plausible but may not be accurate [36]. Students must be made aware of these limitations. In addition, students should be educated about the ethical and legal risks of AI use. These include social discrimination and racial bias in the datasets used to develop these tools, which may be further perpetuated by the tools themselves [37]. The legal implications of AI use should also be discussed: while the legalities around AI use are currently being debated, there is a possibility of medical negligence and liability for both physicians and medical institutions if undue harm to the patient is caused by these tools [38]. Students should also be informed about the privacy and security considerations of sharing personal health information with AI tools, including generative forms of AI such as ChatGPT [39,40]. Finally, teachers and institutions should find ways to develop and enhance students’ critical thinking and analytical skills as these technologies are introduced and refined for practice.

Teaching Practical Skills in AI Use

At present, there is very little teaching for medical students and residents on how to practically use AI tools, whether for personal, academic, or clinical purposes. Perhaps the first step in medical education reform is to normalize the use of AI as a teaching and learning aid. AI has been shown, for example, to improve teaching by helping explain concepts; by creating and improving scenario modeling, courses, and content; and by supplementing traditional curricula and coursework [14]. Technologies that rely on large language models can help with developing curricula and teaching plans [40], generating teaching aids [41], simplifying complex medical concepts [42], and pretesting examination questions [43]. AI can also enhance self-directed learning for medical students. ChatGPT is being used by medical students to practice clinical scenarios, access medical literature, and study for examinations [44]. AI applications are currently being developed, for example, to improve case-based learning and decision-making skills [45,46]. In addition, these applications can analyze students’ responses in real time and provide immediate feedback and insights into students’ comprehension and learning progress [47]. As large language models become increasingly integrated, skills in prompt engineering (the development and refinement of appropriate inputs for generative AI) are also needed to maximize the efficacy of these tools [48].
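
As one sketch of what instruction in prompt engineering might look like, the snippet below builds a structured, reusable prompt template for generating practice cases; the fields and wording are our illustrative assumptions, not a validated teaching prompt, and the resulting text would be submitted to a tool such as ChatGPT.

```python
from string import Template

# Reusable case-generation template; fields and phrasing are illustrative.
CASE_PROMPT = Template(
    "You are a medical tutor. Present a $difficulty clinical case of "
    "$condition for a $learner_level learner.\n"
    "Reveal findings stepwise (history, examination, investigations) and "
    "ask for the learner's differential diagnosis before each step.\n"
    "Finish with three board-style questions and note which findings "
    "support the final diagnosis."
)

prompt = CASE_PROMPT.substitute(
    difficulty="moderately difficult",
    condition="giant cell arteritis",  # ties into the example used earlier
    learner_level="third-year medical student",
)
print(prompt)
```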

In addition to openness around the use of AI as a teaching and learning aid, training is needed on how to practically use AI tools as adjuncts to traditional clinical decision-making tools. The integration of AI into medical practice will require professionals to learn how to adapt workflows and communicate effectively with these tools. For example, AI scribes used to assist with documentation in patient encounters may require providers to learn to explicitly describe physical examination findings or to edit initial documentation effectively [49]. Surgical residents should be taught early in training to use AI tools for surgical planning [50]. Additionally, learners should be trained on how to communicate the process, risks, benefits, and alternatives of AI use to patients [34].

While the use of AI in medical education is to be encouraged, safeguards will equally be necessary to address academic integrity issues [14] that arise when ChatGPT and other such technologies are used for examinations, assignments, and assessments [44]. Guidelines encompassing accountability systems, ethical considerations, privacy, and moral and integrity issues can help address these concerns [40]. In addition, educating students on how to avoid plagiarism, in conjunction with plagiarism detection and language analysis software, can promote responsible use of these tools [51].

Increasing Technological Aptitude

In addition to training medical learners on the use of existing AI technologies, a possible long-term goal for medical education is to develop learners' technological aptitude. This includes skills in coding (eg, in Python), an understanding of mechanisms of data leakage, and an overview of how AI tools are developed [34]. The combination of clinical aptitude and an understanding of the practical problems these tools need to solve makes physicians uniquely positioned to assist with the production of these technologies. At present, however, many physicians lack the technical skills to help with AI development. Down the line, medical schools will need to consider how they can train physicians who can both practice medicine and contribute to the development of these tools.


Conclusions

As AI tools become increasingly integrated into medical practice, they will offer powerful solutions to problems of uncertainty in medicine. These tools have the potential to address reducible forms of uncertainty, including the lack of available clinical information and scientific studies, limits to physician ability, and providers' personal biases in decision-making. These tools may also mitigate the irreducible forms of uncertainty to some extent, increasing our ability to make case predictions from class probabilities and to develop models of disease. Despite these capabilities, these tools will never be able to overcome foundational knowledge problems in medicine, and they pose ethical concerns that must be addressed. Nonetheless, AI tools are being used in practice, and trainees must learn the scope of these technologies, the ethical and legal challenges they pose, and how to use them practically. In the future, trainees should also be taught the technical skills needed to develop these technologies. AI has reached medicine, and the medical profession is being asked time and time again to adapt; medical education reform is crucial in this transformation.

Authors' Contributions

SRA and SQH contributed equally to the preparation of this manuscript. All authors were involved in the conception and design of the work. SRA and SQH performed the literature search, conducted the analysis, and created the initial draft. SD and RU substantively revised the work. All authors approved the final manuscript and assume accountability for all aspects of the work, ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Conflicts of Interest

SD is a member of the advisory board of the Subcortical Surgery Group. He is a member of the speakers bureau for the Congress of Neurological Surgeons and American Association of Neurological Surgeons. He receives research funding from Synaptive and VPIX. He receives royalty payments from Oxford University Press. He serves as the provincial lead for CNS Cancers, Ontario Health (Cancer Care Ontario). RU has received research funding from the Canadian Institutes of Health Research, Health Canada and Wellcome Trust. He serves on advisory boards for the World Health Organization, Doctors Without Borders, the College of Family Physicians of Canada, the Royal College of Physicians and Surgeons of Canada, and the Canadian Medical Association.

  1. Hillen MA, Gutheil CM, Strout TD, Smets EMA, Han PKJ. Tolerance of uncertainty: conceptual analysis, integrative model, and implications for healthcare. Soc Sci Med. May 2017;180:62-75. [CrossRef] [Medline]
  2. Albrecht GL, Fitzpatrick R, Scrimshaw SC, editors. Handbook of Social Studies in Health and Medicine. Sage Publications; 2000. [CrossRef]
  3. Simpkin AL, Schwartzstein RM. Tolerating uncertainty - the next medical revolution? N Engl J Med. Nov 3, 2016;375(18):1713-1715. [CrossRef] [Medline]
  4. Kiureghian AD, Ditlevsen O. Aleatory or epistemic? Does it matter? Struct Saf. Mar 2009;31(2):105-112. [CrossRef]
  5. Djulbegovic B, Hozo I, Greenland S. Uncertainty in clinical medicine. In: Philosophy of Medicine. Elsevier; 2011:299-356. [CrossRef]
  6. Bundy H, Gerhart J, Baek S, et al. Can the administrative loads of physicians be alleviated by AI-facilitated clinical documentation? J Gen Intern Med. Jun 27, 2024. [CrossRef] [Medline]
  7. Melarkode N, Srinivasan K, Qaisar SM, Plawiak P. AI-powered diagnosis of skin cancer: a contemporary review, open challenges and future research directions. Cancers (Basel). Feb 13, 2023;15(4):1183. [CrossRef] [Medline]
  8. Gabriel J, Shafik L, Alanbuki A, Larner T. The utility of the ChatGPT artificial intelligence tool for patient education and enquiry in robotic radical prostatectomy. Int Urol Nephrol. Nov 2023;55(11):2717-2732. [CrossRef] [Medline]
  9. Satapathy P, Hermis AH, Rustagi S, Pradhan KB, Padhi BK, Sah R. Artificial intelligence in surgical education and training: opportunities, challenges, and ethical considerations - correspondence. Int J Surg. May 1, 2023;109(5):1543-1544. [CrossRef] [Medline]
  10. Frascarelli C, Bonizzi G, Musico CR, et al. Revolutionizing cancer research: the impact of artificial intelligence in digital biobanking. J Pers Med. Sep 16, 2023;13(9):1390. [CrossRef] [Medline]
  11. Kim K, Lee YM. Understanding uncertainty in medicine: concepts and implications in medical education. Korean J Med Educ. Sep 2018;30(3):181-188. [CrossRef] [Medline]
  12. Cao DY, Silkey JR, Decker MC, Wanat KA. Artificial intelligence-driven digital scribes in clinical documentation: pilot study assessing the impact on dermatologist workflow and patient encounters. JAAD Int. Jun 2024;15:149-151. [CrossRef] [Medline]
  13. Ryan P, Luz S, Albert P, Vogel C, Normand C, Elwyn G. Using artificial intelligence to assess clinicians’ communication skills. BMJ. Jan 18, 2019;364:l161. [CrossRef] [Medline]
  14. Lucas GM, Gratch J, King A, Morency LP. It’s only a computer: virtual humans increase willingness to disclose. Comput Human Behav. Aug 2014;37:94-100. [CrossRef]
  15. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. Feb 2, 2017;542(7639):115-118. [CrossRef] [Medline]
  16. Ivanov O, Wolf L, Brecher D, et al. Improving ED emergency severity index acuity assignment using machine learning and clinical natural language processing. J Emerg Nurs. Mar 2021;47(2):265-278. [CrossRef] [Medline]
  17. Brunsgaard EK, Wu YP, Grossman D. Melanoma in skin of color: part I. Epidemiology and clinical presentation. J Am Acad Dermatol. Sep 2023;89(3):445-456. [CrossRef] [Medline]
  18. Kabigting FD, Nelson FP, Kauffman CL, Popoveniuc G, Dasanu CA, Alexandrescu DT. Malignant melanoma in African-Americans. Dermatol Online J. 2009;15(2):3. [CrossRef]
  19. Cormier JN, Xing Y, Ding M, et al. Ethnic differences among patients with cutaneous melanoma. Arch Intern Med. Sep 25, 2006;166(17):1907-1914. [CrossRef] [Medline]
  20. Qureshi S, Alli SR, Ogunyemi B. Accuracy of ChatGPT-3.5 and GPT-4 in diagnosing clinical scenarios in dermatology involving skin of color. Int J Dermatol. Aug 9, 2024. [CrossRef] [Medline]
  21. Fenton A, Elliott E, Shahbandi A, et al. Medical students’ ability to diagnose common dermatologic conditions in skin of color. J Am Acad Dermatol. Sep 2020;83(3):957-958. [CrossRef] [Medline]
  22. Mises LV. Human Action. Ludwig von Mises Institute; 1949. URL: https://cdn.mises.org/Human%20Action_3.pdf [Accessed 2024-09-26]
  23. Weber MF, Sarich PEA, Vaneckova P, et al. Cancer incidence and cancer death in relation to tobacco smoking in a population-based Australian cohort study. Int J Cancer. Sep 1, 2021;149(5):1076-1088. [CrossRef] [Medline]
  24. Johnson KB, Wei WQ, Weeraratne D, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci. Jan 2021;14(1):86-93. [CrossRef] [Medline]
  25. Hartmaier RJ, Albacker LA, Chmielecki J, et al. High-throughput genomic profiling of adult solid tumors reveals novel insights into cancer pathogenesis. Cancer Res. May 1, 2017;77(9):2464-2475. [CrossRef] [Medline]
  26. Gold M, Amatniek J, Carrillo MC, et al. Digital technologies as biomarkers, clinical outcomes assessment, and recruitment tools in Alzheimer’s disease clinical trials. Alzheimers Dement (N Y). 2018;4(1):234-242. [CrossRef] [Medline]
  27. Moncrieff J, Cooper RE, Stockmann T, et al. The serotonin theory of depression: a systematic umbrella review of the evidence. Mol Psychiatry. 2023;28:3243-3256. [CrossRef]
  28. Hasan MM, Islam MU, Sadeq MJ, Fung WK, Uddin J. Review on the evaluation and development of artificial intelligence for COVID-19 containment. Sensors (Basel). Jan 3, 2023;23(1):527. [CrossRef] [Medline]
  29. Holzinger A, Langs G, Denk H, Zatloukal K, Müller H. Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip Rev Data Min Knowl Discov. 2019;9(4):e1312. [CrossRef] [Medline]
  30. Yang Y, Ngai EWT, Wang L. Resistance to artificial intelligence in health care: literature review, conceptual framework, and research agenda. Inf Manag. Jun 2024;61(4):103961. [CrossRef]
  31. Jussupow E, Spohrer K, Heinzl A. Identity threats as a reason for resistance to artificial intelligence: survey study with medical students and professionals. JMIR Form Res. Mar 23, 2022;6(3):e28750. [CrossRef] [Medline]
  32. Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. J Consum Res. Dec 1, 2019;46(4):629-650. [CrossRef]
  33. Stahl BC. Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies. Springer Nature; 2021. [CrossRef]
  34. Civaner MM, Uncu Y, Bulut F, Chalil EG, Tatli A. Artificial intelligence in medical education: a cross-sectional needs assessment. BMC Med Educ. Nov 9, 2022;22(1):772. [CrossRef] [Medline]
  35. Boscardin CK, Gin B, Golde PB, Hauer KE. ChatGPT and generative artificial intelligence for medical education: potential impact and opportunity. Acad Med. Jan 1, 2024;99(1):22-27. [CrossRef] [Medline]
  36. Han Z, Battaglia F, Udaiyar A, Fooks A, Terlecky SR. An explorative assessment of ChatGPT as an aid in medical education: use it with caution. Med Teach. May 2024;46(5):657-664. [CrossRef] [Medline]
  37. Zhang W, Cai M, Lee HJ, Evans R, Zhu C, Ming C. AI in medical education: global situation, effects and challenges. Educ Inf Technol. Mar 2024;29(4):4611-4633. [CrossRef]
  38. Mehta D. The role of artificial intelligence in healthcare and medical negligence. Liverp Law Rev. Apr 2024;45(1):125-142. [CrossRef]
  39. Lee H. The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ. 2024;17(5):926-931. [CrossRef] [Medline]
  40. Xu X, Chen Y, Miao J. Opportunities, challenges, and future directions of large language models, including ChatGPT in medical education: a systematic scoping review. J Educ Eval Health Prof. 2024;21:6. [CrossRef] [Medline]
  41. Grassini S. Shaping the future of education: exploring the potential and consequences of AI and ChatGPT in educational settings. Educ Sci. 2023;13(7):692. [CrossRef]
  42. Benítez TM, Xu Y, Boudreau JD, et al. Harnessing the potential of large language models in medical education: promise and pitfalls. J Am Med Inform Assoc. Feb 16, 2024;31(3):776-783. [CrossRef] [Medline]
  43. Roos J, Kasapovic A, Jansen T, Kaczmarczyk R. Artificial intelligence in medical education: comparative analysis of ChatGPT, Bing, and medical students in Germany. JMIR Med Educ. Sep 4, 2023;9:e46482. [CrossRef] [Medline]
  44. Lakshan MTD, Chandratilake M, Drahaman AMP, Perera MB. Exploring the pros and cons of integrating artificial intelligence and ChatGPT in medical education: a comprehensive analysis. Ceylon J Otolaryngology. 2024;13(1):39-45. [CrossRef]
  45. Gordon M, Daniel M, Ajiboye A, et al. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Med Teach. Apr 2024;46(4):446-470. [CrossRef] [Medline]
  46. Ossa LA, Rost M, Lorenzini G, Shaw DM, Elger BS. A smarter perspective: learning with and from AI-cases. Artif Intell Med. Jan 2023;135:102458. [CrossRef] [Medline]
  47. Onesi-Ozigagun O, Ololade YJ, Eyo-Udo NL, Ogundipe DO. Revolutionizing education through AI: a comprehensive review of enhancing learning experiences. Int J Appl Res Soc Sci. 2024;6(4):589-607. [CrossRef]
  48. Meskó B. Prompt engineering as an important emerging skill for medical professionals: tutorial. J Med Internet Res. Oct 4, 2023;25:e50638. [CrossRef] [Medline]
  49. van Buchem MM, Kant IMJ, King L, Kazmaier J, Steyerberg EW, Bauer MP. Impact of a digital scribe system on clinical documentation time and quality: usability study. JMIR AI. Sep 23, 2024;3:e60020. [CrossRef] [Medline]
  50. Varghese C, Harrison EM, O’Grady G, Topol EJ. Artificial intelligence in surgery. Nat Med. May 2024;30(5):1257-1268. [CrossRef] [Medline]
  51. Cotton DRE, Cotton PA, Shipway JR. Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innov Educ Teach Int. Mar 3, 2024;61(2):228-239. [CrossRef]


Abbreviations

AI: artificial intelligence
CRP: C-reactive protein
GCA: giant cell arteritis


Edited by Taiane de Azevedo Cardoso; submitted 01.08.23; peer-reviewed by Bertalan Meskó, Racha Onaisi; final revised version received 26.09.24; accepted 27.09.24; published 04.11.24.

Copyright

© Sauliha Rabia Alli, Soaad Qahhār Hossain, Sunit Das, Ross Upshur. Originally published in JMIR Medical Education (https://mededu.jmir.org), 4.11.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.