Published in Vol 7, No 4 (2021): Oct-Dec

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/31043.
Artificial Intelligence Education Programs for Health Care Professionals: Scoping Review


Review

1Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada

2University Health Network, Toronto, ON, Canada

3Vector Institute, Toronto, ON, Canada

4Michener Institute of Education, University Health Network, Toronto, ON, Canada

5Faculty of Medicine, University of Toronto, Toronto, ON, Canada

6Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada

7Wilson Centre, Toronto, ON, Canada

8CAMH Education, Centre for Addictions and Mental Health (CAMH), Toronto, ON, Canada

Corresponding Author:

David Wiljer, PhD

University Health Network

190 Elizabeth Street

R. Fraser Elliott Building RFE 3S-441

Toronto, ON, M5G 2C4

Canada

Phone: 1 416 340 4800 ext 6322

Email: david.wiljer@uhn.ca


Background: As the adoption of artificial intelligence (AI) in health care increases, it will become increasingly crucial to involve health care professionals (HCPs) in developing, validating, and implementing AI-enabled technologies. However, because of a lack of AI literacy, most HCPs are not adequately prepared for this revolution, which is a significant barrier to adopting and implementing AI in ways that will ultimately affect patients. In addition, the limited existing AI education programs face barriers to development and implementation at various levels of medical education.

Objective: With a view to informing future AI education programs for HCPs, this scoping review aims to provide an overview of the types of current or past AI education programs with respect to their curricular content, modes of delivery, critical implementation factors for education delivery, and the outcomes used to assess program effectiveness.

Methods: After the creation of a search strategy and keyword searches, a 2-stage screening process was conducted by 2 independent reviewers to determine study eligibility. When consensus was not reached, the conflict was resolved by consulting a third reviewer. This process consisted of a title and abstract scan and a full-text review. The articles were included if they discussed an actual training program or educational intervention, or a potential training program or educational intervention and the desired content to be covered, focused on AI, and were designed or intended for HCPs (at any stage of their career).

Results: Of the 10,094 unique citations screened, 41 (0.41%) studies met our eligibility criteria. Among the 41 included studies, 10 (24%) described 13 unique programs and 31 (76%) discussed recommended curricular content. The curricular content of the unique programs ranged from using AI and interpreting AI outputs to cultivating the skills needed to explain results derived from AI algorithms. The curricular topics were categorized into three main domains: cognitive, psychomotor, and affective.

Conclusions: This review provides an overview of the current landscape of AI in medical education and highlights the skills and competencies required by HCPs to effectively use AI in enhancing the quality of care and optimizing patient outcomes. Future education efforts should focus on the development of regulatory strategies, a multidisciplinary approach to curriculum redesign, a competency-based curriculum, and patient-clinician interaction.

JMIR Med Educ 2021;7(4):e31043

doi:10.2196/31043


Background

The widespread and rapid adoption of artificial intelligence (AI) technologies in health sciences, education, and practices introduces new ways of delivering patient care [1]. AI is a broad term within computer science that encompasses technologies incorporating human-like perception, intelligence, and problem-solving into complex machines [2]. Big data in health care, along with high-performance computing power, has enabled the use of AI, machine learning (ML), and deep learning, in particular, to improve clinical decision-making and health sector efficiency [3]. More recently, AI-enabled technologies have continued to emerge, predominantly in the medical fields of radiology, anesthesiology, dermatology, surgery, and pharmacy [4-7]. Although AI is not likely to replace clinical reasoning, Mesko [8] predicts that AI will influence all specialties to varying degrees, depending on the nature of the practice (eg, the degree of repetitive tasks involved and whether the tasks are data driven). However, the efficacy of AI-enabled technologies in health care depends on the involvement of health care professionals (HCPs) in developing and validating these technologies. Therefore, HCPs should play a role in this transformation and be involved in every aspect of shaping how AI adoption will affect their specialties and organizations.

Recommendations for HCP involvement are emerging. For instance, in the field of medical imaging, West and Allen [9] recommend that HCPs be involved in (1) implementing data standards and following them in practice, (2) prioritizing use cases of AI in medicine, (3) determining the clinical impact of potential algorithms, (4) describing and articulating the needs of the profession for data scientists and researchers, and (5) participating in the translation of practice needs from human language into machine language. As these technologies emerge, it is essential for HCPs and educators to have the competencies required to rapidly develop and incorporate these changes into their practices and disciplines.

At the individual level, a lack of AI literacy is a significant barrier to the adoption and use of AI-enabled technologies to their full capacity in various medical specialties. For AI education programs specifically, there are barriers to implementation at various levels of medical education (undergraduate, postgraduate, practice-based education, or continuing professional development). For instance, health informatics plays a valuable role in modern medicine, yet it is not the focus of most medical school curricula [2]. Technology experts are often consulted to provide training on the use of electronic clinical tools, but this does not build the level of skill required to understand how these tools could be used to enhance patient interactions and improve care [10]. Another example exists within radiology residency programs, where a lack of awareness and a lack of knowledge about implementing and using AI were cited as barriers to its adoption [11,12]. Incorporating AI fundamentals into health professionals' curricula is essential, and it would be useful to balance this knowledge with providing patient-centered care by empowering future HCPs to consider AI in the context of their own clinical judgment. The combination of trust in their own judgment and basic statistical knowledge will be useful in understanding how to best apply new AI-driven technologies in clinical practice [13]. AI needs to be considered within the context of HCPs' broader skill sets, priorities, and ultimate goals in health care; this includes encouraging patient-centered, compassionate care in clinical practice [13,14].

Martec’s Law refers to the idea that technology changes occur much more rapidly, and in fact exponentially, compared with the ability of organizations to adopt these technologies [1]. Therefore, organizations need to promote innovative technologies proactively and empower their professionals to be adequately trained to successfully implement AI-based tools in their practice [1]. A concerted, deliberate approach is required to incorporate these new technologies, both effectively and compassionately, at an individual level and within the culture and operations of an organization [1].

A number of potential barriers to implementing these technologies exist; the 3 main limitations identified are regulatory, economic, and organizational culture issues [15]. Regulatory approval [16] is needed to adopt AI technologies in clinical settings, and potential liabilities in using these technologies for patient care must be considered, as well as the safety, efficacy, and transparency of AI algorithms for clinical decision-making [17,18]. Regulatory issues can also come into play when accessing data for AI adoption; multi-institution data sharing is required for algorithm improvement and validation, along with the accompanying research ethics board and regulatory approvals [18]. To further improve adoption, these technologies will also have to be economical, supported by adequate funding [18], and be seen as valuable to the organization itself. At an organizational level, the use of AI should align with the goals and strategic plans of an organization; organizations will need to assess how well the AI technology will integrate into existing systems, including data warehouses and electronic health records [18]. It may be difficult to generalize a particular AI model across different clinical contexts to a degree that would prove valuable at an organizational level while still working seamlessly and being clinically useful at the individual level [15]. Furthermore, when choosing to adopt AI technologies, organizations can either collaborate with outside vendors or create the technologies in-house, which will require additional human and material resources [15].

Objective

Deficits in AI education may be contributing to a lack of capacity in health care systems to fully integrate and adopt AI technologies to improve patient care, despite calls for AI integration as part of the National Academy of Medicine's Quintuple Aim Model [19]. It is important to equip health care organizations and their stakeholders with the cognitive, psychomotor, and affective skills to harness AI in enhancing and optimizing the delivery of care. This will also involve supporting AI education initiatives that are widely available to all types of HCPs. To support future AI education development, dissemination, and evaluation, it is important to assess the current state of AI adoption in health care and further understand the extent of AI education implementation, including who is receiving AI training or education, what content is covered, how it is delivered, and whether this reflects what experts believe AI education curricula should include. Therefore, this scoping review aims to establish a foundational understanding of education programs on AI for HCPs by determining the following:

  1. What were the most effective educational approaches to enabling HCPs to harness AI in enhancing and optimizing health care delivery?
    • What curricular content was delivered?
    • What was the scope of content that should be delivered?
    • What learning objectives were used in these approaches, using the taxonomy for learning formulated by Bloom [20]?
  2. What were the enablers or barriers that contributed to the success of these programs and the implementation of AI curricula in health care education programs?
  3. What outcomes were used to assess the effectiveness of the education programs, using the Kirkpatrick-Barr Framework [21]?

Overview

This scoping review followed the Arksey and O’Malley [22] guidelines and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Extension for Scoping Reviews checklist [23,24]. The objective of this scoping review is to examine and summarize the extant literature on AI education and training for HCPs.

Stage 1: Search Strategy

A health sciences librarian (MA) developed strategies for Ovid MEDLINE All, Ovid Embase, Ovid APA PsycINFO, Ovid Emcare Nursing, Ovid Cochrane Database of Systematic Reviews, Ovid Cochrane Central Register of Controlled Trials, EBSCO ERIC, and Clarivate Web of Science using appropriate subject headings and keywords for AI and health professions education. As a result of the widespread use of terms relating to health professions and education in health sciences literature, the decision was made to focus the searches on health professions education concepts to reduce noise in the result sets. Searches for these subject headings were limited to where they were the major subject heading (the most important subject heading in the database record for an item). Keywords for these concepts were only searched in the study titles, the author-assigned keywords, heading words, and journal titles, depending on the content and field availability of the database. No language or date limits were applied. The searches were run and the results were downloaded on July 7, 2020. For the complete strategies, see Multimedia Appendix 1. If the search results included conference abstracts and proceedings, a subsequent search to find any corresponding follow-up studies was conducted in Google Scholar. Finally, pearl growing, also known as a hand comb process, was conducted, in which all cited works in the included studies from the initial screening underwent a 2-stage screening process (title and abstract scan as well as full-text review).

Stage 2: Study Selection

The 2-stage screening process consisted of (1) title and abstract scan and (2) full-text review. Study eligibility was determined by 2 independent reviewers, and a third reviewer was involved to resolve any conflict when consensus was not reached between the 2 reviewers. For a study to be included for full-text review and to be chosen for subsequent inclusion, the title and abstract at each stage needed to have the following attributes:

  1. It discussed an actual training program or educational intervention, or a potential training program or educational intervention along with the desired content to be covered.
  2. It focused on AI.
  3. It was designed or intended for HCPs (at any stage of their career).

A pilot review of 20% (595/2973) of the MEDLINE citations was conducted to establish interrater reliability. The interrater reliability threshold was set at a Cohen κ of 0.70, indicating substantial agreement. Additional batches of 50 citations were reviewed until the threshold was met.
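Cohen κ can be computed directly from the two reviewers' paired decisions. The sketch below implements the standard κ formula (observed vs chance-expected agreement); the function name and the include/exclude decision vectors are illustrative, not taken from the study:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' paired categorical decisions."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for 10 citations (1 = include, 0 = exclude)
reviewer_1 = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
reviewer_2 = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]
print(round(cohen_kappa(reviewer_1, reviewer_2), 2))  # → 0.78, above the 0.70 threshold
```

In this toy example the reviewers disagree on 1 of 10 citations, giving κ ≈ 0.78, which would clear the 0.70 threshold described above.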

Stage 3: Data Collection

A standardized charting form was developed to capture the following domains: article details, study details (if publication was an empirical study), education program details, and implementation factors. The subdivisions of the domains for the data extraction are outlined in Table 1.

Table 1. Data charting: domains and subdomains.
  Article details: article type, year, and country
  Study details: study design, participants, intervention, comparator, primary outcomes, and secondary outcomes
  Education program details: name, setting, participants, program delivery and curriculum, program instructors (discipline), program length, and instructor training
  Implementation factors: implementation enablers or facilitators, implementation barriers, and recommendations

Stage 4: Synthesizing and Reporting the Results

To collate, summarize, and report on the included studies in this review, a narrative synthesis approach was used [25]. This included a numeric summary using descriptive statistics to report each domain (article details, study details, education program details, and implementation factors). For the program curriculum under education program details, curriculum topics were inductively coded. Once a list of topics was generated, the topics were grouped by domain using the taxonomy for learning formulated by Bloom, which comprises 3 domains: (1) cognitive, referring to knowledge that learners should have; (2) psychomotor, referring to skills learners should demonstrate and master; and (3) affective, referring to attitudes learners should develop and incorporate into their practice [20]. The study outcomes were deductively coded using the Kirkpatrick-Barr Framework [21] of educational outcomes. This framework was selected because it provides a standardized method of categorizing the type of educational outcome reported by each study. The implementation factors subdomain was thematically analyzed by 2 independent reviewers using a priori codes. The reviewers compared coding schemes and iteratively determined overarching themes to frame their findings. For content validation, the project team members, patients, and experts in the fields of medical education and AI provided feedback on the thematic analysis.


Search Results

The initial database search yielded 13,449 results; after duplicates were removed, 10,094 (75.05%) unique citations remained for screening. From these 10,094 unique citations, we identified 41 (0.41%) eligible articles [2,5,13,26-63]: 13 unique, existing programs [32,35,39,43,49,50,59,61-63] were mentioned in 10 (24%) articles, and the remaining 31 (76%) articles [2,5,13,26-31,33,34,36-38,40-42,44-48,51-58,60] discussed desired or recommended curricular content. The article selection process is presented in Figure 1. Of the 10 articles that discussed an existing program, 8 (80%) were commentaries [32,43,49,50,59,61-63], 1 (10%) was a case report [39], and 1 (10%) was an empirical study [35]. Tables 2 and 3 describe the characteristics of the articles and programs included in this review.
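As a quick arithmetic check, the screening proportions reported above can be reproduced from the raw counts (the variable names are illustrative):

```python
total_results = 13449   # initial database search results
unique_citations = 10094  # remaining after deduplication
included_articles = 41    # articles meeting eligibility criteria

# Proportion of search results that were unique citations
print(round(unique_citations / total_results * 100, 2))  # → 75.05
# Proportion of unique citations that met the eligibility criteria
print(round(included_articles / unique_citations * 100, 2))  # → 0.41
```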

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram of the scoping review results. AI: artificial intelligence; CPD: continuing professional development.
Table 2. Summary of article characteristics (N=41).
Characteristic: frequency, n (%) [references]

Study type
  Commentary: 30 (73) [13,26-30,32-34,36,38,41-45,47,49-53,55,56,58-63]
  Review: 6 (15) [2,5,40,48,54,57]
  Empirical study: 3 (7) [31,35,37]
  Case report: 1 (2) [39]
  Best Evidence Medical Education Guide: 1 (2) [46]

Publication year
  2013: 1 (2) [39]
  2016: 3 (7) [35,56,61]
  2017: 2 (5) [31,50]
  2018: 8 (20) [5,28,32-34,42,47,53]
  2019: 16 (39) [13,26,27,29,30,38,40,44-46,48,49,52,57,59,62]
  2020: 11 (27) [2,36,37,41,43,51,54,55,58,60,63]

Country
  United States: 23 (56) [2,26,28-32,34,35,42,44,45,47,49-52,54,55,59,61-63]
  Canada: 4 (10) [13,27,33,43]
  United Kingdom: 2 (5) [37,56]
  Other: 12 (29) [5,36,38-41,46,48,53,57,58,60]
Table 3. Summary of program characteristics (N=13).
Characteristic: frequency, n (%) [references]

Program type
  Workshop: 2 (15) [49,50]
  Fellowship: 3 (23) [32,59]
  Biomedical informatics course: 2 (15) [35,39]
  Data science course: 2 (8) [59,61]
  Joint course-based program: 1 (8) [59]
  Educational summit: 1 (8) [62]
  Certificate program: 1 (8) [43]
  Artificial Intelligence Journal Club: 1 (8) [63]

Program setting
  Medical school: 6 (46) [39,43,59,61,62]
  Academic hospital: 4 (31) [32,35,59]
  National: 1 (8) [49]
  International: 2 (15) [50,63]

Program length
  >1 year: 2 (15) [43,61]
  >1 month: 2 (15) [39,63]
  >1 day: 1 (8) [50]
  ≤1 day: 2 (15) [49,62]
  Not reported: 6 (46) [32,35,59]

Program audience
  Health care professionals: 12 (92) [32,35,39,43,49,50,59,61-63]
  Researchers or clinician scientists: 2 (15) [59,62]
  Health care administrators: 1 (8) [62]
  Other health disciplines: 1 (8) [63]

Continuum of learning (a)
  Undergraduate medical education: 5 (39) [39,43,59,62]
  Postgraduate medical education: 8 (62) [32,35,49,50,62,63]

Program objectives (b)
  Cognitive or psychomotor: 10 (77) [32,35,39,43,49,50,59,61,63]
  Affective: 1 (8) [39]
  Both: 2 (15) [59,62]

Program methods
  Didactic: 9 (69) [32,35,39,43,49,50,59,61,62]
  Workshop: 2 (15) [49,50]
  Case-based: 2 (15) [49,50]
  Discussions: 2 (15) [49,62]
  Experiential learning: 5 (39) [43,49,59]
  Web-based: 3 (23) [39,61,63]

Number of methods used
  1 method: 5 (39) [32,35,59,63]
  2 methods: 5 (39) [39,43,59,61,62]
  ≥3 methods: 2 (15) [49,50]

Study outcomes (c)
  Level 1: 3 (23) [39,49,50]
  Level 2a: 3 (23) [49,50,62]
  Level 2b: 2 (15) [35,62]
  None: 8 (62) [32,43,59,61,63]

(a) There are no continuing medical education programs.

(b) Categorized based on the domains identified in the taxonomy for learning formulated by Bloom [20].

(c) Categorized based on the education outcomes identified in the Kirkpatrick-Barr Framework [21].

What Was the Mode of Delivery?

Summaries of the individual programs can be found in Table 4. Of the 13 programs, 8 (62%) originated from the United States [32,35,49,50,59,61-63], 1 (8%) from Canada [43], 1 (8%) from France [59], and 1 (8%) from Mexico [39]. The typology described by Strosahl [64] was used to classify the educational methods. Of the 13 programs, 9 (69%) took a didactic approach [32,35,39,43,50,59,61,62], in combination with discussions [62] (1/13, 8%), web-based learning [39,61] (2/13, 15%), workshops and case-based learning [50] (1/13, 8%), or experiential learning [43] (1/13, 8%). Of the 13 programs, 10 (77%) were taught in an academic setting [32,35,39,43,59,61,62].

Table 4. Summary of program details.
Each entry lists: program name or first author; country; host institution; specialty; program length. (Program settings and curriculum delivery methods are summarized in Table 3.)

  Artificial Intelligence Journal Club; United States; American College of Radiology; Radiology; monthly for 1 hour [63]
  Educational Summit; United States; Duke University Medical Center; NS; <1 day [62]
  Health Care by Numbers; United States; New York University; NS; 3 years [61]
  Joint course-based program; France; Gustave Roussy with École des Ponts ParisTech and CentraleSupélec; NS; NR [59]
  Fellowship; United States; Emory University School of Medicine; Radiology; NR [59]
  Fellowship; United States; Hospital of the University of Pennsylvania; Imaging Informatics; NR [59]
  Elective courses; United States; Carle Illinois College of Medicine; NS; NR [59]
  Introduction to Comparative Effectiveness Research and Big Data Analytics for Radiology; United States; New York University School of Medicine; medical imaging; 2 days [50]
  Kinnear; United States; University of Cincinnati; NS; <1 day [49]
  Computing for Medicine certificate program; Canada; University of Toronto, Faculty of Medicine; NS; 2 years [43]
  The National Autonomous University of Mexico, Faculty of Medicine, biomedical informatics education; Mexico; University of Mexico's Faculty of Medicine; NS; 2 one-semester courses [39]
  Formalized bioinformatics education; United States; Baylor Scott and White Medical Center; medical imaging; NR [35]
  National Cancer Institute–Food and Drug Administration Information Exchange and Data Transformation fellowship in oncology data science; United States; National Cancer Institute; medical imaging; NR [32]

NS: specialty not specified.

NR: not reported.

Target Audience

There were 3 types of HCPs identified in the 41 reviewed papers: physicians [41,43,46,52,59,63] (6/41, 15%), nurses [31,52] (2/41, 5%), and radiology technologists [5] (1/41, 2%). In addition, 2 specific specialties were identified: medical imaging [5,26,32-34,37,42,48,50,51,55,58,63] (13/41, 32%) and cardiology [56,61] (2/41, 5%); the remaining papers did not specify a specialty [2,13,27-31,35,36,38-41,43-47,49,52-54,57,59,60,62] (26/41, 63%). Figure 2 illustrates the types of curriculum topics covered across the continuum of learning for clinicians, which includes undergraduate medical education [2,13,28-30,33,35-37,39-41,43,44,47,53,57,61,62] (20/41, 49%), postgraduate medical education [5,26,32-35,41,42,48-51,54-58,62,63] (19/41, 46%), and continuing professional development [5,57] (2/41, 5%). Other, nonclinical professionals included researchers [5,27,33,59,62] (5/41, 12%), health care administrators [27,33,45,52,62] (5/41, 12%), and computer and data scientists [27,33,52,63] (4/41, 10%).

Figure 2. Number of articles in each curriculum topic domain grouped by target audience.

What Content Was Covered?

Across these papers, the program curricula and the desired or recommended content included topics on using AI, interpreting results from AI, and explaining results from AI, as framed by McCoy et al [43]. A description of each curricular topic can be found in Table 5.

The curricular topics were categorized into the 3 domains identified in the taxonomy for learning formulated by Bloom [20]. Of the 16 curricular topics, 9 (56%) fell under the cognitive domain, 6 (38%) under the psychomotor domain, and 1 (6%) under the affective domain; most were mentioned both by papers describing current education programs and by commentaries discussing what HCPs should be learning. Table 6 displays the curricular topics unique to the 24% (10/41) of papers [32,35,39,43,49,50,59,61-63] that described what AI programs currently teach, those unique to the 76% (31/41) of papers [2,5,13,26-31,33,34,36-38,40-42,44-48,51-58,60] that described what AI programs should teach as part of their curriculum, and those that appeared in both.

Table 5. Curriculum focus and objectives.
Theme (framed by McCoy et al [43]) and topic: description (number of studies) [references]

Using AI (AI: artificial intelligence)
  Fundamentals of AI: An overview of all stages of model development, translation, and use in clinical practice, covering nomenclature and principles such as data collection and transformation, algorithm selection, model development, training and validation, and interpreting model output (20 studies) [5,26,27,32,34,36,37,39-41,43,46,47,51,52,55,57-59,62]
  Fundamentals of health care data science: A fundamental understanding of the environment supported by AI, including an overview of biostatistics, big data, available data streams, and how algorithms and machine learning use and process data (20 studies) [5,13,26-36,41,42,45,49,51,52,54]
  Fundamentals of biomedical informatics: An overview of essential concepts such as nomenclature (information and knowledge taxonomy), structure and function of computers, information and communications technology, standards in biomedical informatics, and technology evaluation (1 study) [39]
  Multidisciplinary collaboration: Learning how to partner and communicate with experts in engineering and data science to ensure the clinical relevance and accuracy of AI systems (13 studies) [26,29,31,33,43,45,51-54,57,58,62]
  Applications of AI: Examples of AI implemented in health care settings, to understand the impact of technologies that incorporate AI (11 studies) [2,32,39,40,44-46,51,52,55,57]
  Implementation of AI in health care settings: Understanding how to embed AI tools into clinical settings and workflows, including requirements for clinical translation and interpretation of model outputs (9 studies) [27,30,32-34,41,45,57,62]
  Strengths and limitations of AI: Understanding the value, pitfalls, weaknesses, and potential errors or unintended consequences that may occur when using AI tools (13 studies) [26,30,32-34,37,41,45,51,52,55,58,62]
  Ethical considerations: Understanding and building awareness of ethics, equity, inclusion, patient rights, and confidentiality when using AI tools (13 studies) [5,26,28-30,33,36,39,41,42,46,54,58]
  Legal considerations and governance strategy: Understanding data governance principles, regulatory frameworks, legislation, and policy on using data and AI tools, as well as liability and intellectual property issues (7 studies) [27,30,39,41,45,51,58]
  Economic considerations: "Understanding of how business or clinical processes will be altered through the integration of AI technologies into health care" [58], as well as commercialization (2 studies) [26,33]

Interpreting results from AI
  Medical decision-making: Understanding decision science and probabilities from AI diagnostic and therapeutic algorithms in order to apply them meaningfully in clinical decision-making (8 studies) [13,26,28-31,39,51]
  Data visualization: Understanding how to present and describe outputs from AI tools (4 studies) [27,30,52,54]
  Product development projects: Hands-on experience developing, testing, and validating AI algorithms with real medical data (2 studies) [52,54]

Explaining results from AI
  Communicating with patients: Mastering how to communicate results to patients in a personalized and meaningful way and discuss the use of AI in the medical decision-making process (8 studies) [5,28-30,32,36,43,46]
  Compassion and empathy: Cultivating and expressing empathy and compassion when communicating with patients (4 studies) [28-30,36]
  Critical appraisal: Understanding how to evaluate AI diagnostic and therapeutic algorithms (7 studies) [2,34,40,43,51,54,59]

Table 6. Illustration of the cognitive, psychomotor, and affective domains between what programs currently teach as part of the artificial intelligence (AI) curriculum and what programs should teach.
Cognitive
  What programs currently teach: informatics
  Common to current and recommended curricula:
    • Fundamentals of AI
    • Implementation of AI in health care settings
    • Big data
    • Data science
    • Machine learning
    • Statistics
    • Multidisciplinary collaboration
    • Strengths and limitations of AI
    • Challenges with AI
    • AI applications
  What programs should teach:
    • EHRa fundamentals
    • Predictive analytics
    • Ethics and legal issues
    • Data governance
    • Economic considerations

Psychomotor
  What programs currently teach: leadership
  Common to current and recommended curricula:
    • Analytical skills
    • Problem solving
    • Interpretation
    • Communication
    • Critical appraisal
    • Medical decision-making
    • Cultivation of compassion and empathy
  What programs should teach:
    • Product development
    • Data visualization

Affective
  What programs currently teach: perception of humanistic AI-enabled care
  Common to current and recommended curricula:
    • Beliefs about how AI will affect the future of health careers and patient care
  What programs should teach:
    • Change management
    • Adoption of AI
    • Creating and sustaining a culture of trust and transparency with stakeholders and patients

aEHR: electronic health record.

Cognitive Domain

Of the 41 papers, 20 (49%) [5,26,32,34,36,37,39-41,43,45-47,50,52,55,57-59,62] highlighted the importance of providing HCPs with a baseline understanding of AI, and 10 (24%) [32,35,39,43,49,50,59,61-63] recommended teaching AI applications. The studies focused on various applications of AI, including diagnostic systems; data gathering, assessment, and use; clinical applications; and personalized care. In addition, many of the papers reported that medical curricula should integrate the fundamentals of health care data science [5,13,26-36,42,45,49,50,52,54] (19/41, 46%), including, but not limited to, big data and bioinformatics. Matheny et al [45] stated that data science curricula should encompass how to form multidisciplinary development teams to improve the value of AI, and should build awareness of the ethics, equity, diversity, and inclusion principles at play and of the inadvertent ramifications that may result from AI implementation. The studies also focused on statistics; ML, including model development, translation, and use in clinical practice; data extraction; and applications for visualizing patient data. Familiarity with ML vocabulary and a basic understanding of the methodology (how algorithms gather and process data) were deemed important for understanding this rapidly emerging field.

Psychomotor Domain

Most of the papers focused on clinicians being able to effectively analyze data [2,5,31,34,35,40,43,46,50,54,55,58,59,61,63] (15/41, 37%) to identify trends and efficiency correlations. As highlighted by Balthazar et al [63] and Forney and McBride [55], it is imperative to learn how to evaluate the efficacy and precision of AI applications. This point was reinforced in a review by Park et al [40], which stated that medical students should be able to validate the clinical accuracy of algorithms. HCPs will need to become accustomed to real-time health information and understand how to embrace it to make decisions in their practice settings [61]. Of the 41 papers, 8 (20%) discussed the significance of understanding and interpreting findings with a reasonable degree of accuracy, including awareness of source error, bias, or clinical irrelevance [13,28-31,39,47,50]. Moreover, the study findings described problem-solving [35,38,60] (3/41, 7%) as a critical skill, entailing the management and application of several distinct resources. Clinicians will need to become adept at communicating results and processes [5,28-30,32,36,43,46] (8/41, 20%) with patients in a personalized and meaningful manner. Cultivating and expressing empathy and compassion [28-30,36] (4/41, 10%) when communicating with patients was emphasized in several studies.

Affective Domain

Of the 41 papers, 8 (20%) stressed that HCPs should have the attitude needed to harness AI tools effectively to improve outcomes for patients and their communities [27,30,37,52,55,58,59,62]. Wiljer and Hakim [27] asserted the importance of dispelling widespread stereotypes about AI as an initial step. It is essential that professionals perceive AI as augmenting their delivery of care, rather than taking over aspects of the health care system [62]. Forney and McBride [55] stated that clinicians are less likely to perceive AI as a threat if they can see the wide array of AI tools and the impact these tools have on workflow and patient care. Furthermore, Sit et al [37] mentioned that medical students are less likely to be discouraged from pursuing certain specialties when they are presented with use cases and understand the boundaries of AI tools; almost half of their respondents held the misconception that, because of AI, certain specialists such as radiologists will become obsolete in the near future. Moreover, Brouillette [59] mentioned the need for collaborative programs among medical, computer science, and engineering students, through which they can better understand each other’s disciplines. A few papers recommended that future AI programs integrate change management and establish a culture of trust and transparency with relevant stakeholders, which will help organizations more rapidly adopt and implement AI technologies within the health care ecosystem [27,30]. Thus, it is vital to help organizations manage change at a rate that keeps pace with the rapid advancement of technology.

What Were the Critical Implementation Factors?

Enablers

The factors identified as contributing, or potentially contributing, to the success and implementation of these programs include promoting interfaculty collaboration [39,54,57] and working within existing regulatory structures [28,37,39,57]. Not all institutions have clinical faculty who also have data science experience; hence, there is a need, in both practice and teaching, for collaboration with data science faculty. Promoting interfaculty collaboration was described in the studies as the sharing of expertise among faculty members, thereby creating a multidisciplinary team [39,54,57]. Collaborative teaching by clinical and nonclinical instructors may increase the educational value of programs preparing future HCPs and also provide data science support to faculty [39,54]. Another facilitator of implementation is working within existing regulatory structures: curriculum changes require the support of existing accreditation and regulatory bodies [28]. A few papers discussed the need to integrate mandatory AI coursework and assessments into current curricula [39,57], which could address varying AI literacy levels; enhancing knowledge of AI increases the likelihood that it will be used in practice settings [37].

Barriers

Overall, 2 major barriers were identified that could impede an organization’s implementation efforts: (1) varying levels of AI literacy among the faculty designing curricula [54,57] and (2) a lack of infrastructure to integrate AI into current curricula [34,39,50,54]. Varying levels of AI literacy among faculty and curriculum leaders were discussed as a major barrier encumbering the implementation of AI programs. Of the 41 papers, 2 (5%) discussed how faculty have limited knowledge of AI fundamentals (eg, big data or data science) and software, as well as limited time to teach [54,57]. There is also a lack of technical expertise to design AI-based curricula [49,57]. Moreover, a few studies voiced concerns about the lack of infrastructure to integrate AI into the curriculum. Some studies highlighted that existing curricula are already comprehensive and complex, and additional AI content will increase the course load [34,50,54]. Academic institutions face several encumbrances, such as faculty retirement, staff who are not well versed in AI, and inadequate financial resources [54]. Finally, integrating AI content into existing curricula can be an impediment for many organizations [39].

What Measures and Outcomes Were Used to Assess the Effectiveness of Education Programs?

Of the 41 papers, 5 (12%) presented the results of their training evaluation [35,39,49,50,62]. As the educational approaches varied across studies, each approach is briefly discussed (Table 7), along with the measures and outcomes associated with each educational initiative. Categorized according to the Kirkpatrick-Barr framework, the outcomes were either level 1 (ie, learner reaction and satisfaction with the education) [39,49,50], level 2a (ie, change in attitude) [49,50,62], or level 2b (ie, change in knowledge or skill) [35,62]. No outcomes could be categorized as level 3 or level 4; thus, the program evaluations did not comment on change in behavior, effects at the organizational level, or patient outcomes.
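The Kirkpatrick-Barr categorization above can be summarized as a simple lookup (an illustrative sketch; the level labels follow the framework as applied in this review, and the variable names are ours):

```python
# Kirkpatrick-Barr outcome levels as applied in this review.
KIRKPATRICK_BARR = {
    "1": "learner reaction and satisfaction with the education",
    "2a": "change in attitude",
    "2b": "change in knowledge or skill",
    "3": "change in behavior",
    "4": "organizational-level change or patient outcomes",
}

# Outcome levels reported by the 5 evaluated programs, keyed by reference number.
REPORTED = {
    "39": {"1"},
    "49": {"1", "2a"},
    "50": {"1", "2a"},
    "62": {"2a", "2b"},
    "35": {"2b"},
}

# Levels no program evaluation reached (levels 3 and 4).
unreported = set(KIRKPATRICK_BARR) - set().union(*REPORTED.values())
```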

Table 7. Summary of the 5 papers that assessed the effectiveness of the education program.
Programs and authors | Measure | Actual outcomes
Educational summit

Barbour et al 2019, [62]
  • Conducted a 5-question before-and-after poll of those attending the educational summit
  • Level 2a: Baseline beliefs about how AIa will affect the future of health care careers and patient care were similarly positive before and after the event
  • Level 2a: At arrival, 70% of the attendees felt that AI would make health care less humanistic; 50% left the summit feeling neutral
  • Level 2a: The authors did not observe a meaningful shift in attitudes regarding the desire to take a leadership role in developing or implementing AI
  • Level 2b: At arrival, 40% of the attendees believed that they had a poor baseline understanding of AI’s role in health care; 90% left the summit with an enhanced understanding of the topic
Workshops

Kang et al 2017, [50]
  • A survey was designed to capture residents’ opinions after their minicourse, covering 5 major areas of interest:
  1. How helpful the minicourse was as an introduction to CERb and big data research (on a 5-point scale, with 5 indicating very helpful)
  2. Whether the residents would likely pursue further educational or research opportunities in CER
  3. Whether the residents had prior educational or research exposure to CER
  4. Whether a mentor was available for CER at their home institutions
  5. The importance of CER and big data research to the field of radiology (on a 5-point scale, with 5 indicating very important)
  • Level 1: 90% of the residents reported that the course was helpful or very helpful
  • Level 1: 94% of the participants felt that the lectures were of high or very high quality
  • Level 2a: 82% reported that they planned to pursue additional educational or research training in CER or big data analytics after the course
  • Level 2a: 98% of the respondents felt that health services and big data research are important or very important for the future of radiology

Kinnear et al 2019, [49]
  • Evaluations were conducted on a 5-point Likert scale
  • Level 1: The average weighted rating on a 5-point Likert scale over the 3 years for the prompt “Overall satisfaction with the session” was 4.32 out of 5
  • Level 2a: The participants reported an increase in confidence to use this knowledge to teach residents in the coming academic year
Biomedical informatics course within medical education

Sanchez-Mendiola et al 2013, [39]
  • Administered an anonymous program-evaluation survey to the students at the end of the course, a 41-item questionnaire that explored several aspects of the program
  • Level 1: Overall opinion of the students regarding the different elements of the program was good to excellent for educational activities, course resources, and perception of clinical relevance

Sybenga et al 2016, [35]
  • Staff evaluated the competency of senior residents, on the basis of their project results, during a multidisciplinary conference
  • Level 2b: After introductory education in big data analysis concepts, the residents were able to rapidly analyze large sets of data to answer simple questions
  • Level 2b: The senior residents were able to engage in complex problem solving requiring management and application of multiple seemingly unrelated resources and successfully present these results

aAI: artificial intelligence.

bCER: comparative effectiveness research.


Current State of AI Education Programs

This review identified pivotal gaps in our understanding of effective AI education programs for HCPs. The gaps identified illustrate the limited AI education and training opportunities available to HCPs and thus emphasize the necessity of curating further AI education programs targeted to HCPs. Existing programs tend to focus on the development and implementation of AI; it is essential to prepare HCPs not only to work with AI but also to advance AI for health and clinical decision-making. AI education programs should be designed to enable HCPs not only to safely adopt these technologies but also to adapt and shift their scope of practice to stay relevant. A significant and meaningful change to AI curricula in health care will only occur by increasing AI literacy among HCPs and by providing them with the ability to leverage relevant digital and data-driven decision-making tools. Although the studies demonstrate that efforts are being made to evaluate the outcomes of AI-related education initiatives, the measures lack the consistency needed for a comprehensive assessment of these outcomes. Most of the papers used self-constructed, nonvalidated instruments and delineated their findings in qualitative terms. Given the variety of instruments employed across the studies, the absence of a standard, comprehensive tool impedes the integration and synthesis of findings. The guiding principles provided in this review will also hopefully inform the future development and design of these programs.

Critical Implementation Factors

A lack of infrastructure to integrate AI content into current curricula could hinder the development of these types of programs; some of the programs described embedded their content within existing professional certifying bodies’ infrastructure to facilitate content development. The Royal College of Physicians and Surgeons of Canada, in particular, further emphasizes the need for such regulatory strategies, which are currently in process but not yet in practice [65,66]. The promotion of multidisciplinary collaboration was indicated as an enabler of content delivery; yet, varying levels of AI literacy among faculty could impede the successful delivery of AI content [54,57]. Curricular adaptations and building an infrastructure for AI technologies could help HCPs wanting to adopt AI to improve patient care; this includes improvements in the types of health care data available for AI education [67]. Of note, much of the health data generated are often inaccessible to researchers and limited by regulatory or infrastructure-level barriers, including institutional ethics approvals and data-sharing agreements [67]. The use of deidentified data and of security and privacy controls could potentially widen the scope of access; broader collaboration with multidisciplinary experts could also help establish secure data networks that improve the use of, and access to, health care data [67]. Lower levels of AI literacy could be addressed by standardizing competency statements and by engaging and training faculty in e-learning, for instance [68-70]. The World Health Organization’s Global Strategy on Digital Health further suggests that barriers to AI adoption need to be addressed at the systems level and that all aspects of implementation should be considered.

Our recommendations have been formed into guiding principles that could be used to guide the development of future AI curricula or to incorporate AI education into existing curricula.

Guiding Principles

Principle 1: Need for Regulatory Strategies

Many studies discussed how working within the existing regulatory structure can hinder the implementation of AI education initiatives. Faculty can inhibit changes to curricula that were initially developed to prepare students for their national board examinations [28,30]. In addition, teaching approaches may be too outdated to incorporate new and emerging technologies [29] into the changing digital and AI landscape. New regulatory strategies will be required, and organizations will have to prioritize developing a workforce that has not only the knowledge and skills to provide care with these tools but also the competencies to rapidly learn and adapt. The studies also highlighted that accrediting bodies can be a roadblock to change [27-29]. Wartman and Combs [28] stated that, to prepare future care providers for AI-enabled care, accreditors need to move beyond traditional models (based on fact memorization and clinical clerkships) and be willing to innovate and consider new approaches to lifelong learning.

Principle 2: Multidisciplinary Approach to Design and Delivery

The rapidly evolving nature of the field and the dynamic regulatory, legal, and economic landscape may hinder the implementation of an AI curriculum and thus affect the deployment of AI tools in clinical practice. An initial AI curriculum must be developed iteratively because many of these areas still entail considerable research and advancement [26], ensuring that new knowledge and policy changes are reflected in the curriculum. This finding was reinforced in a paper by Wiljer and Hakim [27], who reported that many AI applications have not yet reached a level of complexity and clinical value because they are still in the research and development stages.

Wiens et al [71] stated that successful ML deployment entails assembling experts and stakeholders from various disciplines, including knowledge experts, decision-makers, and users. The approach to curriculum redesign will need to focus on multiple disciplines and levels of training; curricula should be specialized to the needs of various individuals such as health care researchers, clinicians, and quality improvement teams [44]. Therefore, the development of an AI-based curriculum should involve a multidisciplinary team comprising health system leaders, frontline providers, data scientists, patients, and education experts to ensure accuracy and clinical relevance of the curriculum [57,71]. It is imperative for all stakeholders and experts in the field to work collaboratively to understand and address the potential biases, thus reducing the existing social inequalities and ultimately leading to optimal care for all patients [71].

Principle 3: Competence-Based Curriculum Design

To influence the development of their future practice, it is essential for HCPs to have a foundational level of AI competencies and skills [27]. Education should be designed in a manner that teaches HCPs to work with, and understand, the AI they use in their clinical practice. Furthermore, a level of baseline competencies in AI should allow trainees to make significant contributions to health policy decisions related to their scope of practice [50]. AI will likely contribute significantly to the medical practice of the future; therefore, fundamentals and applications of AI tools and terminologies should be integrated into medical school curricula. Specifically, training current and future physicians on how to use these tools to provide quality health care, while taking into account the limitations and ethical implications of such technologies, will be useful [43]. In addition to medical learners and physicians, medical teachers need to be trained to deliver this innovative AI curriculum content; this is a shift that needs to occur without delay, given the steep learning curve ahead [36]. Paranjape et al [41] recommended a staged approach to educating future care providers about AI and its application in health care that spans from undergraduate to continuing medical education.

On the basis of the findings of this review, an ideal flow of AI concepts could be split across the 3 stages of medical education defined by Oxford Medicine: undergraduate medical education, postgraduate medical education, and continuing professional development (Figure 3) [72]. Undergraduate medical education should be focused on HCPs becoming familiar with AI terminology, the fundamentals of ML and data science, capabilities of AI, and how to identify opportunities and applications in health where AI would be appropriate with a health equity lens. During postgraduate medical education, emphasis should be placed on how to engage in validation and prospective evaluation of models, as well as deployment. Ethical and legal considerations, including governance strategy development, should be explored in more depth. Finally, during continuing professional development, providers should be involved in facilitating ethical and societal discussions, teaching AI courses, and keeping abreast of new AI knowledge and skills as well as teaching methods.

Figure 3. Ideal flow of key concepts for AI education curricula. The terms have been defined in the Results section. AI: artificial intelligence; ML: machine learning.
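The staged flow proposed above can be summarized as a mapping from education stage to key concepts (an illustrative sketch paraphrasing Figure 3, not a normative curriculum; the structure and names are ours):

```python
# Key AI concepts per stage of medical education, paraphrased from Figure 3.
CURRICULUM_FLOW = {
    "undergraduate medical education": [
        "AI terminology",
        "fundamentals of ML and data science",
        "capabilities of AI",
        "identifying appropriate applications through a health equity lens",
    ],
    "postgraduate medical education": [
        "model validation and prospective evaluation",
        "model deployment",
        "ethical and legal considerations",
        "governance strategy development",
    ],
    "continuing professional development": [
        "facilitating ethical and societal discussions",
        "teaching AI courses",
        "keeping abreast of new AI knowledge, skills, and teaching methods",
    ],
}
```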
Principle 4: Patient-Clinician Interaction

In the age of AI-enabled care, HCPs must consider the potential impact on the patient-clinician interaction, as well as strategies for improving the quality of care delivered in a technology-enabled environment [13,27]. Li et al [13] stated that health professions education should teach and cultivate altruism and compassion, skills unique to humans that are integral to the emergence of AI applications; doing so will help ensure that HCPs are not disrupted by novel tools. To equip themselves to use AI in practice, care providers should develop competencies that allow them to differentiate between credible and false information in their delivery of care [40]. As in other industries, the challenge of adopting and implementing AI in health care will produce winners and laggards [48]. For AI to be adopted successfully, HCPs should engage with their patients, because these interactions will be important complements to the technical expertise of AI as it transforms the health care milieu [48].

Limitations

Our scoping review findings should be examined in the context of the following limitations. Because of the nature of the scoping review, the quality of each identified study was not assessed. Given the nature of the topic being investigated, we excluded studies that discussed AI as a tool for medical education or continuing professional development. Only studies in English were included. In addition, the educational approaches varied across the studies; thus, we were unable to conduct formal comparisons among the curricula to determine which were effective. However, reviewing the literature enabled us to identify the gaps in current education programs and provide insights and best practices to guide future education efforts. As this review was inclusive of all types of studies and focused on a breadth of literature, the depth in reporting of education program details was inconsistent and varied based on the scope of the study.

Conclusions

With the inevitable progression of health care digitization, health professions education should foster unique human abilities, which will complement these emerging technologies. This review provided an overview of the current state of AI in health professions education and future directions on preparing care providers for the era of AI in health care. Future education efforts should focus on the development of regulatory strategies, a multidisciplinary approach to curriculum redesign, a competency-based curriculum, and patient-clinician interaction.

Acknowledgments

Accelerating the appropriate adoption of artificial intelligence in health care through building new knowledge, skills, and capacities in the Canadian health care professions is funded by the Government of Canada’s Future Skills Centre.

Accélérer l'adoption appropriée de l'intelligence artificielle dans la santé en développant de nouvelles connaissances, compétences et capacités pour les professionnels de santé canadiens est financé par le Centre des Compétences Futures du gouvernement du Canada.

Authors' Contributions

RC led the conceptualization, design, and execution of the review. RC, TJ, and SY collaborated on the numeric and thematic analyses, drafting, and finalization of the manuscript. MA developed the search strategy and conducted the search. In addition to RC, TJ, and SY, DAM, SH, SW, and TT contributed to the identification of papers and screening. DW and ED provided guidance on the conceptualization and design of the study. DW, ED, MS, and WT contributed to the development of ideas that were instrumental in surfacing and maturing many of the concepts contained in this study. They also served as content experts in validating the findings and revising all drafts of this manuscript for important intellectual content and clarity. All authors have revised drafts of this manuscript as well as read and approved the final manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Database search strategies.

DOCX File , 43 KB

  1. Brinker S. Martec's Law: technology changes exponentially, organizations change logarithmically. Chief Martec. 2013.   URL: https://chiefmartec.com/2013/06/martecs-law-technology-changes-exponentially-organizations-change-logarithmically/ [accessed 2021-10-20]
  2. Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: systematic review. JMIR Med Educ 2020 Jun 30;6(1):e19285 [FREE Full text] [CrossRef] [Medline]
  3. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019 Jan;25(1):44-56. [CrossRef] [Medline]
  4. Berner ES, McGowan JJ. Use of diagnostic decision support systems in medical education. Methods Inf Med 2010;49(4):412-417. [CrossRef] [Medline]
  5. SFR-IA Group, CERF, French Radiology Community. Artificial intelligence and medical imaging 2018: French Radiology Community white paper. Diagn Interv Imaging 2018 Nov;99(11):727-742 [FREE Full text] [CrossRef] [Medline]
  6. Mattessich S, Tassavor M, Swetter SM, Grant-Kels JM. How I learned to stop worrying and love machine learning. Clin Dermatol 2018;36(6):777-778. [CrossRef] [Medline]
  7. Meek RD, Lungren MP, Gichoya JW. Machine learning for the interventional radiologist. AJR Am J Roentgenol 2019 Oct;213(4):782-784. [CrossRef] [Medline]
  8. The impact of digital health technologies on the future of medical specialties in one infographic. The Medical Futurist.   URL: https://medicalfuturist.com/towards-creativity-in-healthcare-the-impact-of-digital-technologies-on-medical-specialties-in-an-infographic/ [accessed 2021-10-20]
  9. How artificial intelligence is transforming the world. Brookings. 2018.   URL: https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/ [accessed 2021-11-24]
  10. Fridsma DB. Health informatics: a required skill for 21st century clinicians. BMJ 2018 Jul 12;362:k3043. [CrossRef] [Medline]
  11. Collado-Mesa F, Alvarez E, Arheart K. The role of artificial intelligence in diagnostic radiology: a survey at a single radiology residency training program. J Am Coll Radiol 2018 Dec;15(12):1753-1757. [CrossRef] [Medline]
  12. Moore JH, Boland MR, Camara PG, Chervitz H, Gonzalez G, Himes BE, et al. Preparing next-generation scientists for biomedical big data: artificial intelligence approaches. Per Med 2019 May 01;16(3):247-257 [FREE Full text] [CrossRef] [Medline]
  13. Li D, Kulasegaram K, Hodges BD. Why we needn't fear the machines: opportunities for medicine in a machine learning world. Acad Med 2019 May;94(5):623-625. [CrossRef] [Medline]
  14. Han E, Yeo S, Kim M, Lee Y, Park K, Roh H. Medical education trends for future physicians in the era of advanced technology and artificial intelligence: an integrative review. BMC Med Educ 2019 Dec 11;19(1):460 [FREE Full text] [CrossRef] [Medline]
  15. Singh RP, Hom GL, Abramoff MD, Campbell JP, Chiang MF, AAO Task Force on Artificial Intelligence. Current challenges and barriers to real-world artificial intelligence adoption for the healthcare system, provider, and the patient. Transl Vis Sci Technol 2020 Aug 11;9(2):45 [FREE Full text] [CrossRef] [Medline]
  16. Varghese J. Artificial intelligence in medicine: chances and challenges for wide clinical adoption. Visc Med 2020 Dec;36(6):443-449 [FREE Full text] [CrossRef] [Medline]
  17. Holm EA. In defense of the black box. Science 2019 Apr 05;364(6435):26-27. [CrossRef] [Medline]
  18. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med 2019 Jan;25(1):30-36 [FREE Full text] [CrossRef] [Medline]
  19. Cox M, Blouin AS, Cuff P, Paniagua M, Phillips S, Vlasses PH. The role of accreditation in achieving the quadruple aim. National Academy of Medicine. 2017.   URL: https://nam.edu/the-role-of-accreditation-in-achieving-the-quadruple-aim/ [accessed 2021-10-20]
  20. Hoque M. Three domains of learning: cognitive, affective and psychomotor. 2017.   URL: https://www.researchgate.net/publication/330811334_Three_Domains_of_Learning_Cognitive_Affective_and_Psychomotor [accessed 2021-10-20]
  21. Shen N, Yufe S, Saadatfard O, Sockalingam S, Wiljer D. Rebooting Kirkpatrick: integrating information system theory into the evaluation of web-based continuing professional development interventions for interprofessional education. J Contin Educ Health Prof 2017;37(2):137-146. [CrossRef] [Medline]
  22. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005 Feb;8(1):19-32. [CrossRef]
  23. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018 Oct 02;169(7):467-473 [FREE Full text] [CrossRef] [Medline]
  24. PRISMA for scoping reviews. PRISMA.   URL: http://www.prisma-statement.org/Extensions/ScopingReviews [accessed 2021-10-20]
  25. Colquhoun HL, Levac D, O'Brien KK, Straus S, Tricco AC, Perrier L, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol 2014 Dec;67(12):1291-1294. [CrossRef] [Medline]
  26. Wood MJ, Tenenholtz NA, Geis JR, Michalski MH, Andriole KP. The need for a machine learning curriculum for radiologists. J Am Coll Radiol 2019 May;16(5):740-742. [CrossRef] [Medline]
  27. Wiljer D, Hakim Z. Developing an artificial intelligence-enabled health care practice: rewiring health care professions for better care. J Med Imaging Radiat Sci 2019 Dec;50(4 Suppl 2):S8-14. [CrossRef] [Medline]
  28. Wartman SA, Combs CD. Medical education must move from the information age to the age of artificial intelligence. Acad Med 2018 Aug;93(8):1107-1109. [CrossRef] [Medline]
  29. Wartman S, Combs C. Reimagining medical education in the age of AI. AMA J Ethics 2019 Feb 01;21(2):E146-E152 [FREE Full text] [CrossRef] [Medline]
  30. Wartman SA. The empirical challenge of 21st-century medical education. Acad Med 2019 Oct;94(10):1412-1415. [CrossRef] [Medline]
  31. Topaz M, Pruinelli L. Big data and nursing: implications for the future. Stud Health Technol Inform 2017;232:165-171. [Medline]
  32. Thompson RF, Valdes G, Fuller CD, Carpenter CM, Morin O, Aneja S, et al. Artificial intelligence in radiation oncology: a specialty-wide disruptive transformation? Radiother Oncol 2018 Dec;129(3):421-426. [CrossRef] [Medline]
  33. Tang A, Tam R, Cadrin-Chênevert A, Guest W, Chong J, Barfett J, Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group. Canadian association of radiologists white paper on artificial intelligence in radiology. Can Assoc Radiol J 2018 May;69(2):120-135 [FREE Full text] [CrossRef] [Medline]
  34. Tajmir SH, Alkasab TK. Toward augmented radiologists: changes in radiology education in the era of machine learning and artificial intelligence. Acad Radiol 2018 Jun;25(6):747-750. [CrossRef] [Medline]
  35. Sybenga A, Zreik RT, Mohammad A, Rao A. Big Data: bioinformatics education during residency demonstrates immediate value. Nature Publishing Group.   URL: https://www.nature.com/articles/labinvest20168.pdf?proof=t [accessed 2021-11-24]
  36. Srivastava TK, Waghmare L. Implications of Artificial Intelligence (AI) on dynamics of medical education and care: a perspective. J Clin Diagn Res 2020 Mar;14(3):1-2. [CrossRef]
  37. Sit C, Srinivasan R, Amlani A, Muthuswamy K, Azam A, Monzon L, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging 2020 Feb 05;11(1):14 [FREE Full text] [CrossRef] [Medline]
  38. Saqr M, Tedre M. Should we teach computational thinking and big data principles to medical students? Int J Health Sci (Qassim) 2019;13(4):1-2 [FREE Full text] [Medline]
  39. Sánchez-Mendiola M, Martínez-Franco AI, Lobato-Valverde M, Fernández-Saldívar F, Vives-Varela T, Martínez-González A. Evaluation of a Biomedical Informatics course for medical students: a pre-posttest study at UNAM Faculty of Medicine in Mexico. BMC Med Educ 2015 Apr 01;15:64 [FREE Full text] [CrossRef] [Medline]
  40. Park SH, Do K, Kim S, Park JH, Lim Y. What should medical students know about artificial intelligence in medicine? J Educ Eval Health Prof 2019;16:18 [FREE Full text] [CrossRef] [Medline]
  41. Paranjape K, Schinkel M, Nannan Panday R, Car J, Nanayakkara P. Introducing artificial intelligence training in medical education. JMIR Med Educ 2019 Dec 03;5(2):e16048 [FREE Full text] [CrossRef] [Medline]
  42. Nguyen GK, Shetty AS. Artificial intelligence and machine learning: opportunities for radiologists in training. J Am Coll Radiol 2018 Sep;15(9):1320-1321. [CrossRef] [Medline]
  43. McCoy LG, Nagaraj S, Morgado F, Harish V, Das S, Celi LA. What do medical students actually need to know about artificial intelligence? NPJ Digit Med 2020 Jun 19;3:86 [FREE Full text] [CrossRef] [Medline]
  44. Mathur P, Burns M. Artificial intelligence in critical care. Int Anesthesiol Clin 2019;57(2):89-102. [CrossRef] [Medline]
  45. Matheny ME, Whicher D, Thadaney Israni S. Artificial intelligence in health care: a report from the national academy of medicine. JAMA 2020 Feb 11;323(6):509-510. [CrossRef] [Medline]
  46. Masters K. Artificial intelligence in medical education. Med Teach 2019 Sep;41(9):976-980. [CrossRef] [Medline]
  47. Kolachalama VB, Garg PS. Machine learning and medical education. NPJ Digit Med 2018 Sep 27;1:54 [FREE Full text] [CrossRef] [Medline]
  48. Kobayashi Y, Ishibashi M, Kobayashi H. How will "democratization of artificial intelligence" change the future of radiologists? Jpn J Radiol 2019 Jan;37(1):9-14. [CrossRef] [Medline]
  49. Kinnear B, Hagedorn P, Kelleher M, Ohlinger C, Tolentino J. Integrating Bayesian reasoning into medical education using smartphone apps. Diagnosis (Berl) 2019 Jun 26;6(2):85-89. [CrossRef] [Medline]
  50. Kang SK, Lee CI, Pandharipande PV, Sanelli PC, Recht MP. Residents' introduction to comparative effectiveness research and big data analytics. J Am Coll Radiol 2017 Apr;14(4):534-536 [FREE Full text] [CrossRef] [Medline]
  51. Kang J, Thompson RF, Aneja S, Lehman C, Trister A, Zou J, et al. National cancer institute workshop on artificial intelligence in radiation oncology: training the next generation. Pract Radiat Oncol 2021;11(1):74-83 [FREE Full text] [CrossRef] [Medline]
  52. Jeffery A. ANI emerging leader project: identifying challenges and opportunities in nursing data science. Comput Inform Nurs 2019 Jan;37(1):1-3. [CrossRef] [Medline]
  53. Gorman D, Kashner TM. Medical graduates, truthful and useful analytics with big data, and the art of persuasion. Acad Med 2018 Aug;93(8):1113-1116. [CrossRef] [Medline]
  54. Foster M, Tasnim Z. Data science and graduate nursing education: a critical literature review. Clin Nurse Spec 2020;34(3):124-131. [CrossRef] [Medline]
  55. Forney MC, McBride AF. Artificial intelligence in radiology residency training. Semin Musculoskelet Radiol 2020 Feb;24(1):74-80. [CrossRef] [Medline]
  56. Evans J, Banerjee A. Global health and data science: future needs for tomorrow’s cardiologist. Br J Cardiol 2016 Aug:87-88. [CrossRef]
  57. Chan KS, Zary N. Applications and challenges of implementing artificial intelligence in medical education: integrative review. JMIR Med Educ 2019 Jun 15;5(1):e13930 [FREE Full text] [CrossRef] [Medline]
  58. Chamunyonga C, Edwards C, Caldwell P, Rutledge P, Burbery J. The impact of artificial intelligence and machine learning in radiation therapy: considerations for future curriculum enhancement. J Med Imaging Radiat Sci 2020 Jun;51(2):214-220. [CrossRef] [Medline]
  59. Brouillette M. AI added to the curriculum for doctors-to-be. Nat Med 2019 Dec;25(12):1808-1809. [CrossRef] [Medline]
  60. Briganti G, Le Moine O. Artificial intelligence in medicine: today and tomorrow. Front Med (Lausanne) 2020 Feb 5;7:27 [FREE Full text] [CrossRef] [Medline]
  61. Bhavnani SP, Muñoz D, Bagai A. Data science in healthcare: implications for early career investigators. Circ Cardiovasc Qual Outcomes 2016 Nov;9(6):683-687. [CrossRef] [Medline]
  62. Barbour AB, Frush JM, Gatta LA, McManigle WC, Keah NM, Bejarano-Pineda L, et al. Artificial intelligence in health care: insights from an educational forum. J Med Educ Curric Dev 2020 Jan 28;6:2382120519889348 [FREE Full text] [CrossRef] [Medline]
  63. Balthazar P, Tajmir SH, Ortiz DA, Herse CC, Shea LA, Seals KF, et al. The Artificial Intelligence Journal Club (#RADAIJC): a multi-institutional resident-driven web-based educational initiative. Acad Radiol 2020 Jan;27(1):136-139. [CrossRef] [Medline]
  64. Strosahl K. Training behavioral health and primary care providers for integrated care: a core competencies approach. In: Behavioral Integrative Care: Treatments That Work in the Primary Care Setting. New York: Brunner-Routledge; 2005.
  65. Schneeweiss S, Ahmed S, Burhan A, Campbell C. Competency-based CPD: implications for physicians, CPD providers and health care institutions. Canada: Royal College of Physicians and Surgeons of Canada.   URL: https://journals.sagepub.com/doi/abs/10.1177/1039856219859279 [accessed 2021-10-20]
  66. Sargeant J, Bhanji F, Holmboe E, Kassen B, McFadyen R, Mazurek K. Assessment and feedback for continuing competence and enhanced expertise in practice. Canada: Royal College of Physicians and Surgeons of Canada. [accessed 2021-10-20]
  67. Ghassemi M, Goldenberg A, Morris Q, Rudzicz F, Wang B, Zemel R. Accessible data, health AI and the human right to benefit from science and its applications. Health Law Canada 2019 Aug;40(1):38.
  68. O'Doherty D, Dromey M, Lougheed J, Hannigan A, Last J, McGrath D. Barriers and solutions to online learning in medical education - an integrative review. BMC Med Educ 2018 Jun 07;18(1):130 [FREE Full text] [CrossRef] [Medline]
  69. Park JY, Mills KA. Enhancing interdisciplinary learning with a learning management system. MERLOT J Online Learn Teach 2014 Jun;10(2):299-313.
  70. Global strategy on digital health 2020-2025. World Health Organization. 2021.   URL: https://www.who.int/docs/default-source/documents/gs4dhdaa2a9f352b0445bafbc79ca799dce4d.pdf [accessed 2021-10-20]
  71. Wiens J, Saria S, Sendak M, Ghassemi M, Liu VX, Doshi-Velez F, et al. Do no harm: a roadmap for responsible machine learning for health care. Nat Med 2019 Sep;25(9):1337-1340. [CrossRef] [Medline]
  72. Westerman M, Teunissen P. Oxford Textbook of Medical Education. Oxford, UK: Oxford University Press; 2013.


Abbreviations

AI: artificial intelligence
HCP: health care professional
ML: machine learning
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses


Edited by G Eysenbach; submitted 07.06.21; peer-reviewed by S Purkayastha, S You, Y Zidoun; comments to author 08.09.21; revised version received 04.10.21; accepted 04.10.21; published 13.12.21

Copyright

©Rebecca Charow, Tharshini Jeyakumar, Sarah Younus, Elham Dolatabadi, Mohammad Salhia, Dalia Al-Mouaswas, Melanie Anderson, Sarmini Balakumar, Megan Clare, Azra Dhalla, Caitlin Gillan, Shabnam Haghzare, Ethan Jackson, Nadim Lalani, Jane Mattson, Wanda Peteanu, Tim Tripp, Jacqueline Waldorf, Spencer Williams, Walter Tavares, David Wiljer. Originally published in JMIR Medical Education (https://mededu.jmir.org), 13.12.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.