Review
Abstract
Background: As the adoption of artificial intelligence (AI) in health care increases, it will become increasingly crucial to involve health care professionals (HCPs) in developing, validating, and implementing AI-enabled technologies. However, because of a lack of AI literacy, most HCPs are not adequately prepared for this revolution. This is a significant barrier to adopting and implementing AI that will affect patients. In addition, the limited existing AI education programs face barriers to development and implementation at various levels of medical education.
Objective: With a view to informing future AI education programs for HCPs, this scoping review aims to provide an overview of current and past AI education programs as they pertain to curricular content, modes of delivery, critical implementation factors for education delivery, and the outcomes used to assess the programs’ effectiveness.
Methods: After the creation of a search strategy and keyword searches, a 2-stage screening process was conducted by 2 independent reviewers to determine study eligibility. When consensus was not reached, the conflict was resolved by consulting a third reviewer. This process consisted of a title and abstract scan and a full-text review. The articles were included if they discussed an actual training program or educational intervention, or a potential training program or educational intervention and the desired content to be covered, focused on AI, and were designed or intended for HCPs (at any stage of their career).
Results: Of the 10,094 unique citations screened, 41 (0.41%) studies met our eligibility criteria. Among the 41 included studies, 10 (24%) described 13 unique programs and 31 (76%) discussed recommended curricular content. The curricular content of the unique programs spanned the use of AI, the interpretation of AI outputs, and the skills needed to explain results derived from AI algorithms. The curricular topics were categorized into three main domains: cognitive, psychomotor, and affective.
Conclusions: This review provides an overview of the current landscape of AI in medical education and highlights the skills and competencies required by HCPs to effectively use AI in enhancing the quality of care and optimizing patient outcomes. Future education efforts should focus on the development of regulatory strategies, a multidisciplinary approach to curriculum redesign, a competency-based curriculum, and patient-clinician interaction.
doi:10.2196/31043
Keywords
Introduction
Background
The widespread and rapid adoption of artificial intelligence (AI) technologies in health sciences, education, and practices introduces new ways of delivering patient care [
]. AI is a broad term within computer science that encompasses technologies capable of incorporating human-like perception, intelligence, and problem-solving into complex machines [ ]. Big data in health care, along with high-performance computing power, has enabled the use of AI, machine learning (ML), and deep learning, in particular, to improve clinical decision-making and health sector efficiency [ ]. More recently, AI-enabled technologies have continued to emerge, predominantly in the medical fields of radiology, anesthesiology, dermatology, surgery, and pharmacy [ - ]. Although AI is not likely to replace clinical reasoning, Mesko [ ] predicts that AI will influence all specialties to varying degrees, depending on the nature of the practice (eg, the degree of repetitive tasks involved and whether the tasks are data driven). However, the efficacy of AI-enabled technologies in health care depends on the involvement of health care professionals (HCPs) in developing and validating these technologies. Therefore, HCPs should play a role in this transformation and be involved in every aspect of shaping how AI adoption will affect their specialties and organizations.
Recommendations for HCP involvement are emerging. For instance, in the field of medical imaging, West and Allen [
] recommend that HCPs be involved in (1) implementing data standards and following them in practice, (2) prioritizing use cases of AI in medicine, (3) determining the clinical impact of potential algorithms, (4) describing and articulating the needs of the profession for data scientists and researchers, and (5) participating in the translation of practice needs from human language into machine language. As these technologies emerge, it is essential for HCPs and educators to have the competencies required to rapidly develop and incorporate these changes into their practices and disciplines.
At the individual level, a lack of AI literacy is a significant barrier to the adoption and use of AI-enabled technologies to their full capacity in various medical specialties. In AI education programs specifically, there are barriers to implementation at various levels of medical education (undergraduate, postgraduate, practice-based education, or continuing professional development). For instance, health informatics plays a valuable role in modern medicine; yet, it is not the focus of most medical school curricula [
]. Technology experts are often consulted to provide training on the use of electronic clinical tools, but this does not support the level of skill required to understand how these tools could be used to enhance patient interactions and improve care [ ]. Another example exists within radiology residency programs, where a lack of awareness and a lack of knowledge about implementing and using AI were cited as barriers to its adoption [ , ]. Incorporating AI fundamentals into health professionals’ curricula is essential, and it would be useful to balance this knowledge with providing patient-centered care by empowering future HCPs to consider AI in the context of their own clinical judgment. The combination of trust in their own judgment and basic statistical knowledge will be useful in understanding how to best apply new AI-driven technologies in clinical practice [ ]. AI needs to be considered within the context of HCPs’ broader skill sets, priorities, and ultimate goals in health care; this includes encouraging patient-centered, compassionate care in clinical practice [ , ].
Martec’s Law refers to the idea that technology changes occur much more rapidly, and in fact exponentially, compared with the ability of organizations to adopt these technologies [
]. Therefore, organizations need to promote innovative technologies proactively and empower their professionals to be adequately trained to successfully implement AI-based tools in their practice [ ]. A concerted, deliberate approach is required to incorporate these new technologies, both effectively and compassionately, at an individual level and within the culture and operations of an organization [ ].
A number of potential barriers to implementing these technologies exist; the 3 main limitations identified include regulatory, economic, and organizational culture issues [
]. Regulatory approval [ ] is needed to adopt AI technologies in clinical settings, and potential liabilities in using these technologies for patient care must be considered, as well as the safety, efficacy, and transparency of AI algorithms for clinical decision-making [ , ]. Regulatory issues can also come into play when it comes to accessing data for AI adoption; multi-institution data sharing is required for algorithm improvement and validation, as well as the accompanying research ethics board and regulatory approvals [ ]. To further improve adoption, these technologies will also have to be economical, supported by adequate funding [ ], and be seen as valuable by the organization itself. At an organizational level, the use of AI should align with the goals and strategic plans of an organization; organizations will need to assess how well the AI technology will integrate into existing systems, including data warehouses and electronic health records [ ]. It may be difficult to generalize a particular AI model across different clinical contexts to a degree that would prove valuable at an organizational level while still working seamlessly and being clinically useful at the individual level [ ]. Furthermore, when choosing to adopt AI technologies, organizations can either collaborate with outside vendors or create the technologies in-house, which will require the use of additional human and material resources [ ].
Objective
Deficits in AI education may be contributing to a lack of capacity in health care systems to fully integrate and adopt AI technologies to improve patient care, despite calls for AI integration as part of the National Academy of Medicine’s Quintuple Aim Model [
]. It is important to equip health care organizations and their stakeholders with the cognitive, psychomotor, and affective skills to harness AI in enhancing and optimizing the delivery of care. This will also involve supporting AI education initiatives that are widely available to all types of HCPs. To support future AI education development, dissemination, and evaluation, it is important to assess the current state of AI adoption in health care and further understand the extent of AI education implementation, including who is receiving AI training or education, what content is covered, how it is delivered, and whether this reflects what experts believe AI education curricula should include. Therefore, this scoping review aims to establish a foundational understanding of education programs on AI for HCPs by determining the following:
- What were the most effective educational approaches to enabling HCPs to harness AI in enhancing and optimizing health care delivery?
- What curricular content was delivered?
- What was the scope of content that should be delivered?
- What learning objectives were used in these approaches, using the taxonomy for learning formulated by Bloom [ ]?
- What were the enablers or barriers that contributed to the success of these programs and the implementation of AI curricula in health care education programs?
- What outcomes were used to assess the effectiveness of the education programs, using the Kirkpatrick-Barr Framework [ ]?
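To make the two frameworks named in the last two questions concrete, the minimal sketch below (ours, not part of the review protocol) enumerates Bloom's three learning domains and the Kirkpatrick-Barr outcome levels using the descriptions given later in this review; the phrasing of levels 3 and 4 is our paraphrase of the text.

```python
# Illustrative sketch (not part of the review protocol): the two classification
# schemes used in this review, with descriptions drawn from the text.
from enum import Enum


class BloomDomain(Enum):
    """Taxonomy for learning formulated by Bloom: three domains of objectives."""
    COGNITIVE = "knowledge learners should have"
    PSYCHOMOTOR = "skills learners should demonstrate and master"
    AFFECTIVE = "attitudes learners should develop and incorporate into practice"


class KirkpatrickBarrLevel(Enum):
    """Kirkpatrick-Barr educational outcome levels referenced in this review."""
    LEVEL_1 = "learner reaction and satisfaction with the education"
    LEVEL_2A = "change in attitude"
    LEVEL_2B = "change in knowledge or skill"
    LEVEL_3 = "change in behavior"
    LEVEL_4 = "change at the organizational level or in patient outcomes"


# Example: tagging one hypothetical program objective and one study outcome.
print(BloomDomain.COGNITIVE.value)
print(KirkpatrickBarrLevel.LEVEL_2B.value)
```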
Methods
Overview
This scoping review followed the Arksey and O’Malley [
] guidelines and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Extension for Scoping Reviews checklist [ , ]. The objective of this scoping review is to examine and summarize the extant literature on AI education and training for HCPs.
Stage 1: Search Strategy
A health sciences librarian (MA) developed strategies for Ovid MEDLINE All, Ovid Embase, Ovid APA PsycINFO, Ovid Emcare Nursing, Ovid Cochrane Database of Systematic Reviews, Ovid Cochrane Central Register of Controlled Trials, EBSCO ERIC, and Clarivate Web of Science using appropriate subject headings and keywords for AI and health professions education. As a result of the widespread use of terms relating to health professions and education in health sciences literature, the decision was made to focus the searches on health professions education concepts to reduce noise in the results sets. Searches for these subject headings were limited to where they were the major subject heading (the most important subject heading in the database record for an item). Keywords for these concepts were only searched in the study titles, the author-assigned keywords, heading words, and journal titles, depending on the content and field availability of the database. No language or date limits were applied. The searches were run and the results were downloaded on July 7, 2020. For the complete strategies, see
. If the search results included conference abstracts and proceedings, a subsequent search to find any corresponding follow-up studies was conducted in Google Scholar. Finally, pearl growing, also known as a hand comb process, was conducted where all cited works in the included studies from the initial screening underwent a 2-stage screening process (title and abstract scan as well as full-text review).
Stage 2: Study Selection
The 2-stage screening process consisted of (1) title and abstract scan and (2) full-text review. Study eligibility was determined by 2 independent reviewers, and a third reviewer was involved to resolve any conflict when consensus was not reached between the 2 reviewers. For a study to be included for full-text review and to be chosen for subsequent inclusion, the title and abstract at each stage needed to have the following attributes:
- It discussed an actual training program or educational intervention or potential training program or educational intervention and the desired content to be covered.
- It focused on AI.
- It was designed or intended for HCPs (at any stage of their career).
A pilot review of 20% (595/2973) of the MEDLINE citations was conducted to establish interrater reliability. The interrater reliability threshold was set at a Cohen κ value of 0.70, indicating substantial agreement; additional batches of 50 citations were reviewed until the threshold was met.
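As an illustration of this reliability check, the following minimal sketch (ours, not the authors' code) computes Cohen κ from two reviewers' include or exclude decisions and compares it against the 0.70 threshold; the decision lists are hypothetical placeholders.

```python
# Minimal sketch of the pilot interrater reliability check: Cohen's kappa for
# two reviewers' include/exclude decisions on the same batch of citations.
from collections import Counter


def cohen_kappa(reviewer_a: list[str], reviewer_b: list[str]) -> float:
    """Compute Cohen's kappa for two raters who screened the same items in order."""
    assert len(reviewer_a) == len(reviewer_b) and reviewer_a
    n = len(reviewer_a)
    observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n
    # Chance agreement estimated from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(reviewer_a), Counter(reviewer_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)


# Hypothetical decisions for a small pilot batch (the review's pilot used 595
# MEDLINE citations, ie, 20% of 2973; a real batch would be screened the same way).
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude", "exclude"]

kappa = cohen_kappa(reviewer_1, reviewer_2)
print(f"Cohen kappa = {kappa:.2f}; threshold of 0.70 met: {kappa >= 0.70}")
```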
Stage 3: Data Collection
A standardized charting form was developed to capture the following domains: article details, study details (if the publication was an empirical study), education program details, and implementation factors. The subdivisions of the domains for the data extraction are outlined in the table below; an illustrative sketch of the charting form follows the table.
Domain | Subdomain
Article details | Article type, year, and country
Study details | Study design, participants, intervention, comparator, primary outcomes, and secondary outcomes
Education program details | Name, setting, participants, program delivery and curriculum, program instructors (discipline), program length, and instructor training
Implementation factors | Implementation enablers or facilitators, implementation barriers, and recommendations
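As a minimal sketch of what such a charting form could look like in practice (our illustration; the authors' actual extraction template is not reproduced here), the domains and subdomains above can be mirrored in a simple data structure:

```python
# Illustrative sketch of the data-charting form as a plain dictionary; the
# domain and subdomain names mirror the table above, but the field layout and
# use of None as a placeholder are our assumptions, not the authors' template.
charting_form = {
    "article_details": {"article_type": None, "year": None, "country": None},
    "study_details": {
        "study_design": None,
        "participants": None,
        "intervention": None,
        "comparator": None,
        "primary_outcomes": None,
        "secondary_outcomes": None,
    },
    "education_program_details": {
        "name": None,
        "setting": None,
        "participants": None,
        "program_delivery_and_curriculum": None,
        "program_instructors_discipline": None,
        "program_length": None,
        "instructor_training": None,
    },
    "implementation_factors": {
        "enablers_or_facilitators": None,
        "barriers": None,
        "recommendations": None,
    },
}

# One blank record would be completed per included article.
print(sorted(charting_form.keys()))
```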
Stage 4: Synthesizing and Reporting the Results
To collate, summarize, and report on the included studies in this review, a narrative synthesis approach was used [
]. This included a numeric summary using descriptive statistics to report each domain (article details, study details, education program details, and implementation factors). For program curriculum under education program details, curriculum topics were inductively coded. Once a list of topics was generated, they were then grouped by domain using the taxonomy for learning formulated by Bloom. There are 3 domains: (1) cognitive, which refers to knowledge that learners should have, (2) psychomotor, which refers to skills learners should demonstrate and master, and (3) affective, which refers to attitudes learners should develop and incorporate into their practice [ ].
The study outcomes were deductively coded using the Kirkpatrick-Barr Framework [ ] of educational outcomes. This framework was selected because it provided a standardized method of categorizing the type of educational outcomes reported by each study. The implementation factors subdomain was thematically analyzed by 2 independent reviewers using a priori codes. The reviewers compared coding schemes and iteratively determined overarching themes to frame their findings. For content validation, the project team members, patients, and experts in the fields of medical education and AI provided feedback on the thematic analysis.
Results
Search Results
The initial database search yielded 13,449 results; once duplicates were removed, the titles and abstracts of 10,094 (75.05%) unique citations were identified. From the 10,094 unique citations, we identified 41 (0.41%) articles [
, , , - ], where 13 unique, existing programs [ , , , , , , , - ] were mentioned in 10 (24%) articles, and the remaining 31 (76%) articles [ , , , - , , , - , - , - , - , ] discussed the desired or recommended curricular content. The article selection process is presented in . Of the 10 articles that discussed an existing program, 8 (80%) were commentaries [ , , , , , - ], 1 (10%) was a case report [ ], and 1 (10%) was an empirical study [ ]. The following tables describe the characteristics of the articles and programs included in this review.
Characteristics | Frequency, n (%) | References
Study type
Commentary | 30 (73) | [ , - , - , , , - , , - , , , - ]
Review | 6 (15) | [ , , , , , ]
Empirical study | 3 (7) | [ , , ]
Case report | 1 (2) | [ ]
Best Evidence Medical Education Guide | 1 (2) | [ ]
Publication year
2013 | 1 (2) | [ ]
2016 | 3 (7) | [ , , ]
2017 | 2 (5) | [ , ]
2018 | 8 (20) | [ , , - , , , ]
2019 | 16 (39) | [ , , , , , , , - , , , , , , ]
2020 | 11 (27) | [ , , , , , , , , , , ]
Country
United States | 23 (56) | [ , , - , , , , , , , - , , , , - ]
Canada | 4 (10) | [ , , , ]
United Kingdom | 2 (5) | [ , ]
Other | 12 (29) | [ , , - , , , , , , ]
Characteristic | Frequency, n (%) | References
Program type
Workshop | 2 (15) | [ , ]
Fellowship | 3 (23) | [ , ]
Biomedical informatics course | 2 (15) | [ , ]
Data science course | 2 (8) | [ , ]
Joint course-based program | 1 (8) | [ ]
Educational summit | 1 (8) | [ ]
Certificate program | 1 (8) | [ ]
Artificial Intelligence Journal Club | 1 (8) | [ ]
Program setting
Medical school | 6 (46) | [ , , , , ]
Academic hospital | 4 (31) | [ , , ]
National | 1 (8) | [ ]
International | 2 (15) | [ , ]
Program length
>1 year | 2 (15) | [ , ]
>1 month | 2 (15) | [ , ]
>1 day | 1 (8) | [ ]
≤1 day | 2 (15) | [ , ]
Not reported | 6 (46) | [ , , ]
Program audience
Health care professionals | 12 (92) | [ , , , , , , , - ]
Researchers or clinician scientists | 2 (15) | [ , ]
Health care administrators | 1 (8) | [ ]
Other health disciplines | 1 (8) | [ ]
Continuum of learninga
Undergraduate medical education | 5 (39) | [ , , , ]
Postgraduate medical education | 8 (62) | [ , , , , , ]
Program objectivesb
Cognitive or psychomotor | 10 (77) | [ , , , , , , , , ]
Affective | 1 (8) | [ ]
Both | 2 (15) | [ , ]
Program methods
Didactic | 9 (69) | [ , , , , , , , , ]
Workshop | 2 (15) | [ , ]
Case-based | 2 (15) | [ , ]
Discussions | 2 (15) | [ , ]
Experiential learning | 5 (39) | [ , , ]
Web-based | 3 (23) | [ , , ]
Number of methods used
1 method | 5 (39) | [ , , , ]
2 methods | 5 (39) | [ , , , , ]
≥3 methods | 2 (15) | [ , ]
Study outcomesc
Level 1 | 3 (23) | [ , , ]
Level 2a | 3 (23) | [ , , ]
Level 2b | 2 (15) | [ , ]
None | 8 (62) | [ , , , , ]
aThere are no continuing medical education programs.
bCategorized based on the domains identified in the taxonomy for learning formulated by Bloom [ ].
cCategorized based on the education outcomes identified in the Kirkpatrick-Barr Framework [ ].
What Was the Mode of Delivery?
Summaries of the individual programs can be found in
the table below. Of the 13 programs, 8 (62%) originated from the United States [ , , , , , - ], 1 (8%) from Canada [ ], 1 (8%) from France [ ], and 1 (8%) from Mexico [ ]. The typology described by Strosahl [ ] was used to classify the educational method. Of the 13 programs, 9 (69%) had a didactic approach [ , , , , , , , ] in combination with discussions [ ] (1/13, 8%), web-based [ , ] (2/13, 15%), workshop and case-based [ ] (1/13, 8%), and experiential learning [ ] (1/13, 8%). Of the 13 programs, 10 (77%) were taught in an academic setting [ , , , , , , ].
Program name or first author; country; host institution; specialty; program length | Program setting | Curriculum delivery methods
Medical school | Academic hospital | National | International | Didactic | Workshop | Case-based | Discussion | Experiential learning | Web-based
Artificial Intelligence Journal Club; United States; American College of Radiology; Radiology; monthly for 1 hour [ ] | ✓ | ✓
Educational Summit; United States; Duke University Medical Center; NSa; <1 day [ ] | ✓ | ✓ | ✓
Health Care by Numbers; United States; New York University; NS; 3 years [ ] | ✓ | ✓ | ✓
Joint course-based program; France; Gustave Roussy with École des Ponts ParisTech and CentraleSupélec; NS; NRb [ ] | ✓ | ✓
Fellowship; United States; Emory University School of Medicine; Radiology; NR [ ] | ✓ | ✓
Fellowship; United States; Hospital of the University of Pennsylvania; Imaging Informatics; NR [ ] | ✓ | ✓
Elective courses; United States; Carle Illinois College of Medicine; NS; NR [ ] | ✓ | ✓ | ✓
Introduction to Comparative Effectiveness Research and Big Data Analytics for Radiology; United States; New York University School of Medicine; medical imaging; 2 days [ ] | ✓ | ✓ | ✓ | ✓
Kinnear; United States; University of Cincinnati; NS; <1 day [ ] | ✓ | ✓ | ✓ | ✓ | ✓
Computing for Medicine certificate program; Canada; University of Toronto, Faculty of Medicine; NS; 2 years [ ] | ✓ | ✓ | ✓
The National Autonomous University of Mexico, Faculty of Medicine, biomedical informatics education; Mexico; University of Mexico’s Faculty of Medicine; NS; 2 one-semester courses [ ] | ✓ | ✓ | ✓
Formalized bioinformatics education; United States; Baylor Scott and White Medical Center; medical imaging; NR [ ] | ✓ | ✓
National Cancer Institute–Food and Drug Administration Information Exchange and Data Transformation fellowship in oncology data science; United States; National Cancer Institute; medical imaging; NR [ ] | ✓ | ✓
aNS: specialty not specified.
bNR: not reported.
Target Audience
There were 3 types of HCPs identified in the 41 reviewed papers: physicians [
, , , , , ] (6/41, 15%), nurses [ , ] (2/41, 5%), and radiology technologists [ ] (1/41, 2%). In addition, 2 specific specialties were identified: medical imaging [ , , - , , , , , , , , , ] (13/41, 32%) and cardiology [ , ] (2/41, 5%), with others not being specified [ , , - , , , - , - , , - , , , , ] (26/41, 63%). illustrates the type of curriculum topics covered in the continuum of learning for clinicians, which includes undergraduate medical education [ , , - , , - , - , , , , , , , ] (20/41, 49%), postgraduate medical education [ , , - , , , - , - , , ] (19/41, 46%), and continuing professional development [ , ] (2/41, 5%). Other nonclinical professionals include researchers [ , , , , ] (5/41, 12%), health care administrators [ , , , , ] (5/41, 12%), and computer and data scientists [ , , , ] (4/41, 10%).
What Content Was Covered?
From these papers, the program curricula and the desired or recommended content included topics on using AI, interpreting AI, and explaining results from AI, as framed by McCoy et al [ ]. A description of each curricular topic can be found in the table below.
Of these 16 curricular topics, 9 (56%) fell under the cognitive domain, 6 (38%) under the psychomotor domain, and 1 (6%) under the affective domain (a brief arithmetic check of this split follows the topic table below); most of them were mentioned both by papers that described current education programs and by commentaries that discussed what HCPs should be learning. The curricular topics were categorized into the 3 domains identified in the taxonomy for learning formulated by Bloom [ ]. A second table displays the curricular topics that were unique to 24% (10/41) of the papers [ , , , , , , , - ] that described what AI programs currently teach, 76% (31/41) of the papers [ , , , - , , , - , - , - , - , ] that described what AI programs should teach as part of their curriculum, and those that outlined both what was taught and what should be taught.
Themes (framed by McCoy et al [ ]) and topic | Description | Number of studies | References
Using AIa
Fundamentals of AI | An overview of all stages of model development, translation, and use in clinical practice. Specifically, this would cover nomenclature and principles such as data collection and transformation, algorithm selection, model development, training and validation, and interpreting model output | 20 | [ , , , , , , , - , , , , , , , - , ]
Fundamentals of health care data science | Fundamental understanding of the environment supported by AI. This includes an overview of biostatistics, big data, data streams available, and how algorithms and machine learning use and process data | 20 | [ , , - , , , , , , , ]
Fundamentals of biomedical informatics | An overview of essential concepts such as nomenclature (information and knowledge taxonomy), structure and function of computers, information and communications technology, standards in biomedical informatics, and technology evaluation | 1 | [ ]
Multidisciplinary collaboration | Learning how to partner and communicate with experts in engineering and data science to ensure clinical relevance and accuracy of AI systems | 13 | [ , , , , , , - , , , ]
Applications of AI | Providing examples of AI that have been implemented in health care settings to understand the impact of technologies that incorporate AI | 11 | [ , , , , - , , , , ]
Implementation of AI in health care settings | Understanding how to embed AI tools into clinical settings and workflows. Specifically, this includes requirements for clinical translation and interpretation of model outputs | 9 | [ , , - , , , , ]
Strengths and limitations of AI | Understanding the value, pitfalls, weaknesses and potential errors or unintended consequences that may occur when using AI tools | 13 | [ , , - , , , , , , , , ]
Ethical considerations | Understanding and building awareness of ethics, equity, inclusion, patient rights, and confidentiality when using AI tools | 13 | [ , , - , , , , , , , , ]
Legal considerations and governance strategy | Understanding data governance principles, regulatory frameworks, legislation, policy on using data and AI tools, as well as liability or intellectual property issues | 7 | [ , , , , , , ]
Economic considerations | “Understanding of how business or clinical processes will be altered through the integration of AI technologies into health care” [ ] as well as commercialization | 2 | [ , ]
Interpreting results from AI
Medical decision-making | Understanding decision science and probabilities from AI diagnostic and therapeutic algorithms to then meaningfully apply them in clinical decision-making | 8 | [ , , - , , ]
Data visualization | Understanding how to present and describe outputs from AI tools | 4 | [ , , , ]
Product development projects | Hands on experience to develop, test, and validate AI algorithms with real medical data | 2 | [ , ]
Explaining results from AI
Communicating with patients | Mastering how to communicate results with patients in a personalized and meaningful way and discuss the use of AI in the medical decision-making process | 8 | [ , - , , , , ]
Compassion and empathy | Cultivating and expressing empathy and compassion when communicating with patients | 4 | [ - , ]
Critical appraisal | Understanding how to evaluate AI diagnostic and therapeutic algorithms | 7 | [ , , , , , , ]
aAI: artificial intelligence.
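As a quick arithmetic check of the Bloom-domain split reported before the table (9 cognitive, 6 psychomotor, and 1 affective topic out of 16), a minimal sketch:

```python
# Consistency check of the Bloom-domain split of the 16 curricular topics
# reported in the text: 9 cognitive, 6 psychomotor, 1 affective.
topic_counts = {"cognitive": 9, "psychomotor": 6, "affective": 1}
total = sum(topic_counts.values())  # 16

for domain, count in topic_counts.items():
    print(f"{domain}: {count}/{total} = {count / total:.0%}")
# cognitive: 9/16 = 56%, psychomotor: 6/16 = 38%, affective: 1/16 = 6%
```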
Competencies | What programs currently teach | Similarities between the current program and recommended program topics | What programs should teach |
Cognitive | Informatics | |
Psychomotor | Leadership | |
Affective | Perception of humanistic AI-enabled care | |
aEHR: electronic health record.
Cognitive Domain
Of the 41 papers, 20 (49%) [
, , , , , , - , , - , , , , - , ] highlighted the importance of providing HCPs with a baseline understanding of AI and 10 (24%) [ , , , , , , , - ] recommended teaching them AI applications. The studies focused on various applications of AI, including diagnostic systems, data gathering, assessment and use, clinical applications, and personalized care. In addition, many of the papers reported that medical curricula should integrate fundamentals of health care data science [ , , - , , , , , , ] (19/41, 46%), including, but not limited to, big data and bioinformatics. Matheny et al [ ] stated that data science curricula should encompass how to form multidisciplinary development teams to improve the value of AI and to be aware of the ethics, equity, diversity, and inclusion principles at play and the inadvertent ramifications that may result from AI implementation. The studies also focused on statistics, ML with model development, model translation and use in clinical knowledge, data extraction, and applications for visualization of patients. Familiarity with ML vocabulary and a basic understanding of the methodology (algorithms and how machines gather and process data) were deemed important to understand this rapidly emerging field.
Psychomotor Domain
Most of the papers focused on clinicians being able to effectively analyze the data [
, , , , , , , , , , , , , , ] (15/41, 37%) to identify trends and efficiency correlations. As highlighted by Balthazar et al [ ] and Forney and McBride [ ], it is imperative to learn how to evaluate the efficacy and precision of AI applications. This point was reinforced in a review conducted by Park et al [ ] that stated medical students should be able to validate the clinical accuracy of algorithms. HCPs will need to become accustomed to, and understand how to embrace, real-time health information to help make decisions in their practice setting [ ]. Of the 41 papers, 8 (20%) discussed the significance of understanding and interpreting the findings with a reasonable degree of accuracy, including awareness of source error, bias, or clinical irrelevance [ , - , , , ]. Moreover, the study findings described problem-solving [ , , ] (3/41, 7%) as a critical skill, entailing the management and application of several distinct resources. Clinicians will need to become adept in communicating the results and processes [ , - , , , , ] (8/41, 20%) with patients in a personalized and meaningful manner. Cultivating and expressing empathy and compassion [ - , ] (4/41, 10%) when communicating with the patient was emphasized in several studies.
Affective Domain
Of the 41 papers, 8 (20%) stressed that HCPs should have the attitude to harness AI tools effectively to improve outcomes for patients and their communities [
, , , , , , , ]. Wiljer and Hakim [ ] asserted the importance of breaking down widespread stereotypes about AI as an initial step. It is essential that professionals perceive AI as augmenting their delivery of care, rather than taking over different aspects of the health care system [ ]. Forney and McBride [ ] stated that clinicians are not as likely to perceive AI as a threat if they are able to see the wide array of AI tools and the impact these tools have on workflow and patient care. Furthermore, Sit et al [ ] mentioned that medical students are not as likely to be discouraged from pursuing certain specialties when they are presented with use cases and understand the boundaries of AI tools; almost half of the respondents believed the misconception that because of AI, certain specialists such as radiologists will become obsolete in the near future. Moreover, Brouillette [ ] mentioned the need for collaborative programs among medical students, computer science students, and engineering students, where they can better understand each other’s disciplines. A few papers recommended that future AI programs should integrate change management and establish a culture of trust and transparency with relevant stakeholders, which will help organizations more rapidly adopt and implement AI technologies within the health care ecosystem [ , ]. Thus, it is vital to help organizations manage change at a pace that keeps up with the rapid advancement of technology.
What Were the Critical Implementation Factors?
Enablers
The factors identified as contributing, or potentially contributing, to the success and implementation of these programs include promoting interfaculty collaboration [
, , ] and working within existing regulatory structures [ , , , ]. Not all institutions have clinical faculty who also have experience with data science; hence, there is a need in both practice and teaching for collaboration with data science faculty. Promoting interfaculty collaboration was described in the studies as the sharing of expertise among faculty members, thus creating a multidisciplinary team [ , , ]. Collaborative teaching by clinical and nonclinical instructors may increase the educational value when preparing future HCPs and also provide data science support to faculty [ , ]. Another facilitator to implementation is working within existing regulatory structures. Curriculum changes require the support of existing accreditation and regulatory bodies [ ]. A few papers discussed the need for the integration of mandatory AI coursework and assessments with the current curricula [ , ]. Hence, this could address varying AI literacy levels; enhancing knowledge of AI will increase the likelihood that it will be used in practice settings [ ].
Barriers
Overall, 2 major barriers were identified that could potentially impede an organization’s implementation efforts: (1) varying levels of AI literacy among faculty in designing curricula [
, ] and (2) lack of infrastructure to integrate AI into the current curriculum [ , , , ]. Varying levels of AI literacy among faculty and curriculum leaders were discussed as a major barrier that encumbers the implementation of AI programs. Of the 41 papers, 2 (5%) discussed how faculty have limited knowledge of AI fundamentals (eg, big data or data science) and software, as well as limited time to teach [ , ]. There is a lack of technical expertise to design AI-based curricula [ , ]. Moreover, a few studies voiced concerns about the lack of infrastructure to integrate AI into the curriculum. Some studies highlighted that the existing curricula are comprehensive and complex and additional content on AI will increase the course load [ , , ]. Academic institutions are faced with several encumbrances such as faculty retirement, staff not being well-versed in AI, and inadequate financial resources [ ]. Finally, integrating the AI content into existing curricula can be an impediment for many organizations [ ].
What Measures and Outcomes Were Used to Assess the Effectiveness of Education Programs?
Of the 41 papers, 5 (12%) presented the results of their training evaluation [
, , , , ]. As the educational approaches varied across studies, each approach will be briefly discussed ( ), followed by the measures and outcomes associated with each educational initiative. Categorized according to the Kirkpatrick-Barr Framework, the outcomes were either level 1 (ie, learner reaction and satisfaction with the education) [ , , ], level 2a (ie, change in attitude) [ , , ], or level 2b (ie, change in knowledge or skill) [ , ]. There were no outcomes that could be categorized as level 3 or level 4; thus, the program evaluations did not comment on the change in behavior or affect at the organizational level or on patient outcomes.
Programs and authors | Measure | Actual outcomes
Educational summit
Barbour et al, 2019 [ ] | |
Workshops
Kang et al, 2017 [ ] | |
Kinnear et al, 2019 [ ] | |
Biomedical informatics course within medical education
Sanchez-Mendiola et al, 2013 [ ] | |
Sybenga et al, 2016 [ ] | |
aAI: artificial intelligence.
bCER: comparative effectiveness research.
Discussion
Current State of AI Education Programs
This review identified pivotal knowledge gaps in our understanding of effective AI education programs for HCPs. The gaps identified through this review illustrated the limited AI education and training opportunities available for HCPs and thus emphasized the necessity of curating further AI education programs targeted to HCPs. The existing programs tend to focus only on the development and implementation of AI; yet, it is essential to also prepare HCPs to not only work with AI but also to advance AI for health and clinical decision-making. AI education programs should be designed in a way that enables HCPs to not only safely adopt these technologies, but also to adapt and shift their scope of practice to stay relevant. A significant and meaningful change to AI curricula in health care will only occur by increasing AI literacy among HCPs and by providing them with the ability to leverage relevant digital and data-driven decision-making tools. Although the studies demonstrate that efforts are being made to evaluate the outcomes of AI-related education initiatives, there is a lack of consistency in the measures for a comprehensive assessment of these outcomes. Most of the papers used self-constructed and nonvalidated instruments and delineated their findings in qualitative terms. Given the variety of instruments that have been employed in the studies, the absence of a standard, comprehensive tool impedes the integration and synthesis of findings across the studies. The guiding principles provided in this review will also hopefully inform future development and design of these programs.
Critical Implementation Factors
A lack of infrastructure to integrate AI content into current curricula could hinder the development of these types of programs; some of the programs described embedded their content within existing professional certifying bodies’ infrastructure to facilitate content development. The Royal College of Physicians and Surgeons of Canada, in particular, further emphasizes the need for these regulatory strategies, which are currently in process but not yet in practice [
, ]. The promotion of multidisciplinary collaboration was indicated as an enabler of content delivery; yet, varying levels of AI literacy among faculty could impede successful delivery of AI content [ , ]. Curricular adaptations and building an infrastructure for AI technologies could be helpful to HCPs wanting to adopt AI to improve patient care; this includes improvements in the types of health care data available for AI education [ ]. Of note, much of the health data generated are often inaccessible to researchers and limited by regulatory or infrastructure-level barriers, including institutional ethics approvals and data-sharing agreements [ ]. The use of deidentified data, security, and privacy controls could potentially widen the scope of access; broader collaboration with multidisciplinary experts could also help to establish secure data networks to improve use and access of health care data [ ]. Lower levels of AI literacy could be addressed by standardizing competency statements and engaging and training faculty in e-learning, for instance [ - ]. The World Health Organization’s Global Strategy on Digital Health further suggests that the barriers to AI adoption need to be addressed at the systems level and all aspects of implementation should be considered.
Our recommendations are presented as guiding principles that can inform the development of future AI curricula or the incorporation of AI education into existing curricula.
Guiding Principles
Principle 1: Need for Regulatory Strategies
Many studies discussed that working within the existing regulatory structure can hinder the implementation of AI education initiatives. Faculty can be inhibitors to changing curricula that were initially developed to prepare students for their national board examinations [
, ]. In addition, teaching approaches may be too outdated to incorporate new and emerging technologies [ ] into the changing digital and AI landscape. New regulatory strategies will be required, and organizations will have to prioritize developing a workforce that not only has the knowledge and skills to provide care with these tools, but also the competencies to rapidly learn and adapt. The studies also highlighted that accrediting bodies can be a roadblock to change [ - ]. Wartman and Combs [ ] stated that to prepare future care providers for AI-enabled care, there is a need for accreditors to move beyond traditional models (based on fact memorization and clinical clerkships) and be willing to innovate and consider new approaches to lifelong learning.
Principle 2: Multidisciplinary Approach to Design and Delivery
The rapidly evolving nature of the field and the dynamic regulatory, legal, and economic landscape may hinder the implementation of an AI curriculum and thus affect the deployment of AI tools in clinical practice. An initial AI curriculum must be developed iteratively because many of these areas still entail considerable research and advancement [
], ensuring that new knowledge gains and policy changes are reflected within the curriculum. This finding was reinforced in a paper by Wiljer and Hakim [ ]. The authors reported that AI applications have not yet developed to a level of complexity and clinical value because many of these applications are currently in the research and development stages.
Wiens et al [
] stated that successful ML deployment entails assembling experts and stakeholders from various disciplines, including knowledge experts, decision-makers, and users. The approach to curriculum redesign will need to focus on multiple disciplines and levels of training; curricula should be specialized to the needs of various individuals such as health care researchers, clinicians, and quality improvement teams [ ]. Therefore, the development of an AI-based curriculum should involve a multidisciplinary team comprising health system leaders, frontline providers, data scientists, patients, and education experts to ensure accuracy and clinical relevance of the curriculum [ , ]. It is imperative for all stakeholders and experts in the field to work collaboratively to understand and address the potential biases, thus reducing the existing social inequalities and ultimately leading to optimal care for all patients [ ].
Principle 3: Competence-Based Curriculum Design
To influence the development of their future practice, it is essential for HCPs to have a foundational level of AI competencies and skills [
]. Education should be designed in a manner that teaches HCPs to work with, and understand, the AI they use in their clinical practice. Furthermore, a level of baseline competencies in AI should allow trainees to make significant contributions to health policy decisions related to their scope of practice [ ]. AI will likely contribute significantly to the medical practice of the future; therefore, fundamentals and applications of AI tools and terminologies should be integrated into medical school curricula. Specifically, training current and future physicians on how to use these tools to provide quality health care, while taking into account the limitations and ethical implications of such technologies, will be useful [ ]. In addition to medical learners and physicians, medical teachers need to be trained to deliver this innovative AI curriculum content; this is a shift that needs to occur without delay, given the steep learning curve ahead [ ]. Paranjape et al [ ] recommended a staged approach to educating future care providers about AI and its application in health care that spans from undergraduate to continuing medical education.
On the basis of the findings of this review, an ideal flow of AI concepts could be split across the 3 stages of medical education defined by Oxford Medicine: undergraduate medical education, postgraduate medical education, and continuing professional development (
) [ ]. Undergraduate medical education should be focused on HCPs becoming familiar with AI terminology, the fundamentals of ML and data science, capabilities of AI, and how to identify opportunities and applications in health where AI would be appropriate with a health equity lens. During postgraduate medical education, emphasis should be placed on how to engage in validation and prospective evaluation of models, as well as deployment. Ethical and legal considerations, including governance strategy development, should be explored in more depth. Finally, during continuing professional development, providers should be involved in facilitating ethical and societal discussions, teaching AI courses, and keeping abreast of new AI knowledge and skills as well as teaching methods.
Principle 4: Patient-Clinician Interaction
In the age of AI-enabled care, HCPs must consider the potential impact of the patient and clinician interaction as well as the strategies for improving the quality of care delivered in a technology-enabled environment [
, ]. Li et al [ ] stated that health professions education should teach and cultivate altruism and compassion, uniquely human skills that are integral to the emergence of AI applications. This will ensure that HCPs are not disrupted by novel tools. To equip themselves to use AI in practice, care providers should develop competencies that allow them to differentiate between credible and false information in their delivery of care [ ]. Similar to the situation in other industries, the challenge of adopting and implementing AI in health care will lead to winners and laggards [ ]. In the successful adoption of AI, HCPs should engage with their patients because these interactions will be important to complement the technical expertise of AI as AI transforms the health care milieu [ ].
Limitations
Our scoping review findings should be examined in the context of the following limitations. Because of the nature of the scoping review, the quality of each identified study was not assessed. Given the nature of the topic being investigated, we excluded studies that discussed AI as a tool for medical education or continuing professional development. Only studies in English were included. In addition, the educational approaches varied across the studies; thus, we were unable to conduct formal comparisons among the curricula to determine which were effective. However, reviewing the literature enabled us to identify the gaps in current education programs and provide insights and best practices to guide future education efforts. As this review was inclusive of all types of studies and focused on a breadth of literature, the depth in reporting of education program details was inconsistent and varied based on the scope of the study.
Conclusions
With the inevitable progression of health care digitization, health professions education should foster unique human abilities, which will complement these emerging technologies. This review provided an overview of the current state of AI in health professions education and future directions on preparing care providers for the era of AI in health care. Future education efforts should focus on the development of regulatory strategies, a multidisciplinary approach to curriculum redesign, a competency-based curriculum, and patient-clinician interaction.
Acknowledgments
Accelerating the appropriate adoption of artificial intelligence in health care through building new knowledge, skills, and capacities in the Canadian health care professions is funded by the Government of Canada’s Future Skills Centre.
Accélérer l'adoption appropriée de l'intelligence artificielle dans la santé en développant de nouvelles connaissances, compétences et capacités pour les professionnels de santé canadiennes est financé par le Centre des Compétences Futures du gouvernement du Canada.
Authors' Contributions
RC led the conceptualization, design, and execution of the review. RC, TJ, and SY collaborated on the numeric and thematic analyses, drafting, and finalization of the manuscript. MA developed the search strategy and conducted the search. In addition to RC, TJ, and SY, DAM, SH, SW, and TT contributed to the identification of papers and screening. DW and ED provided guidance on the conceptualization and design of the study. DW, ED, MS, and WT contributed to the development of ideas that were instrumental in surfacing and maturing many of the concepts contained in this study. They also served as content experts in validating the findings and revising all drafts of this manuscript for important intellectual content and clarity. All authors have revised drafts of this manuscript as well as read and approved the final manuscript.
Conflicts of Interest
None declared.
Database search strategies.
DOCX File, 43 KB
References
- Brinker S. Martec's Law: technology changes exponentially, organizations change logarithmically 2013. Chief Martec. URL: https://chiefmartec.com/2013/06/martecs-law-technology-changes-exponentially-organizations-change-logarithmically/ [accessed 2021-10-20]
- Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: systematic review. JMIR Med Educ 2020 Jun 30;6(1):e19285 [FREE Full text] [CrossRef] [Medline]
- Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019 Jan;25(1):44-56. [CrossRef] [Medline]
- Berner ES, McGowan JJ. Use of diagnostic decision support systems in medical education. Methods Inf Med 2010;49(4):412-417. [CrossRef] [Medline]
- SFR-IA Group, CERF, French Radiology Community. Artificial intelligence and medical imaging 2018: French Radiology Community white paper. Diagn Interv Imaging 2018 Nov;99(11):727-742 [FREE Full text] [CrossRef] [Medline]
- Mattessich S, Tassavor M, Swetter SM, Grant-Kels JM. How I learned to stop worrying and love machine learning. Clin Dermatol 2018;36(6):777-778. [CrossRef] [Medline]
- Meek RD, Lungren MP, Gichoya JW. Machine learning for the interventional radiologist. AJR Am J Roentgenol 2019 Oct;213(4):782-784. [CrossRef] [Medline]
- The impact of digital health technologies on the future of medical specialties in one infographic. The Medical Futurist. URL: https://medicalfuturist.com/towards-creativity-in-healthcare-the-impact-of-digital-technologies-on-medical-specialties-in-an-infographic/ [accessed 2021-10-20]
- How artificial intelligence is transforming the world. Brookings. 2018. URL: https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/ [accessed 2021-11-24]
- Fridsma DB. Health informatics: a required skill for 21st century clinicians. BMJ 2018 Jul 12;362:k3043. [CrossRef] [Medline]
- Collado-Mesa F, Alvarez E, Arheart K. The role of artificial intelligence in diagnostic radiology: a survey at a single radiology residency training program. J Am Coll Radiol 2018 Dec;15(12):1753-1757. [CrossRef] [Medline]
- Moore JH, Boland MR, Camara PG, Chervitz H, Gonzalez G, Himes BE, et al. Preparing next-generation scientists for biomedical big data: artificial intelligence approaches. Per Med 2019 May 01;16(3):247-257 [FREE Full text] [CrossRef] [Medline]
- Li D, Kulasegaram K, Hodges BD. Why we needn't fear the machines: opportunities for medicine in a machine learning world. Acad Med 2019 May;94(5):623-625. [CrossRef] [Medline]
- Han E, Yeo S, Kim M, Lee Y, Park K, Roh H. Medical education trends for future physicians in the era of advanced technology and artificial intelligence: an integrative review. BMC Med Educ 2019 Dec 11;19(1):460 [FREE Full text] [CrossRef] [Medline]
- Singh RP, Hom GL, Abramoff MD, Campbell JP, Chiang MF, AAO Task Force on Artificial Intelligence. Current challenges and barriers to real-world artificial intelligence adoption for the healthcare system, provider, and the patient. Transl Vis Sci Technol 2020 Aug 11;9(2):45 [FREE Full text] [CrossRef] [Medline]
- Varghese J. Artificial intelligence in medicine: chances and challenges for wide clinical adoption. Visc Med 2020 Dec;36(6):443-449 [FREE Full text] [CrossRef] [Medline]
- Holm EA. In defense of the black box. Science 2019 Apr 05;364(6435):26-27. [CrossRef] [Medline]
- He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med 2019 Jan;25(1):30-36 [FREE Full text] [CrossRef] [Medline]
- Cox M, Blouin AS, Cuff P, Paniagua M, Phillips S, Vlasses PH. The role of accreditation in achieving the quadruple aim. National Academy of Medicine. 2017. URL: https://nam.edu/the-role-of-accreditation-in-achieving-the-quadruple-aim/ [accessed 2021-10-20]
- Hoque M. Three domains of learning: cognitive, affective and psychomotor. Academic Research - What the research project is intended to achieve. 2017. URL: https://www.researchgate.net/publication/330811334_Three_Domains_of_Learning_Cognitive_Affective_and_Psychomotor [accessed 2021-10-20]
- Shen N, Yufe S, Saadatfard O, Sockalingam S, Wiljer D. Rebooting Kirkpatrick: integrating information system theory into the evaluation of web-based continuing professional development interventions for interprofessional education. J Contin Educ Health Prof 2017;37(2):137-146. [CrossRef] [Medline]
- Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005 Feb;8(1):19-32. [CrossRef]
- Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018 Oct 02;169(7):467-473 [FREE Full text] [CrossRef] [Medline]
- PRISMA for scoping reviews. PRISMA. URL: http://www.prisma-statement.org/Extensions/ScopingReviews [accessed 2021-10-20]
- Colquhoun HL, Levac D, O'Brien KK, Straus S, Tricco AC, Perrier L, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol 2014 Dec;67(12):1291-1294. [CrossRef] [Medline]
- Wood MJ, Tenenholtz NA, Geis JR, Michalski MH, Andriole KP. The need for a machine learning curriculum for radiologists. J Am Coll Radiol 2019 May;16(5):740-742. [CrossRef] [Medline]
- Wiljer D, Hakim Z. Developing an artificial intelligence-enabled health care practice: rewiring health care professions for better care. J Med Imaging Radiat Sci 2019 Dec;50(4 Suppl 2):S8-14. [CrossRef] [Medline]
- Wartman SA, Combs CD. Medical education must move from the information age to the age of artificial intelligence. Acad Med 2018 Aug;93(8):1107-1109. [CrossRef] [Medline]
- Wartman S, Combs C. Reimagining medical education in the age of AI. AMA J Ethics 2019 Feb 01;21(2):E146-E152 [FREE Full text] [CrossRef] [Medline]
- Wartman SA. The empirical challenge of 21st-century medical education. Acad Med 2019 Oct;94(10):1412-1415. [CrossRef] [Medline]
- Topaz M, Pruinelli L. Big data and nursing: implications for the future. Stud Health Technol Inform 2017;232:165-171. [Medline]
- Thompson RF, Valdes G, Fuller CD, Carpenter CM, Morin O, Aneja S, et al. Artificial intelligence in radiation oncology: a specialty-wide disruptive transformation? Radiother Oncol 2018 Dec;129(3):421-426. [CrossRef] [Medline]
- Tang A, Tam R, Cadrin-Chênevert A, Guest W, Chong J, Barfett J, Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group. Canadian association of radiologists white paper on artificial intelligence in radiology. Can Assoc Radiol J 2018 May;69(2):120-135 [FREE Full text] [CrossRef] [Medline]
- Tajmir SH, Alkasab TK. Toward augmented radiologists: changes in radiology education in the era of machine learning and artificial intelligence. Acad Radiol 2018 Jun;25(6):747-750. [CrossRef] [Medline]
- Sybenga A, Zreik RT, Mohammad A, Rao A. Big Data: bioinformatics education during residency demonstrates immediate value. Nature Publishing Group. URL: https://www.nature.com/articles/labinvest20168.pdf?proof=t [accessed 2021-11-24]
- Srivastava TK, Waghmare L. Implications of Artificial Intelligence (AI) on dynamics of medical education and care: a perspective. J Clin Diagnos Res 2020 Mar;14(3):1-2. [CrossRef]
- Sit C, Srinivasan R, Amlani A, Muthuswamy K, Azam A, Monzon L, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging 2020 Feb 05;11(1):14 [FREE Full text] [CrossRef] [Medline]
- Saqr M, Tedre M. Should we teach computational thinking and big data principles to medical students? Int J Health Sci (Qassim) 2019;13(4):1-2 [FREE Full text] [Medline]
- Sánchez-Mendiola M, Martínez-Franco AI, Lobato-Valverde M, Fernández-Saldívar F, Vives-Varela T, Martínez-González A. Evaluation of a Biomedical Informatics course for medical students: a pre-posttest study at UNAM Faculty of Medicine in Mexico. BMC Med Educ 2015 Apr 01;15:64 [FREE Full text] [CrossRef] [Medline]
- Park SH, Do K, Kim S, Park JH, Lim Y. What should medical students know about artificial intelligence in medicine? J Educ Eval Health Prof 2019;16:18 [FREE Full text] [CrossRef] [Medline]
- Paranjape K, Schinkel M, Nannan Panday R, Car J, Nanayakkara P. Introducing artificial intelligence training in medical education. JMIR Med Educ 2019 Dec 03;5(2):e16048 [FREE Full text] [CrossRef] [Medline]
- Nguyen GK, Shetty AS. Artificial intelligence and machine learning: opportunities for radiologists in training. J Am Coll Radiol 2018 Sep;15(9):1320-1321. [CrossRef] [Medline]
- McCoy LG, Nagaraj S, Morgado F, Harish V, Das S, Celi LA. What do medical students actually need to know about artificial intelligence? NPJ Digit Med 2020 Jun 19;3:86 [FREE Full text] [CrossRef] [Medline]
- Mathur P, Burns M. Artificial intelligence in critical care. Int Anesthesiol Clin 2019;57(2):89-102. [CrossRef] [Medline]
- Matheny ME, Whicher D, Thadaney Israni S. Artificial intelligence in health care: a report from the national academy of medicine. JAMA 2020 Feb 11;323(6):509-510. [CrossRef] [Medline]
- Masters K. Artificial intelligence in medical education. Med Teach 2019 Sep;41(9):976-980. [CrossRef] [Medline]
- Kolachalama VB, Garg PS. Machine learning and medical education. NPJ Digit Med 2018 Sep 27;1:54 [FREE Full text] [CrossRef] [Medline]
- Kobayashi Y, Ishibashi M, Kobayashi H. How will "democratization of artificial intelligence" change the future of radiologists? Jpn J Radiol 2019 Jan;37(1):9-14. [CrossRef] [Medline]
- Kinnear B, Hagedorn P, Kelleher M, Ohlinger C, Tolentino J. Integrating Bayesian reasoning into medical education using smartphone apps. Diagnosis (Berl) 2019 Jun 26;6(2):85-89. [CrossRef] [Medline]
- Kang SK, Lee CI, Pandharipande PV, Sanelli PC, Recht MP. Residents' introduction to comparative effectiveness research and big data analytics. J Am Coll Radiol 2017 Apr;14(4):534-536 [FREE Full text] [CrossRef] [Medline]
- Kang J, Thompson RF, Aneja S, Lehman C, Trister A, Zou J, et al. National cancer institute workshop on artificial intelligence in radiation oncology: training the next generation. Pract Radiat Oncol 2021;11(1):74-83 [FREE Full text] [CrossRef] [Medline]
- Jeffery A. ANI emerging leader project: identifying challenges and opportunities in nursing data science. Comput Inform Nurs 2019 Jan;37(1):1-3. [CrossRef] [Medline]
- Gorman D, Kashner TM. Medical graduates, truthful and useful analytics with big data, and the art of persuasion. Acad Med 2018 Aug;93(8):1113-1116. [CrossRef] [Medline]
- Foster M, Tasnim Z. Data science and graduate nursing education: a critical literature review. Clin Nurse Spec 2020;34(3):124-131. [CrossRef] [Medline]
- Forney MC, McBride AF. Artificial intelligence in radiology residency training. Semin Musculoskelet Radiol 2020 Feb;24(1):74-80. [CrossRef] [Medline]
- Evans J, Banerjee A. Global health and data science: future needs for tomorrow’s cardiologist. Br J Cardiol 2016 Aug:87-88. [CrossRef]
- Chan KS, Zary N. Applications and challenges of implementing artificial intelligence in medical education: integrative review. JMIR Med Educ 2019 Jun 15;5(1):e13930 [FREE Full text] [CrossRef] [Medline]
- Chamunyonga C, Edwards C, Caldwell P, Rutledge P, Burbery J. The impact of artificial intelligence and machine learning in radiation therapy: considerations for future curriculum enhancement. J Med Imaging Radiat Sci 2020 Jun;51(2):214-220. [CrossRef] [Medline]
- Brouillette M. AI added to the curriculum for doctors-to-be. Nat Med 2019 Dec;25(12):1808-1809. [CrossRef] [Medline]
- Briganti G, Le Moine O. Artificial intelligence in medicine: today and tomorrow. Front Med (Lausanne) 2020 Feb 5;7:27 [FREE Full text] [CrossRef] [Medline]
- Bhavnani SP, Muñoz D, Bagai A. Data science in healthcare: implications for early career investigators. Circ Cardiovasc Qual Outcomes 2016 Nov;9(6):683-687. [CrossRef] [Medline]
- Barbour AB, Frush JM, Gatta LA, McManigle WC, Keah NM, Bejarano-Pineda L, et al. Artificial intelligence in health care: insights from an educational forum. J Med Educ Curric Dev 2020 Jan 28;6:2382120519889348 [FREE Full text] [CrossRef] [Medline]
- Balthazar P, Tajmir SH, Ortiz DA, Herse CC, Shea LA, Seals KF, et al. The Artificial Intelligence Journal Club (#RADAIJC): a multi-institutional resident-driven web-based educational initiative. Acad Radiol 2020 Jan;27(1):136-139. [CrossRef] [Medline]
- Strosahl K. Training behavioral health and primary care providers for integrated care: a core competencies approach. In: Behavioral Integrative Care: Treatments That Work in the Primary Care Setting. New York: Brunner-Routledge; 2005.
- Schneeweiss S, Ahmed S, Burhan A, Campbell C. Competency-based CPD: implications for physicians, CPD providers and health care institutions. Canada: Royal College of Physicians and Surgeons of Canada. URL: https://journals.sagepub.com/doi/abs/10.1177/1039856219859279 [accessed 2021-10-20]
- Sargeant J, Bhanji F, Holmboe E, Kassen B, McFadyen R, Mazurek K. Assessment and feedback for continuing competence and enhanced expertise in practice. Canada: Royal College of Physicians and Surgeons of Canada. URL: http:/file:///C:/Users/user/Downloads/cb-cpd-white-paper-assessment-e.pdf [accessed 2021-10-20]
- Ghassemi M, Goldenberg A, Morris Q, Rudzicz F, Wang B, Zemel R. Accessible data, health AI and the human right to benefit from science and its applications. Health Law Canada 2019 Aug;40(1):38.
- O'Doherty D, Dromey M, Lougheed J, Hannigan A, Last J, McGrath D. Barriers and solutions to online learning in medical education - an integrative review. BMC Med Educ 2018 Jun 07;18(1):130 [FREE Full text] [CrossRef] [Medline]
- Park JY, Mills KA. Enhancing interdisciplinary learning with a learning management system. MERLOT J Online Learn Teach 2014 Jun;10(2):299-313.
- Global strategy on digital health 2020-2025. World Health Organization. 2021. URL: https://www.who.int/docs/default-source/documents/gs4dhdaa2a9f352b0445bafbc79ca799dce4d.pdf [accessed 2021-10-20]
- Wiens J, Saria S, Sendak M, Ghassemi M, Liu VX, Doshi-Velez F, et al. Do no harm: a roadmap for responsible machine learning for health care. Nat Med 2019 Sep;25(9):1337-1340. [CrossRef] [Medline]
- Westerman M, Teunissen P. Oxford Textbook of Medical Education. Oxford, UK: Oxford University Press; 2013.
Abbreviations
AI: artificial intelligence
HCP: health care professional
ML: machine learning
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
Edited by G Eysenbach; submitted 07.06.21; peer-reviewed by S Purkayastha, S You, Y Zidoun; comments to author 08.09.21; revised version received 04.10.21; accepted 04.10.21; published 13.12.21
Copyright©Rebecca Charow, Tharshini Jeyakumar, Sarah Younus, Elham Dolatabadi, Mohammad Salhia, Dalia Al-Mouaswas, Melanie Anderson, Sarmini Balakumar, Megan Clare, Azra Dhalla, Caitlin Gillan, Shabnam Haghzare, Ethan Jackson, Nadim Lalani, Jane Mattson, Wanda Peteanu, Tim Tripp, Jacqueline Waldorf, Spencer Williams, Walter Tavares, David Wiljer. Originally published in JMIR Medical Education (https://mededu.jmir.org), 13.12.2021.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.