Abstract
Background: Artificial intelligence (AI) is changing continuing professional development (CPD) in health care and its interactions with the broader health care system. However, current scholarship lacks an integrated theoretical model that explains how AI impacts CPD as a complex sociotechnical system. Existing frameworks usually focus on isolated phenomena, such as ethics, literacy, or learning theory, leaving unaddressed the dynamics of how those phenomena interact in the complex sociotechnical AI-enhanced CPD system, as well as the new roles that AI-empowered patients and society play.
Objective: The objective of this study is to propose a comprehensive, theory-driven framework that provides insight into how AI transforms CPD systems. The goal was to integrate established AI constructs with Complexity Theory (CT) and Actor-Network Theory (ANT) to develop a model that guides practice, research, and policy.
Methods: We conducted a multimethod theory construction. The process started with identifying AI-enhanced CPD as an established yet evolving phenomenon. Through a structured literature review, we identified the main building blocks of AI-enhanced CPD as well as the ontological base (CT and ANT). The model was developed through iterative human-led and AI-assisted abductive analysis. The final model was abductively validated on a case study of a national organization pioneering AI use, demonstrating that the theoretical model makes sense in practice. All conceptual decisions were reviewed collaboratively by the author group.
Results: The ALEERRT-CA framework comprises 6 pillars (AI literacy, explainability, ethics, readiness, reliability, and learning theories) and 2 theoretical lenses (CT and ANT). CT elucidates macro-level system behaviors in the AI-enhanced CPD system, including emergence, feedback loops, adaptation, and nesting within layered complex systems. ANT explains how localized interactions among human and nonhuman actors shape AI-enhanced CPD. Together, these lenses illustrate how AI redistributes agency, amplifies tensions, and generates emergent learning dynamics within CPD and the broader health care system.
Conclusions: This study presents a novel conceptual model of AI-enhanced CPD as a sociotechnical system. The integration of CT and ANT with AI constructs improves the explanatory power of the ALEERRT-CA framework. Educators, program leaders, and policymakers can use the framework as a structured toolset to evaluate AI readiness, design responsible AI-enhanced CPD practices, and plan future empirical research. The framework provides a theoretical lens for observing the rapidly evolving field of AI-enhanced CPD and health care practice.
doi:10.2196/69156
Introduction
AI as a Transformative Sociotechnical Force in Continuing Professional Development
Artificial intelligence (AI) applications are increasingly impacting all aspects of the health care system, from clinical practice (diagnosis, treatment, prevention) to research, communication, administration, and learning []. The super-connected, postdigital nature of our society [] enables AI tools to access vast amounts of data and allows their impacts to spread quickly globally, amplifying AI’s power. AI’s rapid evolution is associated with ethical, legal, social, and professional opportunities and risks for health and continuing professional development (CPD) professionals, patients, and the broader society [,]. Therefore, we need to increase our capacity to analyze and improve AI-enhanced CPD and the broader sociotechnical systems with which it is associated [].
AI tools are gaining transformative powers. They act as a mediating technology that changes how information flows, how decisions are made, and how CPD occurs []. They are not just new tools improving existing CPD routines; they modify the relationships between clinician-learners, educators, patients, technologies, organizational structures, and regulators.
From an actor-network perspective, AI acts as a nonhuman actor that reorganizes agency, influences interactions, and supports transformations of established professional practices []. For example, the introduction of AI into CPD environments contributes to nonlinear, emergent system behavior characteristic of complex sociotechnical adaptive systems, in which small changes in one part of the network can propagate unpredictably across broader educational and clinical contexts [].
The phenomenon of AI-empowered patients illustrates this transformation. It is reshaping the clinician-patient relationship, influencing decision-making processes, and redefining the learning needs of health care professionals, the health system, and the public [,]. These changes can significantly affect patient outcomes.
Gap, Aim, and Research Question
Despite the increasing importance of AI-enhanced CPD, we lack a framework for examining it as a sociotechnical system. Most published approaches focus on specific issues like ethics, literacy, or digital learning, while leaving out complex relationships between people and technology, as well as the feedback and system dynamics that influence CPD and its role in the broader health care ecosystem.
This study aims to address this by creating an accessible, theory-based framework to analyze how AI changes networks of people and technology, introduces new feedback loops, and transforms CPD systems. Using a multimethod approach based on Borsboom et al [], we bring together Complexity Theory (CT) and Actor-Network Theory (ANT) to study AI-enhanced CPD. Our primary research question is: How can we best explain AI-enhanced CPD as a sociotechnical system, and what new system-level dynamics and relationships appear when we look at it through both CT and ANT?
This paper makes 3 contributions. First, it conceptualizes AI-enhanced CPD as a complex sociotechnical system rather than a set of isolated tools. Second, it integrates CT and ANT into a single explanatory framework (ALEERRT-CA) that connects macro-level system dynamics with micro-level human–AI interactions. Third, it provides a reusable analytic tool for educators, organizations, and researchers to evaluate AI readiness, governance, and learning design in CPD contexts.
Literature Review
AI-enhanced CPD does not emerge in isolation. Our increasingly networked society and health care systems are characterized by increasing interdependence, speed, and systemic complexity []. They create a context inseparable from AI-enhanced CPD. In this context, professional learning is no longer bounded by formal educational activities or institutional settings []. Instead, AI-enhanced CPD is interwoven with clinical workflows, digital infrastructures, regulatory environments, patient participation, and continuously evolving technologies [].
This qualitatively changes how CPD functions. Traditional linear, tool-centered, or competency-based models are increasingly insufficient to explain how AI reshapes learning, decision-making, and professional practice across interconnected health care systems [,].
This literature review, therefore, approaches AI-enhanced CPD through the dual lenses of complex adaptive systems (CASs) [,] and ANT []. Rather than treating AI as an isolated educational innovation, we examine AI-enhanced CPD as a dynamic system embedded in a networked society—one in which relationships between human and nonhuman actors, feedback loops, and emergent behaviors are central. This perspective indicates the need for an integrative framework that explains AI’s systemic influence on CPD.
Changing Complex Systems
CPD acts as an open complex adaptive system (CAS) [,]. It meets established CAS criteria not because of its educational content alone, but because (1) it is embedded within a complex health care system and (2) it tackles complex individual-, team-, and organization-wide learning processes. CPD is structurally connected to clinical workflows, organizational incentives, regulatory regimes, and patient outcomes. It is focused on addressing the contextual learning needs of individuals, teams, and organizations and is shaped by emerging technologies and clinical practices. AI adds additional layers of complexity []. It provides new variables to the CPD system and connects it more dynamically with the broader health care system.
Networked Society: A New Context
Initiatives using AI in education have a long history []. From teaching machines in the 1920s to expert systems in the 1970s to 1990s, AI tools existed as (semi)isolated centers, relying almost entirely on rules designed by human experts. Now, our society’s super-connected, big-data nature creates a considerably different context. From (semi)isolated centers with limited input of data, AI has become omnipresent and empowered by the global internet of knowledge [].
Whether we talk about personalized search findings, text editor suggestions, AI-generated meeting minutes, health monitoring apps, 1357 AI-enhanced medical devices recognized by the US Food and Drug Administration [], learning analytics, or health care quality improvement [], AI tools are here, tightly embedded in our daily practices.
There Is Nothing So Practical as a Good Theory
Despite the close relationship between AI, individuals, and society, research on AI in education and CPD has been mostly technocentric and atheoretical [,]. Technocentric research may miss the complex interactions between AI, CPD, and broader health care contexts [].
As we aim to understand how AI impacts CPD activities and the broader sociotechnical health care system, our focus must shift from individual phenomena of AI toward the interaction between AI and various phenomena and activities in the complex context of health care CPD []. Theoretical frameworks help us make that shift, allowing us to understand better and manage interactions between various elements of the complex system [].
Kurt Lewin, one of the pioneers of organizational and social psychology, famously noted, “There is nothing so practical as a good theory” and “the best way to understand something is to try to change it” [,]. Guided by those maxims, we propose a theoretical framework to better comprehend and guide the integration of AI and CPD. This is a distinct change within the CPD domain; it creates an opportunity to better understand and improve CPD within the broader sociotechnical context [,].
Why an Integrative Framework Is Needed
A review of AI adoption in health care by Mouloudj et al [] revealed a fragmented landscape of success factors and barriers. Numerous studies describe the importance of AI literacy, explainability, trust, ethical safeguards, organizational readiness, workflow compatibility, social influence, and system reliability. However, these determinants are presented as isolated pillars rather than tightly interconnected components of an AI-enhanced sociotechnical system. In practice, these elements can interact very dynamically. For example, when AI enters a clinical CPD environment, it affects learning design, decision-making processes, roles, relationships, and system behavior.
Since CPD is a CAS inseparable from the complexity of the broader health care system, we need a lens that lets us observe the system and relationships in the system. Focusing on isolated fragments of the system will not provide the needed insight into the system and how AI reshapes it [].
This creates a gap. We know what forces influence the adoption of AI and that, properly used, AI can have a transformative positive impact []. However, we lack a coherent framework describing how these forces interact across individual, team, organizational, and system levels. AI is evolving from a tool that supports existing practice into a set of tools that transform our sociotechnical system: it reorganizes workflows, amplifies feedback loops, and alters the learning system itself. A theoretical lens capable of tracking that transformation is therefore increasingly important.
The ALEERRT-CA framework responds to this need. It brings together 6 foundational AI pillars and situates them within CT and ANT, providing a practical and theoretically grounded way to analyze how AI operates within CPD as a sociotechnical system. By articulating the “anatomy” of AI-enhanced CPD and offering tools to examine both macro-level patterns and micro-level interactions, ALEERRT-CA provides the conceptual infrastructure needed for responsible innovation, rigorous research, and evidence-informed CPD strategy.
Methods
Methodology: Theory Construction
This study adopts a theory construction methodology rather than an empirical evaluation design, with the goal of producing an explanatory framework suitable for subsequent empirical testing.
We employed the theory construction methodology outlined by Borsboom et al []. This process included the following 5 steps:
- Identification of Phenomenon: We identified AI-enhanced CPD as a relevant, explainable, and reproducible phenomenon that is rapidly evolving but also stable in terms of its broad and continued impact on CPD [,].
- Drafting Core Principles and Models: VH drafted core principles to explain the phenomenon and created multiple explanatory models using abductive reasoning (explanatory inference). ChatGPT assisted with initial brainstorming and concept review. After concepts were refined, the draft was shared with the author group (2023).
- Model Development: After multiple additional iterations and contributions from all authors, the final model was developed.
- Assessment of Explanatory Adequacy: As the fourth step, we assessed the adequacy of the framework and its ability to explain how AI-enhanced CPD can enhance learning.
- Evaluation of Theoretical Framework: We concluded the process by evaluating the value of the constructed theoretical framework through 2 simulated scenarios and a real-world case study.
Complexity-Ready Lenses
Overview
AI-enhanced CPD occurs in the open, adaptive, complex sociotechnical health care system, where many elements are on the edge of chaos [,]. Working with complex, constantly evolving phenomena requires tolerance for ambiguity, a shift of focus from individual phenomena to interactions among multiple phenomena within the system, and the development of unique theoretical tools [,]. CT and ANT, 2 complementary theoretical lenses, were used to examine the sociotechnical health care system and the role of AI within it.
Selection of the 6 Foundational Pillars
The 6 AI pillars (Literacy, Explainability, Ethics, Readiness, Reliability, and Learning Theories) were selected through an iterative process of pillar identification (). Selection is based on 3 criteria: necessity, nonredundancy, and system relevance. The necessity criterion stipulates that each pillar addresses an important mechanism through which AI influences CPD and the broader health care system. Nonredundancy ensures that no pillar duplicates another pillar’s explanatory function. System relevance requires that each pillar can meaningfully participate in macro-level system dynamics (CT) and micro-level actor interactions (ANT).
- Literature frequency and significance: Pillars repeatedly cited across AI, health care, CPD, and education fields
- Distinct explanatory role: Each pillar explains a unique facet of AI-CPD interaction
- Alignment with CT and ANT: Must connect to system-level dynamics and actor-level mechanisms
- Parsimony: Remove overlapping or derivative constructs; retain essential ones
Final Pillars: Literacy, Explainability, Ethics, Readiness, Reliability, and Learning Theories
Constructs frequently mentioned in literature but treated as subsets of broader domains (eg, trust, governance, data quality) were incorporated under more comprehensive theoretical pillars. The final 6 pillars represent the minimal set needed to explain AI’s influence on CPD within a sociotechnical system while maintaining parsimony.
In ALEERRT-CA, learning theories serve a dual role. Epistemologically, they explain how learning occurs across different levels of the system. Functionally, they serve as mechanisms through which AI influences CPD design and outcomes. For this reason, learning theories are treated as both a foundational pillar and a mediating layer within the framework.
AI-Enhanced Authoring and Analytic Support
All parts of this article—except the practical examples of framework use—were created by human authors with AI support []. For example, ChatGPT was used for brainstorming and scenario setting. Google Scholar was used organically to search for literature supporting or expanding the topics discussed. AI-enhanced typing assistants (Grammarly, Microsoft, and Google) were used to improve the text’s clarity, syntax, and concision ().
All selected tools come with significant benefits and limitations—most noticeably ChatGPT. For example, ChatGPT output can be biased and lack deep domain-specific insight. ChatGPT cannot make ethical decisions or provide a transparent “thinking process.” Additionally, ChatGPT can hallucinate [,]. To address these limitations, we used iterative, collaborative human evaluation and cross-verification with existing literature.
Abductive reasoning was used for framework validation: we tested the framework against real-world case studies. Given the complex, emerging nature of the investigated phenomena, purely inductive or deductive reasoning is poorly suited to this task, making abductive reasoning the most appropriate approach.

Ethical Considerations
This manuscript is a conceptual paper. It does not involve human participants, human data, or identifiable personal information. Therefore, per applicable ethical guidelines, including COPE recommendations, ethics committee or institutional review board approval was not required.
Results
An Explanatory Framework for AI-Enhanced CPD
This study developed the ALEERRT-CA framework. It is a theory-driven model that provides insight into ways AI can transform CPD as a sociotechnical system. Our multimethod analysis identified 6 central AI pillars and demonstrated how their interactions become clearer when interpreted through CT and ANT. The findings indicated that, in addition to new technological capabilities, AI serves as a transformative agent. It can change relationships, mediating artifacts, and system-level dynamics in CPD and the broader health care system. This study contributes to the conceptual clarity of how AI transforms CPD. It delivers an integrative explanatory model to an under-theorized domain, providing a foundation for future empirical, design-oriented, and policy-focused work in AI-enhanced CPD.
AI and Learning Theories
Learning theories are an essential building block of AI-enhanced learning environments. For example, Cognitive Load Theory (CLT) and Connectivism can provide valuable insight into using AI to enhance learning activities in different parts of the learning system.
CLT focuses on managing learners’ cognitive load to enhance learning efficiency []. The goal is to decrease or eliminate extraneous load (eg, unnecessary examples), adjust the intrinsic load to the learner’s skill level (eg, context appropriate to the skill level), and ensure that the remaining working memory capacity is focused on germane load (ie, cognitive learning processes). Examples of AI-enhanced and CLT-guided interventions are adaptive learning modules, in which AI adjusts difficulty based on learner performance [,], and multimedia learning, in which AI optimizes multimedia delivery to manage cognitive load [].
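The adaptive-difficulty pattern described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical names and thresholds, not a description of any specific product: difficulty is raised when recent performance suggests spare working memory capacity and lowered when the learner appears overloaded.

```python
# A minimal sketch (hypothetical names and thresholds) of a CLT-guided
# adaptive-difficulty loop: raise intrinsic load when the learner has spare
# working memory capacity, lower it when the learner appears overloaded.

def next_difficulty(current: float, recent_scores: list[float],
                    target: float = 0.8, step: float = 0.1) -> float:
    """Return the difficulty (0-1) of the next learning item."""
    if not recent_scores:
        return current  # no evidence yet: keep the current difficulty
    mean_score = sum(recent_scores) / len(recent_scores)
    if mean_score > target:
        current += step  # performing above target: increase intrinsic load
    elif mean_score < target:
        current -= step  # performing below target: decrease intrinsic load
    return min(1.0, max(0.0, current))  # clamp to the valid range

# A learner scoring well on recent items is moved to harder material.
print(next_difficulty(0.5, [0.9, 1.0, 0.85]))
# A struggling learner is moved to easier material.
print(next_difficulty(0.5, [0.4, 0.5]))
```

In a real adaptive module, the score window, target, and step size would themselves be tuned, and the difficulty estimate would come from a learner model rather than a simple running mean.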
Connectivism emphasizes the role of networks and connections in the learning process [,]. AI and connectivism may facilitate expanding and managing learning networks in the health care sector []. For example, AI can aggregate and curate up-to-date information for health care professionals [,] and suggest learning paths based on individual goals and interests.
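A connectivist curation step of this kind can be illustrated with a minimal, hypothetical sketch: resources are ranked by how many of a learner's stated goal topics they cover. Real platforms would use far richer signals; the resource names and topic tags here are invented for illustration.

```python
# Illustrative sketch (hypothetical data): rank learning resources by
# overlap with a clinician's stated goals, the kind of network curation a
# connectivist, AI-enhanced CPD platform might perform.

def suggest_path(goals: set[str], resources: dict[str, set[str]],
                 top_n: int = 2) -> list[str]:
    """Order resources by how many of the learner's goal topics they cover."""
    scored = sorted(resources.items(),
                    key=lambda item: len(goals & item[1]),
                    reverse=True)
    # Keep only resources that cover at least one goal topic.
    return [name for name, topics in scored[:top_n] if goals & topics]

resources = {
    "AI triage webinar": {"ai", "triage", "workflow"},
    "Sepsis update module": {"sepsis", "guidelines"},
    "ML for imaging course": {"ai", "imaging"},
}
print(suggest_path({"ai", "imaging"}, resources))
```

A production system would replace keyword overlap with, for example, embeddings of learner profiles and resource content, but the connectivist logic (matching learners to nodes in a knowledge network) is the same.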
The examples above illustrate the interaction between learning theories and AI. Learning theories explain how learning happens, and AI can help enhance the learning processes described by learning theories. However, learning theories do not describe the broader framework of how AI, learning practices, and learning theories interact in the broader health care context.
Framework for Integration of AI and CPD
The complex interaction between AI, learning theories, and CPD practices introduces the need for a framework to comprehend and guide the integration of AI with CPD for health professionals. We propose a framework made of 6 foundational pillars (AI Literacy, Explainability, Ethics, Readiness, Reliability, and Learning Theories) and 2 complementary theoretical lenses (CT and ANT), ALEERRT-CA in short ().
AI literacy describes our capacity to understand AI’s basic concepts and principles, such as natural language processing, machine learning, computer vision, and deep data analytics []. It also involves the ability to use AI tools and applications in clinical practice, such as decision support systems, diagnostic tools, treatment recommendations, patient monitoring, and health education. AI literacy is essential for health and CPD professionals to leverage AI’s potential for improving quality and efficiency.

AI explainability (internal logic of how AI makes decisions) and the explainability of the impact AI makes on learning interventions (relationship between AI actions and broader sociotechnical CPD context) are enablers of AI literacy, readiness, and ethics—understanding how AI-enhanced systems work eases successful implementation of AI [-]. However, it seems that trusting that AI is reliable and performs well is more important than having a deep understanding of how AI algorithms work []. Very often, we tolerate AI-enhanced solutions as black boxes ().
Black boxing is a common process associated with the maturation, reliability, and wide acceptance of a technology. While the inputs and outputs of the system are known, the user does not understand, or is not even aware of, the processes inside the black box. Smartphones are a typical example of a black box []. As users, we are experienced in providing inputs and using smartphone outputs. Yet, the average user is minimally aware of the processes inside the phone and of how its networked, often AI-enhanced apps interact with external actors. We focus on the service the device delivers to us, not on how it works. We trust that it works well.
When failures occur, or when we attempt to improve the system, we need tools that open the black boxes of AI-enhanced sociotechnical systems and make their processes visible and, ideally, understandable to humans []. This is an opportunity to examine human and nonhuman actors, the network of relationships between them, and how their interactions deliver desired or, in some cases, erroneous outcomes.
AI ethics refers to AI’s ethical, legal, social, and professional implications for health care and CPD practice. It involves protecting patient privacy, obtaining informed consent, ensuring accountability, managing bias, ensuring fairness, protecting intellectual property, promoting transparency, and building trust []. AI ethics principles enable health professionals to use AI safely and responsibly.
AI readiness refers to the willingness and preparedness to adopt and implement AI technologies in learning and clinical practice []. It involves having a positive attitude and mindset towards AI, being open to learning from and with AI systems, and being able to cope with the changes and challenges that AI brings. AI readiness empowers health professionals to embrace AI as a partner in health care delivery and CPD. This readiness involves properly implementing AI-enhanced, research- and data-driven care by developing mental models, skill sets, and support systems for health care and CPD providers, their teams, and their organizations. Enterprise-wide AI readiness models can be considered to help organize and prioritize the organizational resources for the successful implementation of AI technologies [-].
AI reliability is the ability of an AI system to perform consistently and accurately under varying conditions []. It is crucial in high-stakes clinical and CPD applications, yet it comes with considerable plasticity []. AI systems, or humans using AI tools, must be more reliable than humans alone. For example, apps that flag at-risk students or problematic content posted by students may not be as reliable as human reviewers, yet they reduce the time-consuming task of reading posts and communicating findings, enabling humans to perform much better and faster than they would alone.
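The human-plus-AI reliability pattern described here can be sketched as a simple triage step, with hypothetical keyword rules standing in for a real model: the AI pre-screen narrows the workload so that human reviewers read only the flagged posts.

```python
# Illustrative sketch (hypothetical keyword rules): an AI pre-screen that
# flags posts for human review. The filter need not outperform a human
# reviewer on its own; it narrows the workload so the human-plus-AI pair is
# faster, and at least as reliable, as either alone.

RISK_TERMS = {"overwhelmed", "hopeless", "cheating", "unsafe"}

def triage(posts: list[str]) -> tuple[list[str], list[str]]:
    """Split posts into (flagged for human review, auto-cleared)."""
    flagged, cleared = [], []
    for post in posts:
        words = set(post.lower().split())
        (flagged if words & RISK_TERMS else cleared).append(post)
    return flagged, cleared

flagged, cleared = triage([
    "Feeling overwhelmed by the new module",
    "Great session on sepsis today",
])
print(flagged)  # only the at-risk post reaches a human reviewer
```

A deployed system would use a trained classifier with calibrated thresholds, but the division of labor is the same: the machine filters, the human decides.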

Learning theories describe how learning happens and explain where and how AI can improve learning interventions.
CT and ANT are proposed as complementary theoretical lenses, where the CT lens is better suited to deliver a holistic view, while ANT can easily zoom in on a specific part of the system or a specific AI-enhanced app and deliver more actionable insight.
CT explains that our world is made of complex, constantly evolving systems. Those systems are open, and they adapt to changes in the context. Complex systems have emerging properties. Therefore, we cannot understand them simply by analyzing their parts. To analyze AI-enhanced systems, we should not focus solely on AI, but on the system and how the addition of AI is transforming the system.
Complex systems are connected, open, and nested. For example, an individual clinician is a system, yet she is also part of the operating room team, a suprasystem. Above that, we have higher-order suprasystems such as hospital, national, and global health care systems [,]. AI can play a role on all those levels []. Furthermore, an AI-related improvement in one system, for example, the operating room, will stimulate changes in external systems such as CPD, administration, and patient communication.
CT explains the need for multiple learning theories and the interaction between them. Learning theories observe the same phenomenon: learning. However, they observe learning in different parts of nested hierarchies of CASs, that is, in different contexts [,]. CLT, for example, focuses on the individual and on learning that happens primarily internally, in a learner’s brain. Connectivism, on the other hand, treats learning as a global, social, technology- and artifact-enhanced endeavor. AI can have an impact on all levels of our reality (from the individual to global society). Therefore, it is fair to believe that the interaction between AI and established learning theories will be fruitful, allowing us to optimize CPD interventions at every level (individual, team, organization, population, state, and global society) [].
ANT explains that our reality is shaped through evolving networks of relationships between human and nonhuman actors []. ANT posits that nonhuman actors, such as texts, digital devices (smartphones or electronic health records), software programs, ideas, organizations, or AI tools, have agency to shape our reality, just as humans do.
ANT can serve as a magnifier for analyzing micro-level networks of nonhuman and human actors in a specific part of a complex sociotechnical system and at a particular time []. It is a good lens to analyze the use of a specific AI-enhanced app or department using AI. CT is better suited to the holistic view of the system [,], such as how AI is changing CPD. While the picture created with CT is more inclusive, it is blurry. The macro system’s complex, evolving nature does not allow us to capture all system elements. As a combination, ANT and CT enable us to observe the big picture and, when needed, zoom in and observe a specific part of the AI-enhanced learning health care system.
The framework provides tools that can help us open the black box and observe the “main anatomical structures of AI-enhanced CPD” through both macro-level (CT) and micro-level (ANT) lenses ().
The ALEERRT-CA framework is organized into 3 conceptual layers. CT and ANT serve as the ontological foundation by defining CPD as a nested, adaptive sociotechnical system made of interacting human and nonhuman actors. Learning theories serve as the epistemological layer, offering assumptions about how knowledge is constructed and how learning can be enhanced. The 6 AI pillars (Literacy, Explainability, Ethics, Readiness, Reliability, and Learning Theories) function as interactional mechanisms that shape how AI influences relationships, behaviors, and emergent patterns within the CPD system. Learning theories simultaneously serve as a mechanism that enables AI impact and as an epistemological lens.
Discussion
Reframing AI-Enhanced CPD as a Sociotechnical System
This study presents a novel theoretical lens for analyzing the sociotechnical dynamics of AI-enhanced CPD. It provides insight into how AI reshapes relationships, redistributes agency, and generates emergent learning dynamics across CPD systems. The framework combines 6 AI pillars (AI literacy, explainability, ethics, readiness, reliability, and learning theories) with CT and ANT. The main finding is that complex interactions among system elements are the primary forces shaping AI-enhanced CPD, while isolated AI tools or adoption factors are of secondary importance. Due to its complex nature, AI-enhanced CPD often acts as a black box. The framework strengthens our capacity to analyze the complex sociotechnical nature of AI-enhanced CPD.
Adequacy of the Framework
The proposed framework provides a checklist of 6 building blocks of AI implementation and 2 theoretical lenses to observe system changes associated with AI. It is a tool set that can help us plan AI implementation and, when needed, open the black boxes of AI-enhanced CPD. Therefore, it addresses the criteria of practical application and simplicity (parsimony) []. CT and ANT, as macro-level and micro-level theories, allow us to observe the system and large-scale structures (CT) and, when needed, focus on the interaction within a small network of human and nonhuman actors (ANT). Matching the appropriate theoretical lens to the task at hand also aligns with the principle of parsimony.
The framework is rooted in 2 established theories (CT and ANT) and the literature on AI implementation, which enhances its external consistency and ensures it fits with the broader theoretical landscape of AI in CPD. It appears to provide a good balance of practicality, simplicity, and theoretical robustness, suggesting it can effectively explain how successful AI-enhanced improvement of CPD can occur.
Value of the Constructed Theoretical Framework
The ALEERRT-CA framework allows us to open the black boxes of AI-enhanced CPD. These black boxes exist on at least 2 levels. The first level is AI algorithms; that level is not addressed in this paper. On the level above, where this framework is focused, we have AI-enhanced sociotechnical systems and need to understand the hidden, complex interactions between AI, technical, and social elements that influence learning health system performance and outcomes. The framework describes the main parts, the anatomy, of AI-enhanced CPD and 2 theoretical lenses used to analyze them.
AI implementation-focused guides, such as Rashidi’s Your AI Survival Guide [], highlight many of the conditions necessary for successful AI adoption, including literacy, workflow redesign, trust, governance, readiness, and interprofessional collaboration. These themes map closely onto the ALEERRT-CA pillars. The proposed framework builds on that foundation and extends beyond operational guidance.
Whereas leadership manuals explain what organizational leaders must pay attention to, ALEERRT-CA explains how these elements interact within a complex adaptive learning ecosystem and why AI reshapes roles, routines, and system behaviors. By embedding AI constructs within CT and ANT, the framework provides a level of explanatory depth that practice-oriented texts do not have. The framework explains the dynamics through which AI mediates relationships, generates tensions, and produces emergent patterns in CPD. In this way, ALEERRT-CA transforms a collection of best practices into a coherent, theory-driven model capable of guiding both empirical research and strategic system design.
Interaction Between Pillars and Theoretical Lenses
The 6 pillars of ALEERRT-CA demonstrate distinct yet interdependent explanatory functions when analyzed through ANT and CT ().
For example, ANT shows that explainability is not a feature strictly associated with AI algorithms; reality is more complex. Explainability is negotiated across a network of users, artifacts, interfaces, and organizational structures, and is therefore relational, contextual, and dynamic. What counts as “a good explanation” varies with the actor making the assessment, the purpose, the location within the system, and the level of risk.
ANT informs AI ethics by showing how agency, accountability, and power are redistributed when nonhuman actors participate in decisions. System-wide ethical outcomes are created through interactions among designers, clinicians, learners, data, models, platforms, policies, and workflows. ANT enables CPD leaders to evaluate not only whether AI is ethical but also how ethical behavior is produced, constrained, or undermined by the network.
Yang et al [] provide a practical example. Explainability and perceived reliability during peer reviews are shaped by the social “advice-taking phenomenon.” Physicians who openly use AI tools during decision-making usually receive lower competence ratings, despite a shared belief that AI is helpful. Positioning AI as a verification tool only moderately reduces that negative effect. The probable reason is a significant reconfiguration of the actor-network: epistemic authority, agency, and accountability are redistributed and reshaped by AI. That change reduces trust and perceived reliability, despite improved performance.

As illustrated in , ANT enables micro-level analytic tracing of how AI reshapes agency, accountability, and epistemic authority through human–nonhuman associations. CT complements this by explaining how localized interactions among these actors influence emergent system trajectories across nested sociotechnical strata. Together, the dual-lens positioning of ALEERRT-CA reframes AI-enhanced CPD not as a linear implementation challenge but as a dynamic process of relational negotiation and systemic adaptation. This dual-theoretical integration provides explanatory depth and design utility, capturing both mechanism and emergence.
| ALEERRT-CA pillar | ANT contribution, a micro-level analytical mechanism | CT contribution, a macro-level system mechanism | Theoretical implication |
| AI literacy | Positions literacy as a prerequisite for relational/contextual agency. Actors with greater literacy have disproportionate influence over problem definition, meaning-making, and artifact interpretation. | Literacy accelerates or constrains system adaptation; uneven literacy becomes a systemic bottleneck spreading nonlinearly across teams and institutions. | AI literacy shapes epistemic power and mediates how knowledge circulates within sociotechnical networks. |
| Explainability | ANT frames explainability as an outcome of interaction between various human and nonhuman actors and ideas; what is “understandable” is rooted in actor identity, status/position in the networks, network structures, and interpretive frames. | Changes in the level of explainability can amplify or suppress feedback loops, affecting trust, adoption, and the rate of transformation. Explainability is needed at multiple levels: from the inner workings of AI algorithms to individual-focused AI-enhanced CPD, team- and organization-focused learning, and the integration between learning and clinical practice. | Explainability is relational, negotiated, and context-dependent rather than a static technical property. Explainability of processes (eg, AI algorithms) on one level influences explainability on other levels (eg, clinical practice). |
| Ethics | Shows how responsibility and agency are redistributed when nonhuman actors coproduce decisions and outcomes. | Ethical disruptions propagate via feedback loops, enabling amplification of harms or benefits at scale. Ethics is important at all levels: algorithm, individual, team, organization, and society. | Ethics becomes a governance outcome of network configuration, not solely adherence to level-specific principles. |
| Readiness | Readiness depends on network alignment. Misaligned goals, competencies, and affordances produce friction and resistance. | Initial conditions and small deviations shape emergent trajectories (“sensitivity to initial states”). Readiness is multilayered. At different levels of system aggregation (individual, team, organization, society), we can observe considerable differences in readiness. | Readiness is dynamic and coevolving, requiring continual recalibration across nested systems. |
| Reliability | Reliability emerges from situated interaction between human and nonhuman actors; failure modes are relational. | Failures or successes can trigger rapid changes in trust in AI-enhanced systems, which can impact behavior across the system(s). | Reliability is contingent and contextual, negotiated by the system (and between systems) rather than guaranteed by the artifact. |
| Learning theories | ANT reveals how AI acquires pedagogical agency (content generation, evaluation, orchestration) and shapes relationships in a network of actors. | CT positions AI-enhanced learning as emergent across individual, social, and organizational strata. | CPD must align AI-enhanced learning design with both cognitive and sociotechnical learning dynamics. AI can enhance individual, team, and system learning. |
ANT: Actor-Network Theory; CT: Complexity Theory; AI: artificial intelligence; CPD: continuing professional development.
To illustrate the value of the ALEERRT-CA framework, we offer an explanatory case.
Explanatory Case: The American Society of Anesthesiologists BeaconBot and the Emerging AI-Enhanced CPD Ecosystem
The American Society of Anesthesiologists (ASA) is coproducing an AI-enhanced learning ecosystem that tackles clinical practice, quality improvement, education, research dissemination, and workforce development. A central AI-enhanced component is ASA BeaconBot, a generative AI knowledge assistant available to ASA members []. BeaconBot synthesizes information from ASA’s public materials, member-only resources, Anesthesiology® journal, committee documents, and ASA Standards, Guidelines, and Statements. BeaconBot functions as a real-time mediator of organizational knowledge, supporting clinicians, educators, and staff in accessing evidence, policies, and practice recommendations more efficiently. It serves as a librarian, quickly providing answers to complicated questions with citations and links to sources that informed the answers.
In parallel, the Anesthesia Quality Institute is advancing a multistage integration strategy linking (1) Epic electronic health record data, (2) the National Anesthesia Clinical Outcomes Registry [], and (3) ASA’s association management system. Public descriptions of this work emphasize using high-volume clinical data to improve benchmarking and quality improvement across anesthesia practices.
A long-term, aspirational goal, currently conceptual rather than operational, is to explore whether abstracted, nonpatient-identified insights about clinicians’ practice patterns (eg, procedural focus, strengths, and learning needs) could be generated in the future. At present, legal, regulatory, and data privacy constraints limit such use.
These insights could eventually inform ASA’s learning management system and learning experience platform, enabling AI-supported personalized CPD recommendations aligned with actual clinical practice [].
ASA operates across multiple educational layers: individual clinicians and residents, perioperative teams, organizational purchasers of CPD, national public education and legislative advocacy, and international patient-safety leadership. Across these layers, AI is already entangled with everyday practice: faculty and members use AI tools for literature synthesis, communication, research workflows, and meeting documentation. Many committee processes and reports are generated or refined with AI-assisted writing tools, and several e-learning materials, including SCORM (Shareable Content Object Reference Model) e-learning modules, are produced with AI-supported content development [].
Internally, ASA is implementing an empowered product team model []. Traditional hierarchical, often siloed, feature-focused waterfall project management practices are being replaced with an agile continuous discovery model built around the active involvement of all stakeholders [].
The empowered product teams model aligns with both CT and ANT, as it operates as an adaptive, feedback-driven hub capable of responding to emergent interactions among human and nonhuman actors. In contrast, waterfall models assume stability, linearity, and fixed requirements—assumptions that are poorly aligned with the aim of innovation in AI-enhanced CPD systems.
Viewed through the ALEERRT-CA framework, this evolving ecosystem illustrates:
- Literacy: Heterogeneous AI literacy across clinicians, faculty, learners, and staff creates both capability gaps and opportunities for targeted support.
- Explainability: BeaconBot’s responses and any future recommendation logic require transparent reasoning pathways for clinicians, educators, and regulators. Beyond how recommendations are created, stakeholders also need to understand how those recommendations help clinicians improve their practice.
- Ethics: Registry-to-learning integration raises governance questions around data use, privacy, and boundaries between performance data and educational personalization.
- Readiness: Organizational readiness fluctuates across departments as product teams, educators, IT, and leadership adapt to new workflows and AI-mediated artifacts.
- Reliability: Perceived reliability depends on both technical accuracy and user trust, particularly when AI synthesizes standards or summarizes clinical knowledge.
- Learning theories: Learning shifts from content-centric modules toward system-wide, data-driven adaptive learning, where clinical practice informs CPD and CPD feeds back into practice.
From a CT perspective, ASA’s system shows nonlinear interactions: small changes (eg, improved data quality, AI-generated insights, or revised governance policies) can propagate across clinical, educational, and organizational layers. From an ANT lens, AI agents (BeaconBot, analytics engines, transcription models, generative AI) operate as nonhuman actors that mediate relationships among clinicians, educators, committees, systems, and CPD infrastructures. These interactions enable CPD to act as a learning health ecosystem, characterized by evolving feedback loops, distributed agency, and emergent patterns of adaptation.
While results are promising, strong tensions around data governance, ethics, explainability, change fatigue, and the need for AI literacy among clinicians, faculty, and staff impede progress.
This case illustrates how AI-enhanced CPD cannot be understood as a discrete educational intervention but must be analyzed as a system-level transformation involving data flows, governance, professional identity, and learning infrastructure—precisely the analytic space addressed by ALEERRT-CA. The framework is thus equally beneficial for organizations and for individual CPD professionals.
As a relatively simple but inclusive and complexity-ready model, ALEERRT-CA can serve as a toolset for integrating AI into CPD. It can support interdisciplinary collaboration across medical, engineering, social, ethical, and legal domains and help shape human-centric AI-enhanced CPD.
As with physical toolsets, it is not necessary to use all tools in the box for every situation. Some analytic tasks require only a subset of the framework’s components. The framework is also extensible. While CT and ANT are well-suited for explaining macro-level system dynamics and micro-level sociotechnical interactions in AI-enhanced CPD, other theoretically compatible lenses could be incorporated if they better match a specific analytic need or reflect the expertise of a given research team.
Limitations
The complex, rapidly evolving nature of AI in our society requires an agile, continuous-improvement mindset. We therefore propose this framework as a work in progress and as a position on the direction we, as the health care CPD community empowered with AI tools, may take. The framework does not claim completeness or finality but is intended as a theoretically grounded starting point for iterative refinement through empirical studies.
The iterative nature of the framework allows us to improve both our use of it and the framework itself. Similarly, repeated empirical validation will help confirm its value.
Conclusion
Integration of AI into health care CPD is reshaping a complex system, demanding a deeper understanding and strategic application of available resources.
This work introduces ALEERRT-CA, a theory-driven framework that provides a lens for observing dynamic interactions in AI-enhanced CPD. The framework explains how AI reorganizes relationships, redistributes agency, and generates emergent learning across health care and CPD systems.
ALEERRT-CA can serve as an analytic tool for practice. Educators and organizational leaders can use the tool to assess AI-enhanced learning at multiple levels, from individual learners to organizations and broader health care systems. The paired macro and micro lenses support both strategic “big picture planning” and focused analysis of local AI-enhanced CPD initiatives.
Lessons for Practice
As AI becomes increasingly embedded in health care education and practice, CPD professionals face practical decisions about how to design, implement, and govern AI-enhanced learning. The following lessons for practice distill key insights from this study:
- AI-enhanced tools have become part of our daily lives. The question is no longer whether we use AI, but how effectively we use it to enhance our CPD practices.
- AI affects complex sociotechnical systems. Therefore, AI-enhanced CPD interventions should be designed to foster beneficial interactions between AI tools, health care professionals, and the broader health care system.
- Theoretical tools, such as ALEERRT-CA, can increase our ability to open the black boxes of AI-enhanced CPD sociotechnical systems and understand and improve how AI is used in CPD, ultimately improving the impact of CPD.
Acknowledgments
We thank our colleagues and leaders at SACME, especially Todd Dorman, MD; Heather MacNeill, MD; David Wiljer, PhD; Simon Kitto, PhD; Olivier Petinaux, MS; and Marianna Shershneva, MD, PhD, for their helpful feedback, encouragement, and support.
AI tools, such as ChatGPT 3.5 (starting in 2023) to 5.1 (2025), Google Gemini, and AI-based writing assistants, supported idea generation, summarization, and text refinement. Human authors conducted all initial brainstorming, conceptual development, theoretical interpretation, and final editing. All AI outputs were reviewed and verified.
Data Availability
No datasets were generated or analyzed during the current study. This work is conceptual and theory-building in nature.
Authors' Contributions
Conceptualization: VH (lead)
Methodology: VH (lead), SV (supporting), GRD (supporting), HD (supporting), ROB (supporting), RW (supporting)
Formal analysis: VH (lead)
Validation: SV (equal), GRD (equal), HD (equal), ROB (equal), RW (equal)
Visualization: VH (lead), GRD (supporting)
Writing – original draft: VH (lead)
Writing – review & editing: VH (lead), SV (supporting), GRD (supporting), HD (supporting), ROB (supporting), RW (supporting)
All authors contributed to the study design, iterative framework refinement, critical manuscript review, and approval of the final version.
Conflicts of Interest
None declared.
References
- Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. Jun 2019;6(2):94-98. [CrossRef] [Medline]
- Jandrić P, Knox J, Besley T, Ryberg T, Suoranta J, Hayes S. Postdigital science and education. Educ Philos Theory. 2018;50(10):893-899. [CrossRef]
- Federspiel F, Mitchell R, Asokan A, Umana C, McCoy D. Threats by artificial intelligence to human health and human existence. BMJ Glob Health. May 2023;8(5):e010435. [CrossRef] [Medline]
- Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. Jan 2019;25(1):44-56. [CrossRef] [Medline]
- Salwei ME, Carayon P. A sociotechnical systems framework for the application of artificial intelligence in health care delivery. J Cogn Eng Decis Mak. Dec 2022;16(4):194-206. [CrossRef] [Medline]
- Ensign D, Nisly SA, Pardo CO. The future of generative AI in continuing professional development (CPD): crowdsourcing the Alliance community. J CME. Dec 9, 2024;13(1):2437288. [CrossRef] [Medline]
- Mastrianni A, Kmetz-Cutrone P, Chang K, Stein JY, Sarcevic A. Beyond decision making: considering collaboration and agency in the design of AI-based decision-support systems for fast-response medical teams. 2025. Presented at: 28th ACM Conference on Computer-Supported Cooperative Work and Social Computing (ACM CSCW 2025); Oct 18-22, 2025. [CrossRef]
- Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. Jun 20, 2018;16(1):95. [CrossRef] [Medline]
- Mesko B, deBronkart D, Dhunnoo P, Arvai N, Katonai G, Riggare S. The evolution of patient empowerment and its impact on health care’s future. J Med Internet Res. May 1, 2025;27:e60562. [CrossRef] [Medline]
- Bachina L, Kanagala A. Health revolution: AI-powered patient engagement. Int J Chem Biochem Sci. 2023;24(5):722-731. URL: https://www.iscientific.org/wp-content/uploads/2023/12/97-ijcbs-23-24-5-97.pdf [Accessed 2026-01-22]
- Borsboom D, van der Maas HLJ, Dalege J, Kievit RA, Haig BD. Theory construction methodology: a practical framework for building theories in psychology. Perspect Psychol Sci. Jul 2021;16(4):756-766. [CrossRef] [Medline]
- Chen Y, Lehmann CU, Malin B. Digital information ecosystems in modern care coordination and patient care pathways and the challenges and opportunities for AI solutions. J Med Internet Res. Dec 2, 2024;26:e60258. [CrossRef] [Medline]
- Doraiswamy PM, Blease C, Bodner K. Artificial intelligence and the future of psychiatry: insights from a global physician survey. Artif Intell Med. Jan 2020;102:101753. [CrossRef] [Medline]
- Ning Y, Ong JCL, Cheng H, et al. How can artificial intelligence transform the training of medical students and physicians? Lancet Digit Health. Oct 2025;7(10):100900. [CrossRef] [Medline]
- Wartman SA, Combs CD. Reimagining medical education in the age of AI. AMA J Ethics. Feb 1, 2019;21(2):E146-E152. [CrossRef] [Medline]
- Mosch L, Fürstenau D, Brandt J, et al. The medical profession transformed by artificial intelligence: qualitative study. Digit Health. 2022;8:20552076221143903. [CrossRef] [Medline]
- Lipsitz LA. Understanding health care as a complex system: the foundation for unintended consequences. JAMA. Jul 18, 2012;308(3):243-244. [CrossRef] [Medline]
- Ogden K, Kilpatrick S, Elmer S. Examining the nexus between medical education and complexity: a systematic review to inform practice and research. BMC Med Educ. Jul 5, 2023;23(1):494. [CrossRef] [Medline]
- Mitchell B. Engaging with Actor-Network Theory as a Methodology in Medical Education Research. Routledge; 2021. ISBN: 9781000362831
- Woodruff JN. Accounting for complexity in medical education: a model of adaptive behaviour in medicine. Med Educ. Sep 2019;53(9):861-873. [CrossRef] [Medline]
- Smith CS. Complex adaptive systems. In: Foundations of Interprofessional Health Education: An Ecological Theory. Springer; 2023:89-97. [CrossRef] ISBN: 9783031334146
- Alami H, Lehoux P, Auclair Y, et al. Artificial intelligence and health technology assessment: anticipating a new level of complexity. J Med Internet Res. Jul 7, 2020;22(7):e17707. [CrossRef] [Medline]
- Randhawa GK, Jackson M. The role of artificial intelligence in learning and professional development for healthcare professionals. Healthc Manage Forum. Jan 2020;33(1):19-24. [CrossRef] [Medline]
- Dazzi P. The internet of AI agents (IAIA): a new frontier in networked and distributed intelligence. Int J Netw Distrib Comput. Jun 2025;13:16. [CrossRef]
- Artificial intelligence and machine learning (AI/ML)-enabled medical devices. Software as a Medical Device (SaMD). US Food and Drug Administration; 2025. URL: https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd [Accessed 2026-01-22]
- Nilsen P, Svedberg P, Neher M, et al. A framework to guide implementation of AI in health care: protocol for a cocreation research project. JMIR Res Protoc. Nov 8, 2023;12:e50216. [CrossRef] [Medline]
- Zawacki-Richter O, Marín VI, Bond M, Gouverneur F. Systematic review of research on artificial intelligence applications in higher education – where are the educators? Int J Educ Technol High Educ. 2019;16:39. [CrossRef]
- Papa R, et al. Discourse analysis on learning theories and AI. 2020. Presented at: Intelligent Computing: Proceedings of the 2020 Computing Conference; Jul 16-17, 2020. [CrossRef]
- Xu W, Ouyang F. A systematic review of AI role in the educational system based on a proposed conceptual framework. Educ Inf Technol. 2022;27(3):4195-4223. [CrossRef]
- Bleakley A, Cleland J. Sticking with messy realities: how ‘thinking with complexity’ can inform health professions education research. In: Cleland J, Durning SJ, editors. Researching Medical Education. 2022:197-208. [CrossRef] ISBN: 9781119839415
- Turner JR, Baker RM. Complexity theory: an overview with potential applications for the social sciences. Systems. 2019;7(1):4. [CrossRef]
- Lewin K. Problems of research in social psychology (1943-44). In: Lewin GW, editor. Resolving Social Conflicts and Field Theory in Social Science. American Psychological Association; 1997:279-288. [CrossRef] ISBN: 9781557984159
- Greenwood DJ, Levin M. Introduction to Action Research: Social Research for Social Change. SAGE Publications; 2006. URL: https://methods.sagepub.com/book/introduction-to-action-research [Accessed 2026-01-22] [CrossRef] ISBN: 9781483389370
- Makridakis S. The forthcoming artificial intelligence (AI) revolution: its impact on society and firms. Futures. Jun 2017;90:46-60. [CrossRef]
- Zhang W, Cai M, Lee HJ, Evans R, Zhu C, Ming C. AI in medical education: global situation, effects and challenges. Educ Inf Technol. 2024;29(4):4611-4633. [CrossRef]
- Mouloudj K, et al. Adopting artificial intelligence in healthcare: a narrative review. In: Teixeira S, Remondes J, editors. The Use of Artificial Intelligence in Digital Marketing: Competitive Strategies and Tactics. IGI Global Scientific Publishing; 2024:1-20. [CrossRef] ISBN: 9781668493243
- Rajpurkar P, Lungren MP. The current and future state of AI interpretation of medical images. N Engl J Med. May 25, 2023;388(21):1981-1990. [CrossRef] [Medline]
- Preiksaitis C, Rose C. Opportunities, challenges, and future directions of generative artificial intelligence in medical education: scoping review. JMIR Med Educ. Oct 20, 2023;9:e48785. [CrossRef] [Medline]
- Allen PM, Varga L. Complexity: the co-evolution of epistemology, axiology and ontology. Nonlinear Dynamics Psychol Life Sci. Jan 2007;11(1):19-50. [Medline]
- Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care. Feb 25, 2023;27(1):75. [CrossRef] [Medline]
- Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. May 4, 2023;6:1169595. [CrossRef] [Medline]
- Young JQ, Van Merrienboer J, Durning S, Ten Cate O. Cognitive load theory: implications for medical education: AMEE Guide No. 86. Med Teach. May 2014;36(5):371-384. [CrossRef] [Medline]
- Triola MM, Burk-Rafel J. Precision medical education. Acad Med. Jul 1, 2023;98(7):775-781. [CrossRef] [Medline]
- Narang A, Velagapudi P, Rajagopalan B, et al. A new educational framework to improve lifelong learning for cardiologists. J Am Coll Cardiol. Jan 30, 2018;71(4):454-462. [CrossRef] [Medline]
- Bays HE, Fitch A, Cuda S, et al. Artificial intelligence and obesity management: an obesity medicine association (OMA) clinical practice statement (CPS) 2023. Obes Pillars. Apr 2023;6:100065. [CrossRef] [Medline]
- MacNeill H, Masters K, Nemethy K, Correia R. Online learning in health professions education. Part 1: teaching and learning in online environments: AMEE Guide No. 161. Med Teach. Jan 2024;46(1):4-17. [CrossRef] [Medline]
- Goldie JGS. Connectivism: a knowledge learning theory for the digital age? Med Teach. Oct 2016;38(10):1064-1069. [CrossRef] [Medline]
- Chaudhry S, Dhawan S. AI-based recommendation system for social networking. Presented at: Proceedings of the International Conference on Soft Computing: Theories and Applications (SoCTA 2018); Dec 21-23, 2018. [CrossRef]
- Stamatis D, Kefalas P, Kargidis T. A multi‐agent framework to assist networked learning. J Comput Assist Learn. Sep 1999;15(3):201-210. [CrossRef]
- Eggmann N. Not plug-and-play: successful adoption of an AI-based learning experience platform. In: Ifenthaler D, Seufert S, editors. Artificial Intelligence Education in the Context of Work. Springer International Publishing; 2022:215-226. [CrossRef] ISBN: 9783031144899
- Laupichler MC, Aster A, Schirch J, Raupach T. Artificial intelligence literacy in higher and adult education: a scoping literature review. Comput Educ Artif Intell. 2022;3:100101. [CrossRef]
- Amann J, Blasimme A, Vayena E, Frey D, Madai VI, Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. Nov 30, 2020;20(1). [CrossRef] [Medline]
- Barredo Arrieta A, Díaz-Rodríguez N, Del Ser J, et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fusion. Jun 2020;58:82-115. [CrossRef]
- Dwivedi R, Dave D, Naik H, et al. Explainable AI (XAI): core ideas, techniques, and solutions. ACM Comput Surv. 2023;55(9):1-33. [CrossRef]
- McCoy LG, Brenna CTA, Chen SS, Vold K, Das S. Believing in black boxes: machine learning for healthcare does not need explainability to be evidence-based. J Clin Epidemiol. Feb 2022;142:252-257. [CrossRef] [Medline]
- Sage D, Vitry C, Dainty A. Exploring the organizational proliferation of new technologies: an affective actor‑network theory. Organ Stud. Mar 2020;41(3):345-363. [CrossRef]
- Guidotti R, Monreale A, Ruggieri S, Turini F, Giannotti F, Pedreschi D. A survey of methods for explaining black box models. ACM Comput Surv. 2018;51(5):1-42. [CrossRef]
- Murphy K, Di Ruggiero E, Upshur R, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. Feb 15, 2021;22(1):14. [CrossRef] [Medline]
- Karaca O, Çalışkan SA, Demir K. Medical artificial intelligence readiness scale for medical students (MAIRS-MS) - development, validity and reliability study. BMC Med Educ. Feb 18, 2021;21(1):112. [CrossRef] [Medline]
- Uba C, Lewandowski T, Böhmann T. The AI-based transformation of organizations: the 3D-model for guiding enterprise-wide AI change. Presented at: Proceedings of the 56th Hawaii International Conference on System Sciences; Jan 3-7, 2023. URL: https://scholarspace.manoa.hawaii.edu/items/5f90237d-7ad2-444c-9cba-66c95c0ac1e0 [Accessed 2026-01-22] [CrossRef]
- Jöhnk J, Weißert M, Wyrtki K. Ready or not, AI comes— an interview study of organizational AI readiness factors. Bus Inf Syst Eng. 2021;63:5-20. [CrossRef]
- Wiljer D, Salhia M, Dolatabadi E, et al. Accelerating the appropriate adoption of artificial intelligence in health care: protocol for a multistepped approach. JMIR Res Protoc. Oct 6, 2021;10(10):e30940. [CrossRef] [Medline]
- Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res. Jun 19, 2020;22(6):e15154. [CrossRef] [Medline]
- Appelganc K, Rieger T, Roesler E, Manzey D. How much reliability is enough? A context‑specific view on human interaction with (artificial) agents from different perspectives. J Cogn Eng Decis Mak. 2022;16(4):207-221. [CrossRef]
- Begun JW, Zimmerman B, Dooley K. Health care organizations as complex adaptive systems. In: Mick SM, Wyttenbach M, editors. Advances in Health Care Organization Theory. Jossey-Bass; 2003:253-288. URL: https://citeseerx.ist.psu.edu/document?doi=1f1da1cc58471e929b11090811479441057fff6f&repid=rep1&type=pdf [Accessed 2026-01-22] ISBN: 9780787957643
- Borghi J, Ismail S, Hollway J, et al. Viewing the global health system as a complex adaptive system - implications for research and practice. F1000Res. Oct 7, 2022;11:1147. [CrossRef] [Medline]
- Gibson D, Kovanovic V, Ifenthaler D, Dexter S, Feng S. Learning theories for artificial intelligence promoting learning processes. Br J Educ Technol. 2023;54(5):1125-1146. [CrossRef]
- Mukhalalati BA, Taylor A. Adult learning theories in context: a quick guide for healthcare professional educators. J Med Educ Curric Dev. Apr 10, 2019;6:2382120519840332. [CrossRef] [Medline]
- Cresswell KM, Worth A, Sheikh A. Actor-Network Theory and its role in understanding the implementation of information technology developments in healthcare. BMC Med Inform Decis Mak. Nov 1, 2010;10(1):67. [CrossRef] [Medline]
- Pourbohloul B, Kieny MP. Complex systems analysis: towards holistic approaches to health systems planning and policy. Bull World Health Organ. Apr 1, 2011;89(4):242-242. [CrossRef] [Medline]
- Sober E. The principle of parsimony. Br J Philos Sci. Jun 1, 1981;32(2):145-156. [CrossRef]
- Rashidi S. Your AI survival guide: scraped knees, bruised elbows, and lessons learned from real-world AI deployments. John Wiley & Sons; 2024. ISBN: 978-1-394-27263-1
- Yang H, Dai T, Mathioudakis N, Knight AM, Nakayasu Y, Wolf RM. Peer perceptions of clinicians using generative AI in medical decision-making. NPJ Digit Med. Aug 18, 2025;8(1):530. [CrossRef] [Medline]
- Betty platform. URL: https://www.asahq.org/beaconbotpublic [Accessed 2026-01-23]
- American Society of Anesthesiologists announces first-of-its-kind medical specialty society collaboration with Epic to create anesthesia community registry. American Society of Anesthesiologists (ASA). May 2025. URL: https://www.asahq.org/about-asa/newsroom/news-releases/2025/05/asa-announces-medical-specialty-society-collaboration-with-epic-to-create-anesthesia-community-registry [Accessed 2026-01-22]
- Pusic MV, Birnbaum RJ, Thoma B, et al. Frameworks for integrating learning analytics with the electronic health record. J Contin Educ Health Prof. Jan 1, 2023;43(1):52-59. [CrossRef] [Medline]
- Guidance on the responsible use of artificial intelligence (AI) in accredited continuing education (CE). Accreditation Council for Continuing Medical Education (ACCME); 2025. URL: https://accme.org/wp-content/uploads/2025/12/1098_20251125_Guidance_on_Artificial_Intelligence_in_Accredited_CE_ACCME.pdf [Accessed 2026-01-22]
- Product operating model: the healthcare IT approach that can unleash transformational value. Chartis. 2023. URL: https://www.chartis.com/insights/product-operating-model-healthcare-it-approach-can-unleash-transformational-value [Accessed 2026-01-22]
- Cagan M, et al. Transformed: Moving to the Product Operating Model. 1st ed. Silicon Valley Product Group; 2024. URL: https://www.wiley.com/en-us/Transformed%3A+Moving+to+the+Product+Operating+Model-p-9781119697398 ISBN: 978-1-119-69739-8
Abbreviations
| AI: artificial intelligence |
| ALEERRT-CA: AI Literacy, Explainability, Ethics, Readiness, Reliability, and learning Theories-Complexity theory and Actor-network theory |
| ANT: Actor-Network Theory |
| ASA: American Society of Anesthesiologists |
| CAS: complex adaptive system |
| CLT: Cognitive Load Theory |
| CPD: continuing professional development |
| CT: Complexity Theory |
Edited by Blake Lesselroth; submitted 23.Nov.2024; peer-reviewed by Aviad Raz, Jesu Marcus Immanuvel Arockiasamy, Kamel Mouloudj, Pei-Hung Liao; final revised version received 23.Dec.2025; accepted 06.Jan.2026; published 12.Feb.2026.
Copyright© Vjekoslav Hlede, Sofia Valanci, G Robert D'Antuono, Heather Dow, Ronan O'Beirne, Richard Wiggins. Originally published in JMIR Medical Education (https://mededu.jmir.org), 12.Feb.2026.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.

