Review
Abstract
Background: Artificial intelligence (AI) is increasingly integrated into health care, including psychiatry and psychology. In educational contexts, AI offers new possibilities for enhancing clinical reasoning, personalizing content delivery, and supporting professional development. Despite this emerging interest, a comprehensive understanding of how AI is currently used in mental health education, and the challenges associated with its adoption, remains limited.
Objective: This scoping review aimed to identify and characterize current applications of AI in the teaching and learning of psychiatry and psychology. It also sought to document reported facilitators of and barriers to the integration of AI within educational contexts.
Methods: A systematic search was conducted across 6 electronic databases (MEDLINE, PubMed, Embase, PsycINFO, EBM Reviews, and Google Scholar) from inception to October 2024. The review followed Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Studies were included if they focused on psychiatry or psychology, described the use of an AI tool, and discussed at least 1 facilitator of or barrier to its use in education. Data were extracted on study characteristics, population, AI application, educational outcomes, facilitators, and barriers. Study quality was appraised using several design-appropriate tools.
Results: From 6219 records, 10 (0.2%) studies met the inclusion criteria. Eight categories of AI applications were identified: clinical decision support, educational content creation and enhancement, therapeutic tools and mental health monitoring, administrative and research assistance, natural language processing (NLP) applications, program/policy development, student/applicant support, and professional development and assessment. Key facilitators included the availability of AI tools, positive learner attitudes, digital infrastructure, and time-saving features. Barriers included limited AI training, ethical concerns, lack of digital literacy, algorithmic opacity, and insufficient curricular integration. The overall methodological quality of included studies was moderate to high.
Conclusions: AI is being used across a range of educational functions in psychiatry and psychology, from clinical training to assessment and administrative support. Although the potential for enhancing learning outcomes is clear, its successful integration requires addressing ethical, technical, and pedagogical barriers. Future efforts should focus on AI literacy, faculty development, and institutional policies to guide responsible and effective use. This review underscores the importance of interdisciplinary collaboration to ensure the safe, equitable, and meaningful adoption of AI in mental health education.
doi:10.2196/75238
Keywords
Introduction
Cloud computing, artificial intelligence (AI), machine learning (ML), telehealth, digitally assisted diagnosis and treatment, and consumer-focused mobile health applications have reshaped the landscape of health care delivery. These technologies are now widely used in clinical care, scientific research, and self-management []. These developments offer the potential to improve treatment outcomes, promote greater patient engagement, and enable earlier diagnosis and intervention []. In addition to enhancing conventional clinical procedures, such as teleconsultation and patient record management, these advancements have paved the way for new, data-driven methods of diagnosing and treating patients with a wide range of illnesses []. Particularly in the fields of psychology and psychiatry, new research highlights the expanding impact of AI, online learning environments, and e-therapies, all of which could potentially reshape clinical practice, educational routes, and health policy [,]. Because they provide new avenues for psychiatric condition screening, diagnosis, and monitoring, AI and ML have attracted considerable attention in the field of mental health care []. Self-guided mental health apps and web-based cognitive behavioral therapy (CBT) modules are examples of digital interventions that can help address systemic gaps in mental health care by improving access in underserved areas, reducing stigma, and lowering infrastructure costs. Still, limitations, such as a weakened therapeutic alliance, limited support for complex conditions, and digital literacy barriers, persist, highlighting the need for complementary innovations, such as AI []. The goal of ML, a branch of AI, is to enable computers to learn from data and make predictions or judgments by using statistical models and algorithms []. Potential applications of ML include predicting the effectiveness of antidepressant medication, characterizing depression, estimating the risk of suicide, and predicting psychotic episodes in people with schizophrenia [-]. Beyond individual diagnosis and prognosis, ML has also been explored for system-level uses, such as optimizing service triage, forecasting population mental health trends, and informing resource allocation strategies []. Although promising, these uses raise critical concerns about the limits of algorithmic decision-making in capturing complex human experiences, therapeutic nuance, and clinical judgment.
Any gain in accuracy or efficiency must be balanced against possible drawbacks, including algorithmic bias, concerns about data privacy, and the requirement for transparent systems to maintain patient confidence []. Although chatbots and other automated therapy tools are promoted as affordable ways to deliver mental health support, concerns remain about their ability to provide compassionate care and uphold therapeutic boundaries []. Therefore, the use of AI-based technologies (eg, chatbots, virtual assistants, or automated therapy platforms) necessitates careful evaluation of any potential negative effects, such as algorithmic bias, transparency issues, patient confidentiality, and wider ethical issues [,]. These tools are already transforming the patient-clinician relationship and raise complex questions about how therapeutic alliances can be established and maintained in digital environments []. The traditional roles and responsibilities of health care providers may change with the advent of automated evaluations and AI-driven therapy recommendations. Indeed, clinicians will increasingly have to weigh algorithmic outputs against their professional judgment [].
These factors highlight the need for psychologists, psychiatrists, and other medical practitioners to gain a thorough understanding of the ethical and therapeutic aspects of emerging technology []. Numerous experts stress that substantial training for aspiring professionals is necessary for a meaningful integration of AI into mental health treatment, especially when it comes to e-therapy techniques or semiautomated diagnostics [,]. Beyond initial licensure, lifelong learning could be seen as nonnegotiable if mental health practitioners are to stay knowledgeable about the rapidly changing field of e-therapies and digital health solutions []. This need aligns with constructivist and experiential learning theories, which emphasize that learners build knowledge most effectively through active engagement and real-world application, both of which are necessary for the safe and ethical use of AI in clinical practice. The requirements for certification and continuing education programs, which may include specific workshops, learning modules centered on digital proficiency, or periodic re-examinations, reflect these needs []. Education researchers warn that lectures or brief seminars alone will not be sufficient to close the knowledge gap; clinical and research experiences that enable practitioners to assess, adjust, and use AI technologies with confidence in a safe and ethical way are also necessary [].
The purpose of this scoping review was to determine the types of AI that are currently used in academic programs and educational curricula in psychology and psychiatry. Furthermore, it attempted to identify the barriers to and facilitators of such uses. By examining these varied applications, this review aimed to synthesize existing uses and highlight areas where further development or inquiry may be warranted. These included not only content delivery and assessment but also more challenging domains, such as teaching clinical judgment, fostering empathy, assessing nuanced interpersonal interactions, and supporting therapeutic decision-making, all areas that are often context dependent and difficult to replicate algorithmically. Finally, suggestions were offered for future research directions to support the responsible and effective incorporation of AI into mental health education.
Methods
Search Strategies
A broad scoping search was systematically performed to retrieve the recent literature from multiple electronic databases, including Medline, PubMed, Embase, PsycNet (PsycINFO), EBM Reviews – Cochrane Database of Systematic Reviews, and Google Scholar, covering records from database inception through October 2024. The review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines []. The search strategy integrated both free-text keywords and controlled vocabulary (Medical Subject Headings [MeSH] terms), centering on themes related to AI (eg, “artificial intelligence,” “AI”) and medical education (eg, “education,” “medical,” “students”), in accordance with the study’s aims. Full search strategies are detailed in . The search strategies were designed by an experienced librarian in the field of mental health (author MD). They were also cross-validated by another librarian using the Peer Review of Electronic Search Strategies (PRESS) approach. The methodology was designed by the corresponding author; searches were conducted by author AH and independently verified by author JP. No filters were applied concerning geographical location or institutional setting. The completed PRISMA-ScR checklist is available in .
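To make the approach concrete, the sketch below shows how a combined free-text and MeSH query of the kind described above could be run against PubMed using Biopython's Entrez utilities. The query string is a simplified illustration assembled from the example terms in this section, not the authors' validated strategy, which is provided in the multimedia appendix.

```python
# Illustrative only: a hedged sketch of a combined free-text + MeSH search
# against PubMed via Biopython's Entrez utilities. The query is a simplified
# stand-in for the authors' full strategy.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

# Free-text keywords combined with controlled vocabulary (MeSH terms)
query = (
    '("artificial intelligence"[MeSH Terms] OR "artificial intelligence"[tiab] OR "AI"[tiab]) '
    'AND ("education, medical"[MeSH Terms] OR "education"[tiab] OR "students"[tiab]) '
    'AND ("psychiatry"[tiab] OR "psychology"[tiab])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"Records found: {record['Count']}")
print(record["IdList"][:10])  # first 10 PubMed IDs
```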
Study Eligibility
Studies were eligible for inclusion if they met the following criteria: (1) study focused on a topic within the fields of psychiatry or psychology education; (2) involved the use of an AI tool, model, or approach; (3) included a discussion of facilitators of or barriers to the use of AI; and (4) were available in either English or French. Papers that did not pertain to psychiatry or psychology or that referenced AI technologies without a clearly defined implementation or application were excluded. In addition, studies were excluded if they featured AI tools outside the domain of relevance, such as rule-based expert systems or search algorithms unrelated to data-driven models. Unpublished studies and gray literature were not considered for inclusion.
Data Extraction
A standardized data extraction form was developed to systematically collect and organize relevant information from each included study. Data extraction was performed independently by at least 2 of the authors, with discrepancies resolved through discussion and consensus or by consulting a third author, when necessary. The process was guided by the research objectives for examining the integration of AI in health profession education.
The following elements were extracted from each study:
- Authors: citation details, including the first author and year of publication.
- Population: description of the study participants, including the sample size, educational level (eg, undergraduate, postgraduate, continuing professional development), and discipline (eg, psychiatry, psychology, medical education).
- Use of AI: specific application or role of AI within the educational context, such as adaptive learning platforms, natural language processing (NLP) tools, intelligent tutoring systems, virtual patient simulations, or predictive analytics.
- Main outcomes: primary findings related to educational effectiveness, learner satisfaction, knowledge acquisition, clinical reasoning, engagement, or other measured outcomes. Where applicable, outcomes were categorized by study design (qualitative, quantitative, or mixed methods).
- Facilitators: factors that supported the successful implementation or perceived value of AI-enhanced educational interventions, such as user-friendliness, institutional support, personalization of learning, or alignment with curricular goals.
- Barriers: reported challenges or limitations in the adoption of AI tools, including technical constraints, ethical concerns, paucity of digital literacy among users, limited evidence of effectiveness, or resistance to change within educational environments.
Extracted data were tabulated in Microsoft Excel (version 17.0) to allow comparison across studies and to identify patterns, gaps, and emerging themes related to the integration of AI in teaching and learning within the fields of psychiatry and psychology.
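As a minimal sketch, with field names chosen for illustration rather than taken from the authors' actual form, one row of the extraction table could be represented as follows before tabulation in Excel.

```python
# Hypothetical representation of a single extraction record; field names
# mirror the elements listed above but are illustrative, not the authors'
# exact form.
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    authors: str               # first author and year, e.g., "Hudon et al (2024)"
    population: str            # sample size, educational level, discipline
    ai_use: str                # specific AI application in the educational context
    main_outcomes: str         # primary findings, categorized by study design
    facilitators: list[str] = field(default_factory=list)
    barriers: list[str] = field(default_factory=list)

example = ExtractionRecord(
    authors="Hudon et al (2024)",
    population="Undergraduate medical students, psychiatry",
    ai_use="ChatGPT-generated script concordance tests",
    main_outcomes="AI-generated SCTs comparable to expert-written items",
    facilitators=["tool availability", "time savings"],
    barriers=["need for expert oversight"],
)
```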
Quality Assessment
To examine the included studies’ methodology, clarity, and transparency, we carried out a systematic evaluation. Considering the variety of research designs seen in the literature on AI in psychiatry and psychology education, this phase aimed to improve the interpretability of the findings. We used appraisal instruments matched to each study design to ensure methodological adequacy.
The Joanna Briggs Institute (JBI) Checklist for Analytical Cross-Sectional Studies was used to assess both quantitative and observational research []. The methodological soundness of studies looking at correlations between exposures and outcomes at a specific moment in time can be evaluated with this tool. Items on the checklist evaluate the appropriateness of statistical analyses, the identification and control of confounding factors, the validity and reliability of exposure and outcome measurements, and the clarity of the inclusion criteria.
The JBI Checklist for Qualitative Research was also used []. This tool assesses how well research methodology and research topics align, how data are collected, how participant voices are represented and interpreted, and how much the researcher has influenced the study. To guarantee that qualitative insights are obtained through a thorough and reliable process, it also evaluates ethical issues and the openness of data analysis techniques.
Mixed methods studies were appraised using the Mixed Methods Appraisal Tool (MMAT), 2018 version []. The MMAT is a validated tool specifically developed for the assessment of studies that combine qualitative and quantitative approaches. It allows for concurrent evaluation of the components of each methodology within a single study and includes criteria for the integration of qualitative and quantitative data, the appropriateness of the design to the research questions, and the coherence of interpretations drawn from the combined methods.
For nonempirical contributions, such as viewpoints, editorials, and conceptual papers, we used the authority, accuracy, coverage, objectivity, date, and significance (AACODS) checklist []. This framework assesses 6 key domains: authority (credibility of the author or source), accuracy (evidence supporting claims), coverage (comprehensiveness of content), objectivity (balance and absence of bias), date (currency and relevance), and significance (contribution to the field). It is particularly useful for evaluating gray literature and opinion-based texts where conventional empirical criteria may not apply.
All studies were reviewed independently by 2 researchers (JP and AH). Each appraisal tool includes a set of key domains that were rated as “yes,” “partial,” or “no” based on the extent to which the methodological criteria were clearly addressed and appropriately implemented in each study.
Results
Description of Studies
The scoping review explored the use of AI in teaching and learning within psychiatry and psychology. The initial search across 6 electronic databases yielded 6219 records. After removing 3324 (53.4%) duplicates, 2895 (46.6%) records were screened by title and abstract. Of these, 2711 (93.6%) papers were excluded for not meeting the inclusion criteria. A total of 184 (6.4%) full-text papers were sought for retrieval, with 2 (1.1%) reports not successfully retrieved. The remaining 182 (98.9%) papers were assessed for eligibility in full. Following detailed evaluation, 172 (94.5%) papers were excluded due to being outside the field of interest (n=93, 54.1%), lacking the use of AI (n=56, 32.6%), not addressing facilitators or barriers (n=19, 11%), or not being available in English or French (n=4, 2.3%). Ultimately, 10 (5.5%) studies met all inclusion criteria and were included in the final analysis. A flowchart summarizing the selection process is presented in , and details of the included studies can be found in .
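The selection flow lends itself to a quick arithmetic check; the snippet below recomputes the counts and percentages reported above from the raw figures.

```python
# Sanity check of the PRISMA-ScR selection flow reported above
# (counts taken directly from the text; percentages recomputed).
records = 6219
duplicates = 3324
screened = records - duplicates            # 2895
excluded_at_screening = 2711
sought = screened - excluded_at_screening  # 184
not_retrieved = 2
assessed = sought - not_retrieved          # 182
excluded_full_text = 172
included = assessed - excluded_full_text   # 10

assert sum([93, 56, 19, 4]) == excluded_full_text  # exclusion reasons add up
print(f"Duplicates removed: {duplicates / records:.1%}")  # 53.4%
print(f"Studies included:   {included / assessed:.1%}")   # 5.5%
```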

Main Uses of Artificial Intelligence
Across the 10 studies [-] included in this scoping review, AI was applied to a variety of educational functions in psychiatry and psychology. The most frequently observed category was clinical decision support (n=5, 50%), where AI tools were used to train learners in diagnosis, prognosis, risk assessment, and early intervention strategies. This was followed by 5 categories that each appeared in 3 (30%) studies: educational content creation and enhancement, therapeutic tools and mental health monitoring, administrative and research assistance, NLP applications, and program/policy development. These categories reflect AI’s growing role in the development of educational materials, therapeutic simulations, and institutional planning. Less frequently, studies addressed AI’s applications in professional development and assessment (n=1, 10%) and student/applicant support (n=1, 10%), highlighting emerging but less explored domains. This distribution suggests a strong current emphasis on clinical reasoning, content automation, and digital service integration in educational contexts.
Clinical Decision Support
AI tools are increasingly integrated into psychiatry and psychology education to train learners in diagnosis, prognosis, and risk assessment. Through exposure to AI-driven systems, such as ML models and NLP tools, trainees can learn how to identify suicide risk, detect patterns in substance use disorders, or evaluate the progression of neurodegenerative conditions []. Banerjee et al [] emphasized the importance of training programs that illustrate how AI can triage patients or generate diagnostic suggestions, helping doctors understand both the power and limitations of these technologies. Furthermore, students are being introduced to AI’s role in treatment personalization, such as adapting therapy plans or medication regimens using predictive modeling []. These systems also facilitate early intervention, teaching clinicians how AI can flag high-risk patients in real time, thus embedding preventive care principles into training [].
Educational Content Creation and Enhancement
Generative AI is transforming the design and delivery of educational content in psychiatry and psychology. One example is the use of ChatGPT to develop script concordance tests (SCTs), which promote clinical reasoning in undergraduate medical education. Hudon et al [] demonstrated that AI-generated SCTs are nearly indistinguishable from those written by human experts, highlighting AI’s potential to rapidly generate quality training materials aligned with psychiatric diagnostic frameworks. In addition, AI can support exam preparation, providing structured explanations, study plans, and interactive feedback for licensing exams and clinical cases []. Spallek et al [] noted that although AI tools require human oversight, they offer accessible and customizable resources that align with the best practices in mental health communication and literacy, ultimately empowering educators and enhancing learner engagement.
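As an illustration of this kind of workflow, and not of Hudon et al’s published protocol, the sketch below shows how a large language model could be prompted through the OpenAI API to draft an SCT item; the prompt wording and model name are assumptions, and any output would still require expert review before classroom use.

```python
# Hedged sketch: prompting a large language model to draft one SCT item.
# Prompt text and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Write one script concordance test item for undergraduate psychiatry. "
    "Include: a brief clinical vignette, a diagnostic hypothesis, a new piece "
    "of information, and a 5-point Likert scale from -2 (strongly weakens the "
    "hypothesis) to +2 (strongly supports it)."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You are a psychiatry educator writing assessment items."},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)  # draft item; expert review still required
```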
Student/Applicant Support
AI can now be used to assist learners with the residency and fellowship application process, offering new educational opportunities in self-presentation and professional writing. Mangold and Ream [] explored how students and faculty use AI to draft and refine personal statements and letters of recommendation, improving clarity and grammar and reducing biased language. From a pedagogical perspective, this practice teaches students to reflect critically on AI-generated outputs and engage in iterative editing processes. Moreover, AI is being proposed for application screening, potentially reducing human bias and increasing efficiency in admissions, which prompts institutions to educate both applicants and reviewers on ethical and equitable AI use []. These developments signal a shift in how learners engage with professional identity formation and how educators must adapt guidance accordingly.
Therapeutic Tools and Mental Health Monitoring
A body of literature highlighted AI’s use in therapeutic education, particularly around digital interventions, such as CBT- or acceptance and commitment therapy (ACT)-based apps and chatbots. Blease et al [] and Gratzer and Goldbloom [] emphasized the need to train future psychiatrists to evaluate and potentially integrate these tools into treatment plans. E-therapy technologies, including AI-powered chatbots, provide psychoeducation on demand and demonstrate how therapy can be delivered asynchronously []. Trainees also learn about real-time symptom tracking, which is increasingly used in digital platforms to monitor patient well-being and guide interventions. These digital tools expose students to new care modalities, challenging them to assess efficacy, ethics, and clinical utility in both individual and population-based care models [].
Administrative and Research Assistance
AI is also reshaping how clinicians and students interact with administrative and scholarly tasks, including documentation, summarization, and literature reviews. Banerjee et al [] observed that AI is viewed as helpful in reducing the time spent on clinical documentation, allowing more focus on direct learning and patient care. Additionally, AI tools are being used in academic psychiatry to assist with automated literature searches, summarization, and even initial drafting of manuscripts, offering educational insights into how information is synthesized and presented in research []. These uses foster critical thinking and teach students to evaluate AI-generated content, thereby strengthening their roles as both users and producers of scientific knowledge.
Professional Development and Assessment
In postgraduate and continuing education contexts, AI supports competency-based assessment and professional development. Anzia [] discussed how longitudinal assessment platforms, potentially enhanced by AI, are replacing traditional high-stakes exams in psychiatry, promoting lifelong learning and more reflective practice. These tools can track learning progress, provide feedback, and recommend tailored educational pathways. Moreover, AI may be integrated into clinical skills evaluations, simulating patient scenarios or providing automated assessments of diagnostic reasoning. This shift calls for educators to incorporate digital literacy and AI fluency into curricula, ensuring that learners are prepared to navigate evolving certification and assessment landscapes [].
Natural Language Processing Applications
NLP tools powered by AI are being introduced in educational settings to illustrate how clinical language can be analyzed, interpreted, and used for real-time support. Banerjee et al [] noted NLP’s utility in automating clinical documentation, reducing the clerical burden and highlighting the importance of clear, structured input. In psychiatry training, NLP is also applied to suicide risk detection, teaching learners how digital footprints and language cues from online interactions can signal crisis []. Additionally, NLP technologies are integrated into telehealth platforms, enhancing communication between patients and providers, and offering opportunities for students to learn how AI can support culturally sensitive, evidence-based interactions [].
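For teaching purposes, the kind of pipeline described above can be demonstrated with a deliberately simple toy: a bag-of-words classifier over short text snippets. The examples below are synthetic placeholders; a real suicide-risk model would demand clinically validated data, rigorous evaluation, and ethical oversight.

```python
# Toy illustration only: a bag-of-words text classifier of the kind used
# to teach how language cues can be modeled. Training examples are
# synthetic placeholders, not clinical data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I feel hopeless and see no way out",       # synthetic example, flagged
    "Looking forward to seeing friends today",  # synthetic example, not flagged
] * 10  # repeated only so the toy model has something to fit
labels = [1, 0] * 10

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict_proba(["nothing matters anymore"])[:, 1])  # P(flagged)
```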
Program/Policy Development
Finally, AI’s influence on institutional teaching structures is growing, particularly through the design of AI-compatible teaching modules and the creation of policies to guide AI use. Manjunatha et al [] illustrated how telepsychiatry programs in India are incorporating AI into remote learning for primary care doctors, serving as a scalable model for underserved settings. These modules reflect a shift toward digital curricula that embed clinical translation and evidence-based AI use. Similarly, Mangold and Ream [] emphasized the need for training programs to define guidelines for AI use in admissions and evaluation, prompting educators to prepare learners for ethical dilemmas and policy engagement in the evolving digital landscape.
Facilitators
A number of facilitators support the integration of AI into teaching and learning in psychiatry and psychology.
Technological Readiness and Tool Availability
A recurring theme across the 10 studies was the availability and accessibility of AI tools, particularly large language models, such as ChatGPT, which simplify the creation of educational content and customization of learning materials for diverse learner needs [,]. These tools are perceived as valuable due to their ability to generate structured and accessible outputs, supporting educators in preparing mental health scenarios or assessments rapidly and effectively. Moreover, other technologies, such as telepsychiatry and mobile-based learning platforms, are supported by the widespread use of smartphones and digital infrastructure, increasing scalability and access to remote or underserved areas [].
Educational and Efficiency Enhancement
Several studies have emphasized that structured prompting or prompt engineering significantly enhances output quality, improving both relevance and accuracy for educational use []. In clinical training contexts, AI is seen as a time-saving facilitator, capable of reducing administrative burdens, such as documentation, and allowing learners to focus on clinical reasoning and decision-making []. In addition, generative AI technologies and related models can facilitate the production of highly realistic synthetic data and the seamless integration of unstructured content across diverse formats []. These innovations have the potential to transform core practices, such as risk assessment, diagnostic decision-making, and treatment planning, while simultaneously creating new opportunities in educational and training environments.
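As a small illustration of what structured prompting means in practice, consider the contrast below between a naive prompt and a structured one; both strings are hypothetical and not drawn from the included studies.

```python
# Hypothetical prompts contrasting naive and structured prompting for an
# educational task; neither is taken from the included studies.
naive_prompt = "Explain depression."

structured_prompt = """Role: You are a psychiatry educator.
Task: Explain major depressive disorder to a third-year medical student.
Constraints:
- Use DSM-5 criteria as the organizing framework.
- Limit the answer to 300 words.
- End with two self-test questions.
Format: headings, then bullet points."""
```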
Learner Engagement and Openness
Finally, positive attitudes from students and trainees, particularly their willingness to explore new tools, and their recognition of AI’s role in increasing efficiency, accuracy, and engagement, are essential to AI adoption in educational environments []. These facilitators reflect a confluence of technological readiness, user engagement, and curricular flexibility.
Barriers
Despite growing enthusiasm, several barriers hinder the seamless integration of AI into psychiatry and psychology education.
Digital and Educational Gaps
A dominant concern is the absence of formal training and digital literacy among students and educators, which limits their ability to interpret and critically evaluate AI-generated outputs [,]. Many studies have noted a limited presence of AI content in medical curricula, resulting in missed opportunities to prepare learners for evolving clinical environments where AI plays a central role [].
Ethical and Legal Issues
There are also ethical and legal concerns, particularly around data privacy, algorithmic bias, and informed consent, which raise questions about the responsible use of AI tools, such as chatbots or diagnostic aids [,]. The scarcity of high-quality data and the opacity of AI algorithms, often referred to as the “black box” problem, are also cited as major obstacles to trust and widespread adoption []. Additionally, concerns persist about the accuracy and reliability of AI-generated educational materials, particularly when outputs are not subject to expert review, posing risks of misinformation or oversimplification [,].
Pedagogical Limitations
Finally, several papers highlighted that AI outputs can lack nuance, empathy, or personalization, making them less suitable for teaching relational and humanistic aspects of psychiatric care [].
These barriers highlight the need for comprehensive strategies that include AI literacy, ethical guidance, and faculty support to ensure safe and effective integration into educational practice.
Quality Assessment of the Identified Studies
Overall, the methodological quality of the included literature was moderate to high. Mixed methods and empirical studies demonstrated clear objectives, coherent data collection and analysis strategies, and ethical transparency. However, several studies lacked detailed descriptions of sampling procedures or integration of data types. Nonempirical papers were generally strong in authority and relevance but limited by the absence of primary data or systematic methodology. Despite variability in design and depth, most studies provided valuable insights into the educational applications of AI in psychiatry and psychology. The full quality appraisal is presented in .
Discussion
Principal Findings
This scoping review aimed to identify the different ways AI is currently used in the teaching and learning of psychiatry and psychology. A total of 10 studies were fully analyzed, and 8 categories of AI applications were identified: clinical decision support, educational content creation and enhancement, therapeutic tools and mental health monitoring, administrative and research assistance, NLP applications, program and policy development, student/applicant support, and professional development and assessment. These categories reflect the diverse roles AI plays in shaping educational strategies, curricular design, and learner engagement in mental health training. The included studies were overall of moderate-to-high quality. The most notable facilitators of AI integration in teaching and learning in psychiatry and psychology are technological readiness and tool availability, educational and efficiency enhancement, and learner engagement and openness. The barriers that hinder the integration of AI into psychiatry and psychology education are digital and educational gaps, ethical and legal issues, and pedagogical limitations.
Comparison With Prior Work
The findings of this scoping review confirm and expand on other studies demonstrating the growing integration of AI into psychological and psychiatric education, especially through clinical decision support technologies. Previous research has shown, for example, that ML models can help forecast the risk of depression, schizophrenia, and suicide. Our study also noted that similar predictive technologies are already being used in educational contexts [,]. This is consistent with the findings of Rajkomar et al [], who observed that exposure to clinical AI tools helps health care trainees develop important data literacy. Our work showcases the potential of AI in supporting students as they develop their critical reasoning. However, it is important to keep in mind that these critical reasoning skills are still mostly shaped over time through clinical exposure and case discussion, neither of which can be replaced by algorithms or AI. The multidisciplinary team also holds a significant place in clinical management and risk sharing: AI can give students theoretical advice on how to lead a team, but it cannot teach a student how to lead a team in real time. Although previous research has often emphasized clinical outcomes, we focused here on the learning process itself, with the understanding that AI is only one of many educational tools.
The literature on AI also supports the increasing use of generative AI in the production of instructional materials. Whereas most earlier applications focused on patient-facing educational interventions (eg, mental health apps), recent research has shifted toward examining how tools such as ChatGPT, GPT-4, and Claude might scaffold learning in medical education []. These studies support our findings by demonstrating that AI is capable of creating high-quality test questions, simulating clinical situations, and even co-creating course curricula. However, critical gaps still exist, notably the possibility that students will passively accept AI outputs without engaging sufficiently. In fact, some of these critical knowledge gaps might be due to the current state of research, which remains limited when it comes to assessing how heavy reliance on dialogue-based AI may affect decision-making, critical thinking, and analytical reasoning in both educational and research contexts []. Meskó [] shared this concern and called for clear instruction in AI prompt engineering, appropriate tutorials for medical professionals, and verification skills. The findings of this scoping review highlight that such competencies are increasingly essential in psychiatry and psychology, where nuance and context matter deeply.
Consistent with previous research, several studies identified mental health apps and therapeutic chatbots as an important area of AI integration in training [,]. According to these studies, chatbot-based psychotherapy and psychoeducational tools are beneficial teaching tools, in addition to being effective for patients. When included in clinical simulations, they give students an opportunity to analyze intervention results, gauge therapeutic tone, and practice ethical decision-making. Our results highlight the need for directed education to ensure that students acquire skills in digital empathy, data protection, and cultural adaptation, even when these tools show promise. Training programs need to change in ways that assist clinicians in interpreting AI-driven outputs while upholding person-centered treatment and therapeutic alliances, as D’Alfonso et al [] contended.
Lastly, the findings confirm that faculty development, institutional preparedness, and ethical guidance are drivers of AI adoption, which is in line with research on digital transformation in health professions education []. Although students frequently use AI tools for convenience, educators remain worried about the loss of critical thinking, professional identity, and interpersonal skills. These difficulties point to the necessity of organized curricula, such as curricula that incorporate AI literacy into undergraduate and graduate education programs in psychology and psychiatry [].
Directions for Further Research
This scoping review revealed that, although still nascent in psychology and psychiatry education, the integration of AI is here to stay. Future research should prioritize rigorous, outcome-based studies that further evaluate the impact of the AI-enhanced educational tools described in this paper, such as diagnostic simulation, e-therapies, and AI-assisted clinical decision-making, on real-life learning performance. Other crucial areas for investigation include the management of algorithmic bias and transparency and the extent to which private data are protected. Research that captures the perspectives of students and educators could shed light on readiness, perceived barriers, and opportunities for meaningful adoption of AI.
Furthermore, given the fact that the majority of studies analyzed are from Euro-American and high-income contexts, there is a clear need for research that centers on diverse populations, including Indigenous, racialized, and culturally distinct groups. Together, these directions can help guide the development of inclusive and ethically grounded approaches to AI integration into mental health education.
Recommendations for Institutions
To ensure that psychology and psychiatry programs prepare trainees for a rapidly evolving clinical landscape, educational institutions should take proactive steps toward integrating AI literacy into core curricula. For example, digital literacy could be integrated early into medical education, with foundational topics such as ML and data ethics. This effort would benefit from interdisciplinary collaboration among health sciences, computer science, and bioethics departments to ensure a well-rounded approach.
The extent to which a person believes that a technology can enhance their performance at work (or enhance their learning experience, for that matter) is commonly referred to as “perceived usefulness,” and it plays a pivotal role in determining whether individuals are likely to adopt new technologies []. A meta-analysis by Scherer and Teo [] identified perceived usefulness as a strong predictor of a teacher’s readiness to engage with digital tools. Conversely, barriers included anxiety and a lack of AI-specific training []. Thus, to deal effectively with the growing threats posed by AI, it seems crucial to familiarize faculty with its various uses. Teachers who understand AI better are more equipped to use it in ways that meet the varied needs of their students []. In addition, when educators develop a solid understanding of AI, they are better equipped to tackle its ethical issues, such as algorithmic bias, data privacy, and the risk of becoming overly dependent on AI [].
Hence, educators require adequate training, resources, and institutional support to effectively teach and oversee the responsible use of emerging digital tools in clinical and academic settings.
How will universities effectively manage the growing threats that AI presents to medical education? Those threats, as previously mentioned, include academic dishonesty in assessments (eg, plagiarism), the spread of misinformation, and the difficulty AI-using students face in discerning some of the nuances in fields where human interaction is key, such as psychiatry and psychology. Because these threats are often multifactorial, their management requires an inclusive approach involving all key actors (students, educators, institutions) []. In practice, at the university level, an interesting avenue would be the creation of a task force (or committee) comprising students, faculty members, AI scholars, and IT personnel. This task force would determine permissible versus prohibited uses of AI and how to detect the latter. The detection of AI-generated content can be facilitated by applications such as GPTZero and QuillBot, for example. Higher education institutions (HEI) should not only establish such thorough guidelines but also review them periodically to ensure their continued relevance []. Bozkurt [] suggested that HEI also require transparent disclosure of AI usage by their personnel and students (eg, in a given assessment, students would be expected to explicitly declare the sections drafted by ChatGPT with human oversight). HEI could also create new AI-focused positions or hire professionals specifically trained to identify and sanction AI-related academic misconduct. These professionals would ideally also have some experience with the efficient implementation of AI in an academic context.
Strengths and Limitations
The authors brought diverse disciplinary perspectives to this review, including training in psychology, psychiatry, medical education, and digital health. These backgrounds informed the framing of the review and the interpretation of its findings.
This scoping review also has a few limitations. First, although a comprehensive search strategy was used across multiple databases, it is possible that relevant studies were missed, particularly those published in nonindexed journals or categorized under broader terms not captured by the search keywords. Second, the authors acknowledge the potential for disciplinary bias rooted in Western academic systems. An effort was made to include literature from a range of geographic regions and institutional contexts; however, due to the limited published research available on the subject, the review was restricted to studies published in English or French, potentially excluding valuable insights from literature in other languages. In addition, the majority of published research available in English originated from Euro-American or high-income settings, which may limit the generalizability of the review’s conclusions. Third, due to the heterogeneity of study designs and the inclusion of both empirical and conceptual papers, no formal meta-analysis was conducted, and the synthesis remained primarily narrative. In addition, although efforts were made to assess study quality using validated tools, some included papers, particularly perspectives and editorials, lacked sufficient methodological detail, which may limit the generalizability of their conclusions. Finally, because the field of AI in mental health education is rapidly evolving, newer studies and technologies may have emerged since the time of data collection, warranting future updates to this review.
Conclusion
To conclude, this scoping review provided an overview of how AI is being integrated into the teaching and learning of psychiatry and psychology. By analyzing 10 studies, 8 distinct categories of AI uses were identified, ranging from clinical decision support and educational content generation to digital therapeutic tools and policy development. These findings highlight the emerging role of AI not only as a clinical adjunct but also as a transformative educational tool that can support adaptive learning, promote efficiency, and broaden access to mental health training. Although the overall quality of the studies included was moderate to high, important challenges remain, particularly related to ethical considerations, digital literacy, and institutional readiness. As AI technologies continue to evolve, future research and curriculum development efforts should focus on promoting safe, equitable, and pedagogically sound integration of AI in mental health education. Equipping educators and students will require the integration of AI literacy into core curricula, embedding foundational topics such as ML and data ethics and fostering interdisciplinary collaboration between various departments. Faculty personnel development, institutional preparedness, and student involvement are essential drivers for successful AI adoption. Future research should prioritize rigorous, outcome-based studies and capture diverse perspectives in order to guide the development of inclusive and effective AI integration strategies in mental health education. This review underscores the need for ongoing interdisciplinary collaboration between educators, clinicians, technologists, and policymakers to ensure that future practitioners are well equipped to engage with AI in meaningful and responsible ways.
Acknowledgments
This study was funded indirectly by La Fondation de l’Institut universitaire en santé mentale de Montréal and by operating funds from IVADO for AH.
Authors' Contributions
AH and JP were involved in conceptualization. Data curation, writing—original draft, and writing—review and editing were performed by all the authors; formal analysis by AH and JP; funding acquisition, investigation, project administration, resources, and supervision by AH; methodology by AH and MD; and validation by AH and JP.
Conflicts of Interest
None declared.
Electronic search strategy for the scoping review conducted.
DOCX File, 17 KB

PRISMA-ScR checklist.
DOCX File, 85 KB

Scoping review study selection detailed results and quality assessment.
DOCX File, 35 KB

References
- Abernethy A, Adams L, Barrett M, Bechtel C, Brennan P, Butte A, et al. The promise of digital health: then, now, and the future. NAM Perspect. 2022:1-24. [FREE Full text] [CrossRef] [Medline]
- Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. Jan 2019;25(1):44-56. [FREE Full text] [CrossRef] [Medline]
- Lee EE, Torous J, De Choudhury M, Depp CA, Graham SA, Kim H, et al. Artificial intelligence for mental health care: clinical applications, barriers, facilitators, and artificial wisdom. Biol Psychiatry Cogn Neurosci Neuroimaging. Sep 2021;6(9):856-864. [FREE Full text] [CrossRef] [Medline]
- Thakkar A, Gupta A, De Sousa A. Artificial intelligence in positive mental health: a narrative review. Front Digit Health. Mar 18, 2024;6:1280235. [FREE Full text] [CrossRef] [Medline]
- Gan DZQ, McGillivray L, Han J, Christensen H, Torok M. Effect of engagement with digital interventions on mental health outcomes: a systematic review and meta-analysis. Front Digit Health. 2021;3:764079. [FREE Full text] [CrossRef] [Medline]
- Hudon A, Beaudoin M, Phraxayavong K, Potvin S, Dumais A. Exploring the intersection of Schizophrenia, machine learning, and genomics: scoping review. JMIR Bioinform Biotechnol. Nov 15, 2024;5:e62752. [FREE Full text] [CrossRef] [Medline]
- Chekroud AM, Zotti RJ, Shehzad Z, Gueorguieva R, Johnson MK, Trivedi MH, et al. Cross-trial prediction of treatment outcome in depression: a machine learning approach. Lancet Psychiatry. Mar 2016;3(3):243-250. [CrossRef] [Medline]
- Schnyer DM, Clasen PC, Gonzalez C, Beevers CG. Evaluating the diagnostic utility of applying a machine learning algorithm to diffusion tensor MRI measures in individuals with major depressive disorder. Psychiatry Res Neuroimaging. Jun 30, 2017;264:1-9. [FREE Full text] [CrossRef] [Medline]
- Reece AG, Danforth CM. Instagram photos reveal predictive markers of depression. EPJ Data Sci. Aug 8, 2017;6(1):1-12. [FREE Full text] [CrossRef]
- Wager TD, Woo CW. Imaging biomarkers and biotypes for depression. Nat Med. Jan 06, 2017;23(1):16-17. [CrossRef] [Medline]
- Walsh CG, Ribeiro JD, Franklin JC. Predicting risk of suicide attempts over time through machine learning. Clin Psychol Sci. Apr 11, 2017;5(3):457-469. [FREE Full text] [CrossRef]
- Franklin JC, Ribeiro JD, Fox KR, Bentley KH, Kleiman EM, Huang X, et al. Risk factors for suicidal thoughts and behaviors: a meta-analysis of 50 years of research. Psychol Bull. Feb 2017;143(2):187-232. [CrossRef] [Medline]
- Just MA, Pan L, Cherkassky VL, McMakin DL, Cha C, Nock MK, et al. Machine learning of neural representations of suicide and emotion concepts identifies suicidal youth. Nat Hum Behav. Oct 30, 2017;1(12):911-919. [FREE Full text] [CrossRef] [Medline]
- Chung Y, Addington J, Bearden CE, Cadenhead K, Cornblatt B, Mathalon DH, et al. North American Prodrome Longitudinal Study (NAPLS) Consortium and the Pediatric Imaging, Neurocognition, and Genetics (PING) Study Consortium. Use of machine learning to determine deviance in neuroanatomical maturity associated with future psychosis in youths at clinically high risk. JAMA Psychiatry. Sep 01, 2018;75(9):960-968. [FREE Full text] [CrossRef] [Medline]
- Fiske A, Henningsen P, Buyx A. Your robot therapist will see you now: ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. J Med Internet Res. May 09, 2019;21(5):e13216. [FREE Full text] [CrossRef] [Medline]
- Chin H, Song H, Baek G, Shin M, Jung C, Cha M, et al. The potential of chatbots for emotional support and promoting mental well-being in different cultures: mixed methods study. J Med Internet Res. Oct 20, 2023;25:e51712. [FREE Full text] [CrossRef] [Medline]
- Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: a narrative review. Heliyon. Feb 29, 2024;10(4):e26297. [FREE Full text] [CrossRef] [Medline]
- Malouin-Lachance A, Capolupo J, Laplante C, Hudon A. Does the digital therapeutic alliance exist? Integrative review. JMIR Ment Health. Feb 07, 2025;12:e69294. [FREE Full text] [CrossRef] [Medline]
- Oudin A, Maatoug R, Bourla A, Ferreri F, Bonnot O, Millet B, et al. Digital phenotyping: data-driven psychiatry to redefine mental health. J Med Internet Res. Oct 04, 2023;25:e44502. [FREE Full text] [CrossRef] [Medline]
- Gutierrez G, Stephenson C, Eadie J, Asadpour K, Alavi N. Examining the role of AI technology in online mental healthcare: opportunities, challenges, and implications, a mixed-methods review. Front Psychiatry. May 7, 2024;15:1356773. [FREE Full text] [CrossRef] [Medline]
- Yeo G, Reich SM, Liaw NA, Chia EYM. The effect of digital mental health literacy interventions on mental health: systematic review and meta-analysis. J Med Internet Res. Feb 29, 2024;26:e51268. [FREE Full text] [CrossRef] [Medline]
- Orsolini L, Jatchavala C, Noor IM, Ransing R, Satake Y, Shoib S, et al. Training and education in digital psychiatry: a perspective from Asia-Pacific region. Asia Pac Psychiatry. Dec 07, 2021;13(4):e12501. [FREE Full text] [CrossRef] [Medline]
- Bekbolatova M, Mayer J, Ong CW, Toma M. Transformative potential of AI in healthcare: definitions, applications, and navigating the ethical landscape and public perspectives. Healthcare (Basel). Jan 05, 2024;12(2):125. [FREE Full text] [CrossRef] [Medline]
- Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. Oct 02, 2018;169(7):467-473. [FREE Full text] [CrossRef] [Medline]
- Joanna Briggs Institute. Checklist for systematic reviews and research syntheses. JBI Global. 2017. URL: https://jbi.global/critical-appraisal-tools [accessed 2025-02-12]
- Hong QN, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ Inf. Dec 18, 2018;34(4):285-291. [FREE Full text] [CrossRef]
- Tyndall J. AACODS checklist. Flinders University, Adelaide. 2010. URL: https://dspace.flinders.edu.au/jspui/bitstream/2328/3326/4/AACODS_Checklist.pdf [accessed 2025-02-12]
- López-Ojeda W, Hurley RA. Medical metaverse, part 2: artificial intelligence algorithms and large language models in psychiatry and clinical neurosciences. J Neuropsychiatry Clin Neurosci. Oct 2023;35(4):316-320. [CrossRef] [Medline]
- Banerjee M, Chiew D, Patel KT, Johns I, Chappell D, Linton N, et al. The impact of artificial intelligence on clinical education: perceptions of postgraduate trainee doctors in London (UK) and recommendations for trainers. BMC Med Educ. Aug 14, 2021;21(1):429. [FREE Full text] [CrossRef] [Medline]
- Blease C, Kharko A, Annoni M, Gaab J, Locher C. Machine learning in clinical psychology and psychotherapy education: a mixed methods pilot survey of postgraduate students at a Swiss University. Front Public Health. 2021;9:623088. [FREE Full text] [CrossRef] [Medline]
- Spallek S, Birrell L, Kershaw S, Devine EK, Thornton L. Can we use ChatGPT for mental health and substance use education? Examining its quality and potential harms. JMIR Med Educ. Nov 30, 2023;9:e51243. [FREE Full text] [CrossRef] [Medline]
- Hudon A, Kiepura B, Pelletier M, Phan V. Using ChatGPT in psychiatry to design script concordance tests in undergraduate medical education: mixed methods study. JMIR Med Educ. Apr 04, 2024;10:e54067. [FREE Full text] [CrossRef] [Medline]
- Mangold S, Ream M. Artificial intelligence in graduate medical education applications. J Grad Med Educ. 2024;16(2):115-118. [CrossRef]
- Gratzer D, Goldbloom D. Therapy and e-therapy—preparing future psychiatrists in the era of apps and chatbots. Acad Psychiatry. Apr 02, 2020;44(2):231-234. [CrossRef] [Medline]
- Anzia JM. Lifelong learning in psychiatry and the role of certification. Psychiatr Clin North Am. Jun 2021;44(2):309-316. [CrossRef] [Medline]
- Manjunatha N, Kumar C, Math S, Thirthalli J. Designing and implementing an innovative digitally driven primary care psychiatry program in India. Indian J Psychiatry. 2018;60(2):236-244. [CrossRef]
- Tortora L. Beyond discrimination: generative AI applications and ethical challenges in forensic psychiatry. Front Psychiatry. Mar 8, 2024;15:1346059. [FREE Full text] [CrossRef] [Medline]
- Dwyer DB, Falkai P, Koutsouleris N. Machine learning approaches for clinical psychology and psychiatry. Annu Rev Clin Psychol. May 07, 2018;14(1):91-118. [CrossRef] [Medline]
- Shatte ABR, Hutchinson DM, Teague SJ. Machine learning in mental health: a scoping review of methods and applications. Psychol Med. Jul 2019;49(9):1426-1448. [CrossRef] [Medline]
- Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. Apr 04, 2019;380(14):1347-1358. [CrossRef]
- Kung TH, Cheatham M, Medenilla A, Sillos C, De Leon L, Elepaño C, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health. Feb 9, 2023;2(2):e0000198. [FREE Full text] [CrossRef] [Medline]
- Zhai C, Wibowo S, Li L. The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn Environ. Jun 18, 2024;11(1):28. [FREE Full text] [CrossRef]
- Meskó B. Prompt engineering as an important emerging skill for medical professionals: tutorial. J Med Internet Res. Oct 04, 2023;25:e50638. [FREE Full text] [CrossRef] [Medline]
- Miner AS, Milstein A, Hancock JT. Talking to machines about personal mental health problems. JAMA. Oct 03, 2017;318(13):1217-1218. [CrossRef] [Medline]
- Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB. Chatbots and conversational agents in mental health: a review of the psychiatric landscape. Can J Psychiatry. Jul 2019;64(7):456-464. [FREE Full text] [CrossRef] [Medline]
- D'Alfonso S, Lederman R, Bucci S, Berry K. The digital therapeutic alliance and human-computer interaction. JMIR Ment Health. Dec 29, 2020;7(12):e21895. [FREE Full text] [CrossRef] [Medline]
- Cureton D, Jones J, Hughes J. The postdigital university: do we still need just a little of that human touch? Postdigit Sci Educ. Dec 21, 2021;3(1):223-241. [FREE Full text] [CrossRef] [Medline]
- Tolentino R, Baradaran A, Gore G, Pluye P, Abbasgholizadeh-Rahimi S. Curriculum frameworks and educational programs in AI for medical students, residents, and practicing physicians: scoping review. JMIR Med Educ. Jul 18, 2024;10:e54793. [FREE Full text] [CrossRef] [Medline]
- Sanusi I, Ayanwale M, Chiu T. Investigating the moderating effects of social good and confidence on teachers' intention to prepare school students for artificial intelligence education. Educ Inf Technol. Nov 06, 2023;29(1):273-295. [FREE Full text] [CrossRef]
- Scherer R, Teo T. Unpacking teachers’ intentions to integrate technology: a meta-analysis. Educ Res Rev. Jun 2019;27:90-109. [FREE Full text] [CrossRef]
- Granström M, Oppi P. Assessing teachers’ readiness and perceived usefulness of AI in education: an Estonian perspective. Front Educ. Jun 19, 2025;10:1622240. [CrossRef]
- Pörn R, Braskén M, Wingren M, Andersson S. Attitudes towards and expectations on the role of artificial intelligence in the classroom among digitally skilled Finnish K-12 mathematics teachers. LUMAT: Int J Math Sci Technol Educ. 2024;12(3):53-77. [FREE Full text] [CrossRef]
- Molefi R, Ayanwale M, Kurata L, Chere-Masopha J. Do in-service teachers accept artificial intelligence-driven technology? The mediating role of school support and resources. Comput Educ Open. Jun 2024;6:100191. [FREE Full text] [CrossRef]
- Rasul T, Nair S, Kalendra D, Balaji M, Santini FDO, Ladeira WJ, et al. Enhancing academic integrity among students in GenAI Era: a holistic framework. Int J Manag Educ. Nov 2024;22(3):101041. [CrossRef]
- Bozkurt A. GenAI et al.: cocreation, authorship, ownership, academic ethics and integrity in a time of generative AI. Open Praxis. 2024:1-10. [FREE Full text] [CrossRef]
Abbreviations
AI: artificial intelligence
CBT: cognitive behavioral therapy
HEI: higher education institutions
JBI: Joanna Briggs Institute
ML: machine learning
MMAT: Mixed Methods Appraisal Tool
NLP: natural language processing
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews
SCT: script concordance test
Edited by AH Sapci; submitted 30.03.25; peer-reviewed by L Ng, J-J Beunza; comments to author 01.05.25; revised version received 24.06.25; accepted 15.07.25; published 28.07.25.
Copyright©Julien Prégent, Van-Han-Alex Chung, Inès El Adib, Marie Désilets, Alexandre Hudon. Originally published in JMIR Medical Education (https://mededu.jmir.org), 28.07.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.

