Published on 7.8.2024 in Vol 10 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/51157.
Assessing ChatGPT’s Competency in Addressing Interdisciplinary Inquiries on Chatbot Uses in Sports Rehabilitation: Simulation Study


Original Paper

1Department of Microbiology, Immunology, & Cell Biology, West Virginia University, Morgantown, WV, United States

2Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, United States

3College of Health Solutions, Arizona State University, Phoenix, AZ, United States

4Biodesign Institute, Arizona State University, Tempe, AZ, United States

5College of Health, Education, and Human Services, Wright State University, Dayton, OH, United States

6Lane Department of Computer Science & Electrical Engineering, West Virginia University, Morgantown, WV, United States

7Department of Electrical Engineering and Computer Science, Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, MO, United States

Corresponding Author:

Gangqing Hu, PhD

Department of Microbiology, Immunology, & Cell Biology

West Virginia University

64 Medical Center Drive

Morgantown, WV, 26506-9177

United States

Phone: 1 304 581 1692

Fax: 1 304 293 7823

Email: gh00001@mix.wvu.edu


Background: ChatGPT showcases exceptional conversational capabilities and extensive cross-disciplinary knowledge. In addition, it can perform multiple roles in a single chat session. This unique multirole-playing feature positions ChatGPT as a promising tool for exploring interdisciplinary subjects.

Objective: The aim of this study was to evaluate ChatGPT’s competency in addressing interdisciplinary inquiries based on a case study exploring the opportunities and challenges of chatbot uses in sports rehabilitation.

Methods: We developed a model termed PanelGPT to assess ChatGPT’s competency in addressing interdisciplinary topics through simulated panel discussions. Taking chatbot uses in sports rehabilitation as an example of an interdisciplinary topic, we prompted ChatGPT through PanelGPT to role-play a physiotherapist, psychologist, nutritionist, artificial intelligence expert, and athlete in a simulated panel discussion. During the simulation, we posed questions to the panel while ChatGPT acted as both the panelists for responses and the moderator for steering the discussion. We performed the simulation using ChatGPT-4 and evaluated the responses by referring to the literature and our human expertise.

Results: By tackling questions related to chatbot uses in sports rehabilitation with respect to patient education, physiotherapy, psychology, nutrition, and ethical considerations, responses from the ChatGPT-simulated panel discussion reasonably pointed to various benefits, such as 24/7 support, personalized advice, automated tracking, and reminders. ChatGPT also correctly emphasized the importance of patient education and identified challenges such as limited interaction modes, inaccuracies in emotion-related advice, and the need to ensure data privacy and security, transparency in data handling, and fairness in model training. It also stressed that chatbots should serve as copilots that assist, rather than replace, human health care professionals in the rehabilitation process.

Conclusions: ChatGPT exhibits strong competency in addressing interdisciplinary inquiry by simulating multiple experts from complementary backgrounds, with significant implications in assisting medical education.

JMIR Med Educ 2024;10:e51157

doi:10.2196/51157




Introduction

The sports industry is a significant economic contributor in the United States, projected to generate US $83.1 billion in revenue in 2023 [1]. Concurrently, sports/recreation-related injuries are prevalent, with an estimated rate of 34 per 1000 individuals, amounting to an annual total of 8.6 million cases [2]. Sports rehabilitation, which aims to facilitate full recovery, minimize sports downtime, and prevent future injuries, is a process of coordinated effort between the athlete and health care professionals across various disciplines [3]. However, the rehabilitation process often spans a lengthy period and demands expensive medical and psychological support, making it inaccessible for many patients. In recent years, the integration of artificial intelligence (AI) in sports medicine has shown promise in enhancing both the accessibility of services and the efficacy of treatment outcomes [4,5]. Nevertheless, the use of chatbots in assisting sports rehabilitation is still in its formative stages, with many potential benefits and pitfalls yet to be explored and understood.

ChatGPT, a sophisticated large language model (LLM)–based chatbot, is capable of human-like dialogue [6]. This chatbot shows promise as a virtual assistant in medical education by providing real-time personalized feedback and enhancing student engagement [7]. However, controlled assessments in medical education have identified considerable limitations, such as the need for precise prompts (also known as prompt engineering), instances of hallucination, and a lack of critical thinking in its responses [8-10]. Another challenge is that many topics in health care are interdisciplinary, involving multiple contributors such as physicians, pharmacists, and social workers to ensure better treatment outcomes and patient satisfaction. Unfortunately, current evaluations of ChatGPT are often confined to tasks from a specific discipline, leaving its competency in addressing interdisciplinary topics largely unexplored [11,12], especially in medical education fields such as sports rehabilitation [5,13].

Here, we highlight an attractive feature of ChatGPT in addressing interdisciplinary questions via multirole-playing, which allows the chatbot to assume the roles of several discipline-specific experts simultaneously in one chat session. This unique feature inspired us to propose a model named PanelGPT for exploring interdisciplinary topics through a simulated panel discussion, where ChatGPT assumes the roles of a moderator and various experts on the panel. The aim of the study was to evaluate ChatGPT’s competency through PanelGPT in addressing the opportunities and challenges of chatbot uses in sports rehabilitation, an interdisciplinary field that covers topics on patient education, physical therapy, psychological support, nutrition, and ethics.


Methods

The PanelGPT Model

We developed a model named PanelGPT to evaluate ChatGPT’s competency in addressing interdisciplinary inquiry (Figure 1A). In this model, ChatGPT assumes the roles of both the moderator and panel experts, while a human operator, representing the audience, poses questions and sends reminders to the moderator or the panelists. Questions from the human operator are directly copied and pasted into the chat session, with ChatGPT determining which panel member(s) should respond. If the discussion stalls, the human operator prompts the moderator or panelists to continue by sending reminders. After each round of discussion, the moderator summarizes the comments before moving to the next question from the audience. Upon conclusion of the discussion, we summarized and evaluated the chatbot's responses based on the literature and our expertise.

Figure 1. Overview of the PanelGPT model for a ChatGPT-simulated panel discussion (A) and a flowchart that delineates the simulation process (B).
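
For readers who wish to reproduce a comparable workflow programmatically, the sketch below approximates the PanelGPT loop described above using the OpenAI chat completions API. It is an illustrative approximation only: the study ran the simulations through the ChatGPT-4 web interface by copying and pasting prompts, and the model name, prompt wording, helper function, and example questions here are assumptions rather than the study's actual prompts (which are provided in Multimedia Appendix 2).

```python
# Illustrative sketch only: the study ran PanelGPT through the ChatGPT-4 web
# interface by copying and pasting prompts, not through the API. This code
# approximates the same loop programmatically; the model name, prompt wording,
# helper function, and example questions are assumptions, and the study's
# actual prompts are in Multimedia Appendix 2.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Simulate a panel discussion titled 'Chatbots in sports rehabilitation'. "
    "You play every role: a moderator, a physiotherapist, a psychologist, a "
    "nutritionist, an AI expert in clinical applications, and an athlete who "
    "recovered from a severe injury. Decide which panelist(s) answer each "
    "audience question, and have the moderator summarize after each round."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def send(operator_message: str) -> str:
    """Send one operator turn (a question or reminder) and return the panel's reply."""
    history.append({"role": "user", "content": operator_message})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

# The human operator's turns: opening, pasted audience questions, and closing.
print(send("Moderator, please introduce the panelists and pose an opening question."))
for question in [  # example wording; the actual questions are in Multimedia Appendix 1
    "How should athletes be educated to use a rehabilitation chatbot?",
    "Can a chatbot help analyze an athlete's movements and weight distribution?",
]:
    print(send("Audience question: " + question))
    print(send("Moderator, please summarize this round and call for the next question."))
print(send("Moderator, please pose a closing question and summarize the panel's responses."))
```

Keeping the entire exchange in a single message history is what allows one model to sustain the moderator and every panelist role within the same chat session.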

Application to Chatbots in Sports Rehabilitation

We applied PanelGPT to explore the pros and cons of chatbot uses in sports rehabilitation. The simulated panelists included 4 experts representing essential disciplines related to the topic: a physiotherapist, psychologist, nutritionist, and AI expert specializing in clinical applications. In addition, a virtual athlete who had successfully recovered from a severe injury participated in the panel. We formulated 4 main questions based on personal experience and/or a literature review. After reviewing the responses from pilot simulations, we added 2 more questions (Multimedia Appendix 1 [14-16]). During one of the pilot simulations, ChatGPT autonomously introduced opening questions, which we subsequently included in the final simulations. This finding also inspired us to instruct the chatbot to ask closing questions at the end of each simulation.

To clarify, our focus is not on using ChatGPT to provide sports rehabilitation advice. Instead, we focused on using ChatGPT to drive a panel discussion titled “Chatbots in sports rehabilitation” in a “self-consistency” manner [17]. The prompts used to steer the final simulations are detailed in Multimedia Appendix 2, and a flowchart outlining the simulation process is shown in Figure 1B. At the beginning of the prompts, we instructed ChatGPT to undertake multiple roles and specified other settings for the simulation (Multimedia Appendix 2). Next, the moderator was prompted to introduce the panelists and kick off the discussion with opening questions. After the panelists responded to these initial questions, the moderator was tasked with summarizing the responses and opening the floor to questions from the audience. In response, the human operator copied each audience question directly into the chat session, allowing ChatGPT to autonomously select which expert(s) should respond. After each round of questions and answers, the moderator was prompted to summarize the responses and call for the next question. This process was iterated until all of the audience’s questions had been addressed. At the end of the panel discussion, the moderator was asked to propose a closing question and provide a summary of the responses. Additional prompts were introduced as needed to ensure smooth progression of the panel discussion (Multimedia Appendix 2). We repeated the simulation 3 times using ChatGPT-4 (May 24, 2023, version) through its online web interface [18].

As shown in Multimedia Appendix 1, we initiated the simulation with an opening question and concluded with a closing question. During the simulation, we prompted ChatGPT to simulate a panel discussion on topics from chatbot uses in sports rehabilitation in the order of “patient education,” “physical therapy,” “psychological support,” “nutrition,” “tracking & other alternatives,” and “ethics.” After 3 rounds of simulations, we manually evaluated the panel’s response to questions from each topic by referring to the literature and our human expertise.

Ethical Considerations

This work was based on analyzing ChatGPT’s responses to designed prompts. As the work is classified as not human subjects research, review by the Institutional Review Board of West Virginia University was not required [19].


Results

Overview

The complete chat histories, including prompts and ChatGPT’s responses from the simulated panel discussion, are accessible in Multimedia Appendices 3-5 (audio versions are available upon request). As expected, 2 or more experts responded to each question (Table 1), and the experts generally offered insights from their respective fields of expertise. We evaluated the responses against relevant references and our own expertise. The most relevant findings are compiled and summarized below for each question.

Table 1. Records of direct responses to questions during the simulation.a

Question | Physiotherapist | Psychologist | Nutritionist | Athlete | AIb expert
Opening question | 1, 2, 3 | 1, 2, 3 | 1, 2, 3 | 1, 2, 3 | 1, 2, 3
Patient education | 1, 2, 3 | 1, 2, - | -, -, - | 1, -, 3 | 1, 2, 3
Physical therapy | 1, 2, 3 | -, -, - | -, -, - | -, -, - | 1, 2, 3
Psychological support | -, -, - | 1, 2, 3 | -, -, - | 1, -, - | 1, 2, 3
Nutrition | -, -, - | -, -, - | 1, 2, 3 | -, -, - | 1, 2, 3
Tracking and other alternatives | 1, 2, 3 | 1, 2, - | -, -, - | 1, 2, - | 1, 2, 3
Ethics | 1, 2, 3 | 1, 2, - | 1, 2, - | 1, -, - | 1, 2, 3
Closing question | 1, 2, 3 | 1, 2, 3 | 1, 2, 3 | 1, 2, 3 | 1, 2, 3

aNumbers 1, 2, and 3 indicate when a response directly targeting the question was made for rounds 1, 2, and 3 of the simulation, respectively, whereas “-” denotes the absence of such a response.

bAI: artificial intelligence.

Opening Question

The simulated panel discussion began with introductions and requests for the panelists’ perspectives on the role of chatbots in sports rehabilitation, to which all panel members responded (Table 1). The ensuing dialogue identified chatbots as round-the-clock support systems, adept at monitoring, offering reminders, consulting, and nurturing a positive mindset in athletes during their recovery. Similar observations have been reported for orthopedic patients with AI assistance in the literature [14,20,21]. Looking to the future, and consistent with expectations, chatbots might become increasingly adept at analyzing biomechanical data, emotional indicators, and nutritional needs, thus providing personalized feedback that helps athletes better understand their bodies and healing journeys.

Patient Education

The conversation pinpointed several critical factors in educating athletes on using chatbots for rehabilitation. Both the athlete and the psychologist touched on the importance of understanding the benefits of using a chatbot, such as a readily available source of advice and mental support [22]. The AI expert emphasized education on transparency, including how data are collected, processed, stored, and protected. Effective communication with a chatbot is a nontrivial task [23]; the physiotherapist focused on how to guide users to interact with the chatbot effectively and how to interpret its responses. The discussion also underscored that the chatbot system is designed to enhance recovery, not to replace the human touch. Through education, athletes should be able to identify situations that call for direct communication with health care professionals.

Physical Therapy

The primary focus of these questions was on the chatbot’s potential to facilitate physical therapy by analyzing movements and weight distributions [15]. Relevant responses came from the physiotherapist and the AI expert, who acknowledged that current chatbots primarily interact with users through text and voice, which restricts their direct applicability to this task. However, the AI expert envisioned integrating chatbots, wearables, cameras, and smart devices to analyze an athlete’s movement patterns and provide real-time, personalized feedback. A good example, also noted in the literature, is computer vision–based analysis, which has been applied to monitor and improve sports performance [24]. The AI expert further highlighted that the accuracy of this application depends on the size and quality of the training data, as well as advances in AI technologies such as machine learning and computer vision.

Psychological Support

This round of discussion explored the role of chatbots in analyzing emotional cues via sentiment analysis, a technique previously shown to enhance patient satisfaction in several medical chatbot applications [16,25,26] and other applications [27]. The panel’s responses aligned with the existing literature: by delivering responses tailored to emotions, chatbots can offer athletes emotional support and reduce their feelings of isolation. Nevertheless, the panel did not explore the impact of chatbots on psychological outcome measures such as improvements in communication skills, cognitive level, motivation, and the ability to cope with the injury. The psychologist and the AI expert cautioned that sentiment analysis may not always capture human emotions accurately. Thus, the psychological support provided via chatbots should be regarded as a complement to human interventions, which, in our opinion, can extend from health care professionals to coaches, teammates, friends, and family members.
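
To make the sentiment-analysis idea concrete, the following minimal sketch (hypothetical, not part of the study) uses NLTK's VADER analyzer to route an athlete's message to an empathetic or a neutral reply template; the threshold and reply wording are arbitrary illustrative choices.

```python
# Minimal illustration of sentiment-aware routing in a rehabilitation chatbot.
# VADER and the -0.3 threshold are illustrative choices, not part of the study;
# as the panel cautioned, automated sentiment scores can misread human emotions.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def route_reply(message: str) -> str:
    """Pick a reply template based on the message's compound sentiment score."""
    score = analyzer.polarity_scores(message)["compound"]  # -1 (negative) to +1 (positive)
    if score <= -0.3:
        return ("That sounds really frustrating. Setbacks are a normal part of recovery; "
                "would you like me to flag this for your care team?")
    return "Thanks for the update. Here is today's rehabilitation checklist."

print(route_reply("My knee still hurts and I feel like I'm getting nowhere."))
print(route_reply("Finished all my exercises today."))
```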

Nutrition

Chatbots have been used for nutrition advice [28-30]. The nutritionist outlined multiple roles for chatbots in nutritional management, such as reminding athletes to stay hydrated, tracking dietary intake, and suggesting meal plans. A personalized dietary plan could use an advanced AI algorithm to analyze factors such as demographics, injury type, recovery stage, allergy history, and signals from wearable devices or health-tracking apps. The AI expert emphasized that building a personalized nutrition model demands a precise understanding of nutritional science, human physiology, and high-quality training data. However, given that chatbots might make mistakes such as recommending diets containing allergens [31] or harmful diet tips that promote eating disorders [32], they should be regarded as supplementary tools to human nutritionists rather than as their replacements.
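
As a concrete illustration of the allergen concern noted above, the hypothetical sketch below filters candidate meal suggestions against an athlete's recorded allergies before they are surfaced; the profile fields, meals, and ingredient lists are invented for illustration and are not drawn from the study.

```python
# Hypothetical guardrail: never surface a chatbot meal suggestion that contains
# a recorded allergen. Profiles, meals, and ingredient lists are invented examples.
from dataclasses import dataclass

@dataclass
class AthleteProfile:
    name: str
    recovery_stage: str   # e.g., "early", "mid", "return-to-play"
    allergies: set[str]   # ingredients the athlete must avoid

MEAL_SUGGESTIONS = {
    "peanut protein smoothie": {"peanut", "milk", "banana"},
    "grilled salmon with quinoa": {"salmon", "quinoa", "olive oil"},
    "greek yogurt with berries": {"milk", "strawberry", "honey"},
}

def safe_meals(profile: AthleteProfile) -> list[str]:
    """Return only the suggestions whose ingredients avoid the athlete's allergens."""
    return [meal for meal, ingredients in MEAL_SUGGESTIONS.items()
            if not ingredients & profile.allergies]

athlete = AthleteProfile(name="demo", recovery_stage="mid", allergies={"peanut", "milk"})
print(safe_meals(athlete))  # ['grilled salmon with quinoa']
```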

Tracking and Other Alternatives

Responses from the physiotherapist and the AI expert to this topic largely echoed those provided during the “physical therapy” round. The athlete noted that the automated tracking, recording, and reminder functions help reduce stress, echoing the psychologist’s comments. In line with remarks made by other researchers [33], the simulation highlighted several advantages of chatbots over traditional methods in sports medicine, including a reduced need for manual reporting, convenient cloud-based access to records, real-time data collection, instantaneous analysis, and immediate advice. Despite these benefits, the simulation lacked a discussion of how chatbots could enhance treatment outcomes over alternative tools, for example, by increasing patient satisfaction or reducing recovery duration. In addition, although the questions were designed to elicit engagement from all panelists, the nutritionist unexpectedly did not respond (Table 1).

Ethics

Distinct from other audience-initiated topics, questions regarding ethics prompted responses from all panelists (Table 1). Some comments reiterated points from previous discussions, particularly regarding patient education. The conversation emphasized the need for stringent adherence to medical privacy regulations such as the Health Insurance Portability and Accountability Act in the United States or the General Data Protection Regulation in Europe [34]. The discussion highlighted the necessity of robust protocols for data encryption and storage to ensure security, as well as the need for transparency in data collection, processing, and accessibility. However, the panel did not delve into the merits and drawbacks, with respect to privacy and security, of open-source, locally deployed chatbots (especially those furnished with domain-specific knowledge) versus commercial, online chatbots [35].

Regarding bias and fairness, it was stressed that chatbot training should use diverse and representative datasets. As users, athletes should retain complete discretion on whether to use chatbots, alternative methods, or a combination of both. The psychologist highlighted the need to implement chatbots in a manner that avoids triggering anxiety or other negative emotions. All the comments align with the 5 ethical principles proposed by AI4People: beneficence, nonmaleficence, justice, autonomy, and explicability [36].

Closing Question

The moderator was prompted to steer the panel discussion toward its end with a final question. As anticipated, the questions were all forward-thinking (Multimedia Appendix 1). Panelists offered predictions drawing from their respective fields of expertise. Foreseeing rapid advancements in AI and complementary technologies, the panel envisaged a future of precision sports rehabilitation in the chatbot era. In this vision, the rehabilitation program would be tailored to individual needs, bolstered by health care providers, and empowered by chatbots. According to responses from the simulated athlete, this form of personalized support would make rehabilitation feel like a natural part of the recovery process, and the athlete would take charge of the rehabilitation journey.


Discussion

Principal Findings

We evaluated ChatGPT’s competency in addressing interdisciplinary inquiry using sports rehabilitation as an example. Using a novel model named PanelGPT, we prompted ChatGPT to explore the pros and cons of chatbot use in sports rehabilitation. ChatGPT answered questions via a simulated panel discussion where it role-played multiple experts, including a physiotherapist, psychologist, nutritionist, AI expert, and athlete. Our analysis of its responses highlighted benefits such as 24/7 support, personalized advice, and automated tracking, as well as challenges such as limited interaction modes, inaccuracies in emotion-related advice, and data privacy concerns. We repeated the experiments with the most recent version, GPT-4o (May 2024), and obtained generally similar results. Thus, our findings highlight the potential of using ChatGPT through PanelGPT to enhance appreciation of any interdisciplinary topic.

The interdisciplinary approach through PanelGPT brings several benefits with significant implications for medical education. First, the responses come from a panel of experts with complementary expertise, providing different perspectives that are automatically categorized and together offering a comprehensive view of the topic in question. For instance, including an athlete on the panel yielded a unique user perspective that could be overlooked in simple prompts: when the questions on “psychological support” were posed directly to ChatGPT, the responses were rooted in a psychologist’s knowledge base (Multimedia Appendix 6). Thus, PanelGPT can offer students a holistic view of a complex interdisciplinary topic and integrate insights that might be missed in traditional educational settings.

Second, as LLMs become increasingly adopted in education, it is important to educate students on alternative, innovative ways of using chatbots. Compared with conventional communication with a chatbot, PanelGPT is novel in that it focuses the chatbot’s attention on the question and provides critical context for responding. For instance, when the “physical therapy” questions were posed directly to ChatGPT, the responses quickly drifted toward other topics such as education and mental health (Multimedia Appendix 7). With PanelGPT, the response involved a discussion between the physiotherapist and the AI expert, and the topic remained in the context of sports rehabilitation.

Third, the multirole-playing feature of ChatGPT through PanelGPT makes learning more interactive and engaging by encouraging active participation from learners. It also helps learners develop critical thinking skills such as synthesizing information from multiple simulated experts from different backgrounds and evaluating their credibility. This is particularly important when addressing the pros and cons of implementing new technologies in health care settings on topics that are interdisciplinary by nature.

Finally, having a panel of experts enables students to form a balanced view on a specific topic. For example, in addressing the “physical therapy” questions, the physiotherapist’s response highlighted the current limitations of chatbots in text- or voice-based communication, while the AI expert expanded the discussion to the integration of real-time video analysis (Multimedia Appendices 3-5). This balanced view is crucial in medical education, as it allows students to understand both the potential and the limitations of any emerging technology (such as chatbots) that is poised for health care applications.

Limitations

The breadth and depth of a panelist’s responses depend on the training data available for that field. In several discussions, such as the “patient education” and “tracking and other alternatives” topics, where we expected feedback from all panelists, there was a noticeable lack of direct responses from the nutritionist. One possible explanation is that nutrition-related content is underrepresented in the rehabilitation literature within ChatGPT’s training data. Indeed, a combined search for “rehabilitation” (or “rehab”) and “nutritionist” (or “nutrition”) on PubMed yielded 6-8 times fewer hits than searches involving the terms “physiotherapist” (or “physiotherapy”) or “psychologist” (or “psychology”) (as of July 7, 2023). To address this limitation, the human operator could send reminders to the nutritionist to elicit a response. In contrast to the nutritionist, the AI expert responded to questions on all topics, which is expected given the inherent need for AI expertise in creating such chatbot systems.

The data used to train ChatGPT at the time of our experiments extended only up to September 2021. As such, ChatGPT could not provide comments referencing more recent developments in chatbots, such as ChatGPT itself or Bard (now known as Gemini). The feature to activate Bing within ChatGPT does allow real-time browsing of information from the internet. In practice, however, this disrupted the flow of the panel discussion, resulting in a shift back to the regular ChatGPT conversation format and a subsequent loss of the expert identities after several exchanges (as shown in Multimedia Appendices 8-10).

We observed instances where the response to a question from the same expert was vague in one simulation but detailed in another. This observation suggests that conducting multiple simulations could enhance the efficacy of PanelGPT in providing a well-rounded understanding of the knowledge landscape surrounding an interdisciplinary topic. This practice enables self-consistency checking, which has been shown to improve the reasoning performance of language models [17]. Additionally, summarizing diverse responses from multiple simulations facilitates the identification of contrasting viewpoints and emergent trends in the panel discussion.
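
One simple way to operationalize such cross-run comparison is sketched below: transcripts from repeated simulations are loaded and each expert's answers to the same topic are grouped side by side so that vague, detailed, or divergent responses stand out. The file names and record schema are hypothetical; the study's transcripts are provided as PDFs in Multimedia Appendices 3-5.

```python
# Hypothetical self-consistency check across repeated PanelGPT runs.
# Assumes each run was saved as JSON records such as
# {"topic": "nutrition", "expert": "Nutritionist", "response": "..."};
# the schema and file names are illustrative, not the study's actual format.
import json
from collections import defaultdict
from pathlib import Path

def load_runs(paths: list[str]) -> dict[tuple[str, str], list[str]]:
    """Group responses by (topic, expert) across all simulation runs."""
    grouped: dict[tuple[str, str], list[str]] = defaultdict(list)
    for run_id, path in enumerate(paths, start=1):
        for record in json.loads(Path(path).read_text()):
            grouped[(record["topic"], record["expert"])].append(
                f"run {run_id}: {record['response']}"
            )
    return grouped

runs = ["panel_round_1.json", "panel_round_2.json", "panel_round_3.json"]
for (topic, expert), answers in load_runs(runs).items():
    print(f"== {topic} / {expert} ==")
    for answer in answers:
        print("  " + answer)
```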

Hallucination, the generation of unsupported or false information, is a prevalent issue with LLM-based chatbots. The multiperspective approach of PanelGPT allows the chatbot to draw on the strengths and mitigate the weaknesses of each panelist when responding to specific questions. The current model is constrained by the same chatbot simulating all the panelists in a given chat session. With advances in chatbot development, this model could be extended by integrating responses from other LLM chatbots, especially those possessing domain-specific knowledge. In fact, cross-referencing responses from different experts on the panel powered by distinct models helps mitigate hallucination [37]. Nonetheless, it remains crucial to cross-verify the conclusions drawn from the simulation with literature findings or opinions from human experts to ensure the accuracy of the information.

Throughout the simulation, we noted instances where comments from one expert were acknowledged by another. Intriguingly, contradictory comments between experts were not observed. The richness and depth of the discussion can be further enhanced by using additional prompting strategies. For instance, after each response round, panelists could be prompted to critically evaluate each other’s comments to foster consensus or highlight disagreements. Panelists may also be prompted to pose questions to one another, such as seeking clarifications or requesting further details on a given response. Moreover, panelists could prompt the audience to clarify their questions if necessary. These additional prompting tactics make the panel discussion more engaging and mirror a real-life scenario, increasing the likelihood of obtaining a thorough appreciation of the topic.

Conclusions

We presented PanelGPT, an innovative method that capitalizes on the multirole-playing feature of ChatGPT through simulated panel discussions, and applied it to evaluate ChatGPT’s competency in addressing interdisciplinary inquiry. In our case study, ChatGPT adequately addressed the opportunities and challenges of chatbot uses in sports rehabilitation. We envision PanelGPT as a generalizable, supplementary tool in the classroom, aiding students in understanding complex interdisciplinary topics in medical education, such as nursing care, sports rehabilitation, stroke rehabilitation, and the management of recurrent pneumonia in long-term care facilities.

Acknowledgments

This study is supported by the National Institutes of Health (NIH)-National Institute of General Medical Sciences (NIGMS) (grants P20 GM103434 and U54 GM-104942 to GH), National Science Foundation (grant 2125872 to GH and DAA), and NIH-National Library of Medicine (NLM) (grant R01LM013438 to LL and grant 5R01LM013392 to DX). JCM is supported by the West Virginia University Cancer Institute summer undergraduate research program. We thank Zien Cheng and Evelyn Shue from GH’s lab for proofreading. ChatGPT and Grammarly were used to help polish the writing of the manuscript. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Authors' Contributions

GH performed the formal analysis and wrote the original draft of the manuscript. All authors contributed to formal analysis and review and editing of the manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Questions designed for the simulated panel discussion on “chatbots in sports rehabilitation.”

DOCX File , 15 KB

Multimedia Appendix 2

Prompts used to steer the simulated panel discussion.

DOCX File , 15 KB

Multimedia Appendix 3

Prompts and scripts for the 1st round of simulation.

PDF File (Adobe PDF File), 2050 KB

Multimedia Appendix 4

Prompts and scripts for the 2nd round of simulation.

PDF File (Adobe PDF File), 2194 KB

Multimedia Appendix 5

Prompts and scripts for the 3rd round of simulation.

PDF File (Adobe PDF File), 2145 KB

Multimedia Appendix 6

Scripts for a direct prompt on "psychological support."

PDF File (Adobe PDF File), 796 KB

Multimedia Appendix 7

Scripts for a direct prompt on "physical therapy."

PDF File (Adobe PDF File), 760 KB

Multimedia Appendix 8

Prompts and scripts for the 1st round of simulation with Bing activated.

PDF File (Adobe PDF File), 780 KB

Multimedia Appendix 9

Prompts and scripts for the 2nd round of simulation with Bing activated.

PDF File (Adobe PDF File), 678 KB

Multimedia Appendix 10

Prompts and scripts for the 3rd round of simulation with Bing activated.

PDF File (Adobe PDF File), 1742 KB

References

  1. 2019 PwC Outlook: At the gate and beyond. PricewaterhouseCoopers. URL: https://www.pwc.com/us/en/industries/tmt/assets/pwc-sports-outlook-2019.pdf [accessed 2023-06-22]
  2. Sheu Y, Chen LH, Hedegaard H. Sports- and recreation-related injury episodes in the United States, 2011-2014. Natl Health Stat Report. Nov 18, 2016;(99):1-12. [FREE Full text] [Medline]
  3. Dhillon H, Dhillon S, Dhillon MS. Current concepts in sports injury rehabilitation. Indian J Orthop. 2017;51(5):529-536. [FREE Full text] [CrossRef] [Medline]
  4. Ramkumar PN, Luu BC, Haeberle HS, Karnuta JM, Nwachukwu BU, Williams RJ. Sports medicine and artificial intelligence: a primer. Am J Sports Med. Mar 26, 2022;50(4):1166-1174. [CrossRef] [Medline]
  5. Fayed AM, Mansur NSB, de Carvalho KA, Behrens A, D'Hooghe P, de Cesar Netto C. Artificial intelligence and ChatGPT in orthopaedics and sports medicine. J Exp Orthop. Jul 26, 2023;10(1):74. [FREE Full text] [CrossRef] [Medline]
  6. Owens B. How Nature readers are using ChatGPT. Nature. Mar 20, 2023;615(7950):20. [CrossRef] [Medline]
  7. Lee H. The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ. Mar 14, 2024;17(5):926-931. [CrossRef] [Medline]
  8. Kavadella A, Dias da Silva MA, Kaklamanos EG, Stamatopoulos V, Giannakopoulos K. Evaluation of ChatGPT's real-life implementation in undergraduate dental education: mixed methods study. JMIR Med Educ. Jan 31, 2024;10:e51344. [FREE Full text] [CrossRef] [Medline]
  9. Magalhães Araujo S, Cruz-Correia R. Incorporating ChatGPT in medical informatics education: mixed methods study on student perceptions and experiential integration proposals. JMIR Med Educ. Mar 20, 2024;10:e51151. [FREE Full text] [CrossRef] [Medline]
  10. Tangadulrat P, Sono S, Tangtrakulwanich B. Using ChatGPT for clinical practice and medical education: cross-sectional survey of medical students' and physicians' perceptions. JMIR Med Educ. Dec 22, 2023;9:e50658. [FREE Full text] [CrossRef] [Medline]
  11. King RC, Samaan JS, Yeo YH, Peng Y, Kunkel DC, Habib AA, et al. A multidisciplinary assessment of ChatGPT's knowledge of amyloidosis: observational study. JMIR Cardio. Apr 19, 2024;8:e53421. [FREE Full text] [CrossRef] [Medline]
  12. Miao H, Ahn H. Impact of ChatGPT on interdisciplinary nursing education and research. Asian Pac Isl Nurs J. Apr 24, 2023;7:e48136. [FREE Full text] [CrossRef] [Medline]
  13. Zhu W, Geng W, Huang L, Qin X, Chen Z, Yan H. Who could and should give exercise prescription: physicians, exercise and health scientists, fitness trainers, or ChatGPT? J Sport Health Sci. May 2024;13(3):368-372. [FREE Full text] [CrossRef] [Medline]
  14. Dwyer T, Hoit G, Burns D, Higgins J, Chang J, Whelan D, et al. Use of an artificial intelligence conversational agent (chatbot) for hip arthroscopy patients following surgery. Arthrosc Sports Med Rehabil. Mar 16, 2023;5(2):e495-e505. [FREE Full text] [CrossRef] [Medline]
  15. Cheng K, Guo Q, He Y, Lu Y, Gu S, Wu H. Exploring the potential of GPT-4 in biomedical engineering: the dawn of a new era. Ann Biomed Eng. Aug 2023;51(8):1645-1653. [CrossRef] [Medline]
  16. Oh J, Jang S, Kim H, Kim J. Efficacy of mobile app-based interactive cognitive behavioral therapy using a chatbot for panic disorder. Int J Med Inform. Aug 2020;140:104171. [CrossRef] [Medline]
  17. Wang X, Wei J, Schuurmans D, Le Q, Chi E, Narang S, et al. Self-consistency improves chain of thought reasoning in language models. arXiv Preprint. Mar 7, 2023:1-24. [CrossRef]
  18. ChatGPT. URL: https://chat.openai.com [accessed 2023-07-08]
  19. Not Human Subjects Research and Not Research. West Virginia University Research Compliance Administration. URL: https://human.research.wvu.edu/get-started/determine-protocol-type/nhsr [accessed 2024-07-31]
  20. Anthony CA, Rojas EO, Keffala V, Glass NA, Shah AS, Miller BJ, et al. Acceptance and commitment therapy delivered via a mobile phone messaging robot to decrease postoperative opioid use in patients with orthopedic trauma: randomized controlled trial. J Med Internet Res. Jul 29, 2020;22(7):e17750. [FREE Full text] [CrossRef] [Medline]
  21. Bian Y, Xiang Y, Tong B, Feng B, Weng X. Artificial intelligence-assisted system in postoperative follow-up of orthopedic patients: exploratory quantitative and qualitative study. J Med Internet Res. May 26, 2020;22(5):e16896. [FREE Full text] [CrossRef] [Medline]
  22. Haque MDR, Rubya S. An overview of chatbot-based mobile mental health apps: insights from app description and user reviews. JMIR Mhealth Uhealth. May 22, 2023;11:e44838. [FREE Full text] [CrossRef] [Medline]
  23. Shue E, Liu L, Li B, Feng Z, Li X, Hu G. Empowering beginners in bioinformatics with ChatGPT. Quant Biol. Jun 2023;11(2):105-108. [FREE Full text] [CrossRef] [Medline]
  24. Host K, Ivašić-Kos M. An overview of human action recognition in sports based on computer vision. Heliyon. Jun 5, 2022;8(6):e09633. [FREE Full text] [CrossRef] [Medline]
  25. Chaix B, Bibault J, Pienkowski A, Delamon G, Guillemassé A, Nectoux P, et al. When chatbots meet patients: one-year prospective study of conversations between patients with breast cancer and a chatbot. JMIR Cancer. May 02, 2019;5(1):e12856. [FREE Full text] [CrossRef] [Medline]
  26. de Gennaro M, Krumhuber EG, Lucas G. Effectiveness of an empathic chatbot in combating adverse effects of social exclusion on mood. Front Psychol. Jan 23, 2019;10:3061. [FREE Full text] [CrossRef] [Medline]
  27. Abbasi A, Li J, Adjeroh D, Abate M, Zheng W. Don’t mention it? Analyzing user-generated content signals for early adverse event warnings. Inf Syst Res. Sep 2019;30(3):1007-1028. [CrossRef]
  28. Casas J, Mugellini E, Abou Khaled O. Food diary coaching chatbot. 2018. Presented at: 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers - UbiComp '18; October 8-12, 2018:1676-1680; Singapore. [CrossRef]
  29. Calvaresi D, Eggenschwiler S, Calbimonte JP, Manzo G, Schumacher M. A personalized agent-based chatbot for nutritional coaching. 2021. Presented at: WI-IAT '21: IEEE/WIC/ACM International Conference on Web Intelligence; December 14-17, 2021:682-687; Melbourne, Australia. [CrossRef]
  30. Han R, Todd A, Wardak S, Partridge SR, Raeside R. Feasibility and acceptability of chatbots for nutrition and physical activity health promotion among adolescents: systematic scoping review with adolescent consultation. JMIR Hum Factors. May 05, 2023;10:e43227. [FREE Full text] [CrossRef] [Medline]
  31. Niszczota P, Rybicka I. The credibility of dietary advice formulated by ChatGPT: robo-diets for people with food allergies. Nutrition. Aug 2023;112:112076. [FREE Full text] [CrossRef] [Medline]
  32. Tolentino D. National Eating Disorders Association pulls chatbot after users say it gave harmful dieting tips. NBC News. Jun 01, 2023. URL: https://www.nbcnews.com/tech/neda-pulls-chatbot-eating-advice-rcna87231 [accessed 2023-07-04]
  33. Cheng K, Guo Q, He Y, Lu Y, Xie R, Li C, et al. Artificial intelligence in sports medicine: could GPT-4 make human doctors obsolete? Ann Biomed Eng. Aug 2023;51(8):1658-1662. [CrossRef] [Medline]
  34. Priyanshu A, Vijay S, Kumar A, Naidu R, Mireshghallah F. Are chatbots ready for privacy-sensitive applications? An investigation into input regurgitation and prompt-induced sanitization. arXiv Preprint. May 24, 2024. [CrossRef]
  35. Castelvecchi D. Open-source AI chatbots are booming - what does this mean for researchers? Nature. Jun 2023;618(7967):891-892. [CrossRef] [Medline]
  36. Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, et al. AI4People-An ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach. Nov 26, 2018;28(4):689-707. [CrossRef] [Medline]
  37. Jiang D, Ren X, Lin BY. LLM-Blender: Ensembling large language models with pairwise ranking and generative fusion. 2023. Presented at: 61st Annual Meeting of the Association for Computational Linguistics; Jul 9-14, 2023; Toronto, Canada. [CrossRef]


Abbreviations

AI: artificial intelligence
LLM: large language model


Edited by T de Azevedo Cardoso; submitted 23.07.23; peer-reviewed by K Chen, MN Shalaby; comments to author 15.08.23; revised version received 21.08.23; accepted 23.07.24; published 07.08.24.

Copyright

©Joseph C McBee, Daniel Y Han, Li Liu, Leah Ma, Donald A Adjeroh, Dong Xu, Gangqing Hu. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.