TY  - JOUR
AU  - Tangadulrat, Pasin
AU  - Sono, Supinya
AU  - Tangtrakulwanich, Boonsin
PY  - 2023
DA  - 2023/12/22
TI  - Using ChatGPT for Clinical Practice and Medical Education: Cross-Sectional Survey of Medical Students’ and Physicians’ Perceptions
JO  - JMIR Med Educ
SP  - e50658
VL  - 9
KW  - ChatGPT
KW  - AI
KW  - artificial intelligence
KW  - medical education
KW  - medical students
KW  - student
KW  - students
KW  - intern
KW  - interns
KW  - resident
KW  - residents
KW  - knee osteoarthritis
KW  - survey
KW  - surveys
KW  - questionnaire
KW  - questionnaires
KW  - chatbot
KW  - chatbots
KW  - conversational agent
KW  - conversational agents
KW  - attitude
KW  - attitudes
KW  - opinion
KW  - opinions
KW  - perception
KW  - perceptions
KW  - perspective
KW  - perspectives
KW  - acceptance
AB  - Background: ChatGPT is a well-known large language model–based chatbot. It could be used in the medical field in many aspects. However, some physicians are still unfamiliar with ChatGPT and are concerned about its benefits and risks. Objective: We aim to evaluate the perception of physicians and medical students toward using ChatGPT in the medical field. Methods: A web-based questionnaire was sent to medical students, interns, residents, and attending staff with questions regarding their perception toward using ChatGPT in clinical practice and medical education. Participants were also asked to rate their perception of ChatGPT’s generated response about knee osteoarthritis. Results: Participants included 124 medical students, 46 interns, 37 residents, and 32 attending staff. After reading ChatGPT’s response, 132 of the 239 (55.2%) participants had a positive rating about using ChatGPT for clinical practice. The proportion of positive answers was significantly lower in graduated physicians (48/115, 42%) compared with medical students (84/124, 68%; P<.001). Participants listed a lack of a patient-specific treatment plan, updated evidence, and a language barrier as ChatGPT’s pitfalls. Regarding using ChatGPT for medical education, the proportion of positive responses was also significantly lower in graduated physicians (71/115, 62%) compared with medical students (103/124, 83.1%; P<.001). Participants were concerned that ChatGPT’s response was too superficial, might lack scientific evidence, and might need expert verification. Conclusions: Medical students generally had a positive perception of using ChatGPT for guiding treatment and medical education, whereas graduated doctors were more cautious in this regard. Nonetheless, both medical students and graduated doctors positively perceived using ChatGPT for creating patient educational materials.
SN  - 2369-3762
UR  - https://mededu.jmir.org/2023/1/e50658
UR  - https://doi.org/10.2196/50658
UR  - http://www.ncbi.nlm.nih.gov/pubmed/38133908
DO  - 10.2196/50658
ID  - info:doi/10.2196/50658
ER  - 