TY - JOUR
AU - Baglivo, Francesco
AU - De Angelis, Luigi
AU - Casigliani, Virginia
AU - Arzilli, Guglielmo
AU - Privitera, Gaetano Pierpaolo
AU - Rizzo, Caterina
PY - 2023
DA - 2023/11/1
TI - Exploring the Possible Use of AI Chatbots in Public Health Education: Feasibility Study
JO - JMIR Med Educ
SP - e51421
VL - 9
KW - artificial intelligence
KW - chatbots
KW - medical education
KW - vaccination
KW - public health
KW - medical students
KW - large language model
KW - generative AI
KW - ChatGPT
KW - Google Bard
KW - AI chatbot
KW - health education
KW - health care
KW - medical training
KW - educational support tool
KW - chatbot model
AB - Background: Artificial intelligence (AI) is a rapidly developing field with the potential to transform various aspects of health care and public health, including medical training. During the “Hygiene and Public Health” course for fifth-year medical students, a practical training session was conducted on vaccination using AI chatbots as an educational supportive tool. Before receiving specific training on vaccination, the students were given a web-based test extracted from the Italian National Medical Residency Test. After completing the test, a critical correction of each question was performed assisted by AI chatbots. Objective: The main aim of this study was to identify whether AI chatbots can be considered educational support tools for training in public health. The secondary objective was to assess the performance of different AI chatbots on complex multiple-choice medical questions in the Italian language. Methods: A test composed of 15 multiple-choice questions on vaccination was extracted from the Italian National Medical Residency Test using targeted keywords and administered to medical students via Google Forms and to different AI chatbot models (Bing Chat, ChatGPT, Chatsonic, Google Bard, and YouChat). The correction of the test was conducted in the classroom, focusing on the critical evaluation of the explanations provided by the chatbot. A Mann-Whitney U test was conducted to compare the performances of medical students and AI chatbots. Student feedback was collected anonymously at the end of the training experience. Results: In total, 36 medical students and 5 AI chatbot models completed the test. The students achieved an average score of 8.22 (SD 2.65) out of 15, while the AI chatbots scored an average of 12.22 (SD 2.77). The results indicated a statistically significant difference in performance between the 2 groups (U=49.5, P<.001), with a large effect size (r=0.69). When divided by question type (direct, scenario-based, and negative), significant differences were observed in direct (P<.001) and scenario-based (P<.001) questions, but not in negative questions (P=.48). The students reported a high level of satisfaction (7.9/10) with the educational experience, expressing a strong desire to repeat the experience (7.6/10). Conclusions: This study demonstrated the efficacy of AI chatbots in answering complex medical questions related to vaccination and providing valuable educational support. Their performance significantly surpassed that of medical students in direct and scenario-based questions. The responsible and critical use of AI chatbots can enhance medical education, making it an essential aspect to integrate into the educational system.
SN - 2369-3762
UR - https://mededu.jmir.org/2023/1/e51421
UR - https://doi.org/10.2196/51421
UR - http://www.ncbi.nlm.nih.gov/pubmed/37910155
DO - 10.2196/51421
ID - info:doi/10.2196/51421
ER -