@Article{info:doi/10.2196/58801, author="Ozkan, Ecem and Tekin, Aysun and Ozkan, Mahmut Can and Cabrera, Daniel and Niven, Alexander and Dong, Yue", title="Global Health Care Professionals' Perceptions of Large Language Model Use in Practice: Cross-Sectional Survey Study", journal="JMIR Med Educ", year="2025", month="May", day="12", volume="11", pages="e58801", keywords="ChatGPT; LLM; global; health care professionals; large language model; language model; chatbot; AI; diagnostic accuracy; efficiency; treatment planning; patient outcome; patient care; survey; physicians; nurses; educators; patient communication; clinical; educational; utilization; artificial intelligence", abstract="Background: ChatGPT is a large language model-based chatbot developed by OpenAI. ChatGPT has many potential applications to health care, including enhanced diagnostic accuracy and efficiency, improved treatment planning, and better patient outcomes. However, health care professionals' perceptions of ChatGPT and similar artificial intelligence tools are not well known. Understanding these attitudes is important to inform the best approaches to exploring their use in medicine. Objective: Our aim was to evaluate health care professionals' awareness and perceptions regarding potential applications of ChatGPT in the medical field, including the potential benefits and challenges of adoption. Methods: We designed a 33-question online survey that was distributed among health care professionals via targeted emails and professional Twitter and LinkedIn accounts. The survey included a range of questions to define respondents' demographic characteristics, familiarity with ChatGPT, perceptions of this tool's usefulness and reliability, and opinions on its potential to improve patient care, research, and education efforts. Results: One hundred and fifteen health care professionals from 21 countries responded to the survey, including physicians, nurses, researchers, and educators.
Of these, 101 (87.8{\%}) had heard of ChatGPT, mainly from peers, social media, and news, and 77 (76.2{\%}) had used ChatGPT at least once. Participants found ChatGPT to be helpful for writing manuscripts (n=31, 45.6{\%}), emails (n=25, 36.8{\%}), and grants (n=12, 17.6{\%}); accessing the latest research and evidence-based guidelines (n=21, 30.9{\%}); providing suggestions on diagnosis or treatment (n=15, 22.1{\%}); and improving patient communication (n=12, 17.6{\%}). Respondents also felt that the ability of ChatGPT to access and summarize research articles (n=22, 46.8{\%}), provide quick answers to clinical questions (n=15, 31.9{\%}), and generate patient education materials (n=10, 21.3{\%}) was helpful. However, respondents expressed concerns regarding the use of ChatGPT, including the accuracy of its responses (n=14, 29.8{\%}), its limited applicability in specific practices (n=18, 38.3{\%}), and legal and ethical considerations (n=6, 12.8{\%}), mainly related to plagiarism or copyright violations. Participants stated that safety protocols such as data encryption (n=63, 62.4{\%}) and access control (n=52, 51.5{\%}) could assist in ensuring patient privacy and data security. Conclusions: Our findings show that ChatGPT use is widespread among health care professionals in daily clinical, research, and educational activities. The majority of our participants found ChatGPT to be useful; however, there are concerns about patient privacy, data security, legal and ethical issues, and the accuracy of its information. Further studies are required to understand the impact of ChatGPT and other large language models on clinical, educational, and research outcomes, and the concerns regarding its use must be addressed systematically and through appropriate methods.", issn="2369-3762", doi="10.2196/58801", url="https://mededu.jmir.org/2025/1/e58801" }