Published in Vol 11 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/71767.
Assessing Pharmacists’ Use and Perception of AI Chatbots in Pharmacy Practice: Cross-Sectional Survey Study


Original Paper

1Regulatory Pharmaceutical Fellow in Drug Information, Purdue University College of Pharmacy, West Lafayette, IN, United States

2Professor of Pharmacy Practice, Purdue University College of Pharmacy, West Lafayette, IN, United States

3Clinical Professor of Pharmacy Practice, Wayne State University Eugene Applebaum College of Pharmacy and Health Sciences, Detroit, MI, United States

4Clinical Pharmacy Specialist, Drug Information, Detroit Receiving Hospital, Detroit, MI, United States

5Clinical Professor Emeritus, University of Michigan College of Pharmacy, Ann Arbor, MI, United States

6Clinical Professor of Pharmacy Practice, Director of Drug Information Group, University of Illinois Chicago College of Pharmacy, Chicago, IL, United States

7Professor of Pharmacy Practice, Butler University College of Pharmacy and Health Sciences, Indianapolis, IN, United States

Corresponding Author:

Anly Li, PharmD

Regulatory Pharmaceutical Fellow in Drug Information

Purdue University College of Pharmacy

575 Stadium Mall Dr

West Lafayette, IN, 47907

United States

Phone: 1 7347943680

Email: anlyl@umich.edu


Background: The use of artificial intelligence (AI)–based large language model chatbots such as ChatGPT has become increasingly popular in many disciplines. However, concerns exist regarding ethics, legal considerations, accuracy, and reproducibility with their use in health care practice, education, and research.

Objective: This study aimed to assess current perceptions and use of AI chatbots in pharmacy practice from the perspective of pharmacist preceptors and determine factors that may influence the use of AI chatbots in practice.

Methods: A cross-sectional survey of pharmacy practice preceptors from Indiana, Illinois, and Michigan was conducted using the validated Technology Acceptance Model Edited to Assess ChatGPT Adoption (TAME-ChatGPT) survey tool to collect information regarding current use of AI chatbots and factors associated with use, including ease of use, perceived risk, technology or social influences, anxiety, and perceived usefulness.

Results: A total of 194 responses (194/1877, 10.34% response rate) were received. Approximately one-third (n=59, 30.4%) of respondents reported having used an AI chatbot, with 51.5% (n=100) indicating that they planned to start or would continue using chatbots in the future. In practice, common uses for AI chatbots included summarizing information (n=90, 46.4%), letter of recommendation writing (n=64, 32.9%), and obtaining disease state information (n=63, 32.5%). The 2 main constructs associated with the use of chatbots identified from the TAME-ChatGPT tool included perceived risk of using AI and attitude toward AI. Factors that predicted pharmacists’ current use of AI chatbots included positive attitude toward technology (odds ratio [OR] 3.64, 95% CI 2.08-6.36), coworker use of AI (OR 7.41, 95% CI 2.64-20.8), and working in academia (OR 5.62, 95% CI 1.30-24.23).

Conclusions: Most pharmacist respondents had not used an AI chatbot and were unlikely to make patient care decisions based on information from a chatbot. The TAME-ChatGPT survey is validated for assessing chatbot use and attitudes among pharmacists, and future studies using this survey tool can guide the implementation of chatbots into pharmacy practice.

JMIR Med Educ 2025;11:e71767

doi:10.2196/71767

Artificial Intelligence Chatbots

In November 2022, the artificial intelligence (AI)–based large language model chatbot ChatGPT was launched [1]. It quickly gained popularity, amassing over 100 million monthly users in just 2 months [2]. Since the release of ChatGPT, several other AI chatbots have debuted, including Gemini, Microsoft Bing AI, and Copilot. This expansion in AI chatbots has led to the diversification of their use in fields outside of the technology industry. ChatGPT has been explored for use in health care practice, education, and research, and specialized chatbots such as OpenEvidence and Dougall GPT have emerged, catering specifically to clinicians and health care workers [3-6].

Use of Artificial Intelligence in Health Care

A survey evaluating the use and perceptions of ChatGPT in health care professionals was conducted at Northwestern University [7]. In addition to gathering information regarding applications of ChatGPT in health care, the survey focused on perceptions related to use in health care research, education, and practice. Participants reported uncertainty about the use of ChatGPT due to its recent release, although many respondents indicated an interest in future use. Similarly, in a survey of health care workers in Saudi Arabia, participants expressed concerns about ChatGPT use in health care, including lack of credibility and concerns about inaccurate medical information [8]. Despite these concerns, most respondents still anticipated a positive impact of ChatGPT on the future of health care practice, including medical decision-making, patient and family support, and medical research appraisal. Studies conducted in Jordan, Saudi Arabia, and the United Arab Emirates have reported similar findings among pharmacists [9-11].

These findings were reinforced in a 2023 systematic review that examined 60 studies focusing on the applications of and concerns about ChatGPT in health care practice, education, and research [6]. Potential uses for ChatGPT in practice included clinical documentation, creation of personalized care plans, improved health literacy, and provision of patient education. In education, ChatGPT was used in the creation of personalized learning tools and writing clinical cases. In research, ChatGPT was found to be beneficial in improving writing efficiency. However, in nearly all the studies, there were reported concerns with ChatGPT, including ethical, legal, and copyright issues; inaccuracy; and limited reproducibility. Specific to pharmacy, the use of ChatGPT has been explored with respect to drug information, medication therapy management, patient education, and adverse drug reaction assessment, with the general consensus that it has potential as a supplementary tool but is not yet able to handle complex problems [12-15].

On the basis of the findings from previously conducted surveys and research studies, it is evident that there is current and future potential for the use of AI technology in health care. However, information about the opinions and perceptions of pharmacists regarding the utility of AI chatbots in practice is limited. Pharmacists have been poorly represented in most surveys of health professionals conducted to date, and most studies regarding pharmacist perceptions have been conducted outside of the United States. Understanding the current perceptions, use, and barriers to use of chatbots will help inform and identify future roles for chatbots in pharmacy practice. Pharmacy preceptors are at the forefront of practice and education in the United States; therefore, this study aimed to assess current perceptions and use of AI chatbots in pharmacy practice from the perspective of pharmacist preceptors and determine factors that may influence the use of AI chatbots according to the Technology Acceptance Model Edited to Assess ChatGPT Adoption (TAME-ChatGPT) tool [16].


Overview

A cross-sectional survey was conducted using a convenience sample of pharmacy practice preceptors from Purdue University College of Pharmacy, University of Michigan College of Pharmacy, Wayne State University Eugene Applebaum College of Pharmacy and Health Sciences, University of Illinois Chicago Retzky College of Pharmacy, and Butler University College of Pharmacy and Health Sciences. Current pharmacist preceptors were recruited through their respective offices of experiential education. Pharmacists were excluded from the study if they had not precepted students in the previous year given the recent introduction of AI chatbots or if they did not practice in one of the affiliated states of Illinois, Indiana, or Michigan. Participants were instructed to only take the survey once.

Questionnaire Design and Administration

The survey instrument was built in Qualtrics XM (Qualtrics International Inc) and was primarily based on the TAME-ChatGPT survey tool, a validated tool adapted from the technology acceptance model (TAM) for assessing health care students’ attitudes toward ChatGPT [16]. Before the first survey item, respondents were asked a screening question to ensure that they met the study inclusion criteria. The first section gathered information regarding pharmacists’ practice setting and current use of AI chatbots. The second section contained questions from the TAME-ChatGPT that were adapted to detect an association with different factors of the TAM (perceived ease of use, perceived risk, technology or social influence, anxiety, perceived usefulness, and behavior) and the use of AI chatbots. The survey concluded with questions regarding demographic information and tasks in which preceptors would recommend the use of AI. The instrument was pilot-tested by 3 faculty members to ensure adequate formatting, comprehension, clarity, and completeness of the survey. The final survey instrument is available in Multimedia Appendix 1.

An email invitation to complete the survey instrument was sent to 1877 pharmacy preceptors on February 12, 2024. The survey link was accessible until April 22, 2024, and 3 reminder emails were sent out before the survey closed. Only fully completed surveys were included in the final analysis.

Data Analysis

Data were described using means and SDs for continuous variables and frequencies for nominal variables. Univariable analysis was conducted using the Student 2-tailed t test for continuous variables and the chi-square test for nominal variables. Normality was assessed via visual inspection of Q-Q plots.
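As a concrete illustration of these univariable comparisons, the sketch below applies a 2-tailed t test to a continuous variable and a chi-square test to the coworker-use contingency counts reported in Table 2. The ages are simulated (only means and SDs are taken from Table 2), and the sketch assumes NumPy and SciPy are available; it is not the study's SPSS workflow.

```python
# Illustration of the univariable comparisons described above, using the
# coworker-use counts from Table 2 and simulated (not actual) ages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Continuous variable (age) compared with an independent-samples,
# 2-tailed t test; means and SDs mirror Table 2.
age_users = rng.normal(38.7, 9.1, 59)
age_nonusers = rng.normal(38.7, 9.5, 135)
t_stat, p_t = stats.ttest_ind(age_users, age_nonusers)

# Nominal variable (coworker use of AI: yes / no / unsure) compared with
# a chi-square test on the 2x3 contingency table from Table 2.
table = np.array([[25, 9, 25],    # had used AI chatbots (n=59)
                  [9, 69, 57]])   # had not used AI chatbots (n=135)
chi2_stat, p_chi2, dof, expected = stats.chi2_contingency(table)
print(f"t test P={p_t:.2f}; chi-square P={p_chi2:.2g} (df={dof})")
```

With these counts, the chi-square test reproduces the P<.001 association between coworker use and personal use reported in Table 2.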

An exploratory factor analysis was conducted to assess the construct validity of the TAME-ChatGPT instrument. First, correlation matrices, the Bartlett test, and the Kaiser-Meyer-Olkin measure of sampling adequacy were assessed to ensure that proceeding with factor analysis was appropriate. Multicollinearity was assessed by examining the determinant of the correlation matrix, with a cutoff of <0.00001. If multicollinearity was present, items with r values greater than 0.8 were considered for removal. The number of factors was determined by parallel analysis: an eigenvalue cutoff was derived from randomly generated correlation matrices and compared with the eigenvalues from the data, and factors were retained if the dataset eigenvalue was greater than the corresponding random eigenvalue. Promax rotation was used to allow for correlation of factors. The Cronbach α was calculated to assess internal consistency.
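The parallel analysis step can be sketched as follows. This is a minimal illustration on simulated 2-factor data with the same dimensions as the study dataset (194 respondents, 13 items), using NumPy only; the simulated loadings and the `parallel_analysis` helper are illustrative assumptions, not the authors' SPSS procedure.

```python
import numpy as np

def parallel_analysis(data, n_sims=200, percentile=95, seed=0):
    """Retain factors whose data eigenvalues exceed the corresponding
    eigenvalues from randomly generated data of the same shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    real = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, k))
    for i in range(n_sims):
        r = np.corrcoef(rng.standard_normal((n, k)), rowvar=False)
        sims[i] = np.sort(np.linalg.eigvalsh(r))[::-1]
    cutoffs = np.percentile(sims, percentile, axis=0)
    return int(np.sum(real > cutoffs)), real, cutoffs

# Simulated data: 194 respondents, 13 items driven by 2 latent factors
# (8 "risk" items and 5 "attitude" items), mirroring the study's structure.
rng = np.random.default_rng(1)
latent = rng.standard_normal((194, 2))
items = np.empty((194, 13))
items[:, :8] = 0.8 * latent[:, [0]] + 0.6 * rng.standard_normal((194, 8))
items[:, 8:] = 0.8 * latent[:, [1]] + 0.6 * rng.standard_normal((194, 5))

n_factors, eigenvalues, cutoffs = parallel_analysis(items)
print(f"factors retained: {n_factors}")
```

For this simulated structure, the first two data eigenvalues exceed their random counterparts and the rest do not, so 2 factors are retained, mirroring the primary analysis reported in the Results.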

Multivariable analysis was conducted using logistic regression to assess the association between preceptor characteristics and the components of the TAME-ChatGPT with the outcome of use of AI or future use of AI. Variables included the TAME-ChatGPT constructs and factors that significantly predicted current or future AI use. The number of variables included in the model was limited based on the number of participants who responded that they used AI. Goodness of fit was evaluated using the Hosmer-Lemeshow test (P>.05 indicating model fit) along with −2 log likelihood ratio, with smaller values indicating improved fit. As a secondary analysis, the model’s ability to discriminate between those who reported AI use and those who did not was evaluated using a receiver operating characteristic curve. The area under the curve was used as a summary measure of the model’s discrimination. SPSS (version 29; IBM Corp) was used to conduct the data analysis. A P value of <.05 was considered statistically significant.
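A minimal sketch of this modeling workflow (maximum-likelihood logistic fit, Hosmer-Lemeshow calibration check, and ROC AUC) is shown below on simulated data. It uses only NumPy and SciPy rather than SPSS, so the helper functions and the single-predictor setup are illustrative assumptions, not the study code.

```python
# Fit a logistic regression, check calibration with a Hosmer-Lemeshow-style
# test, and summarize discrimination with the ROC area under the curve.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 194
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # intercept + predictor
true_beta = np.array([-1.0, 1.5])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

def neg_log_likelihood(beta):
    z = X @ beta
    return np.sum(np.log1p(np.exp(z)) - y * z)

beta = minimize(neg_log_likelihood, np.zeros(2), method="BFGS").x
p_hat = 1 / (1 + np.exp(-(X @ beta)))

def hosmer_lemeshow(y, p, g=10):
    """Chi-square comparing observed vs expected events across g risk deciles."""
    groups = np.array_split(np.argsort(p), g)
    stat = sum((y[i].sum() - p[i].sum()) ** 2
               / (p[i].sum() * (1 - p[i].sum() / len(i)))
               for i in groups)
    return stat, chi2.sf(stat, g - 2)  # P > .05 suggests adequate fit

def roc_auc(y, p):
    """AUC via the Mann-Whitney rank formulation."""
    pos, neg = p[y == 1], p[y == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

hl_stat, hl_p = hosmer_lemeshow(y, p_hat)
auc_value = roc_auc(y, p_hat)
print(f"beta={beta.round(2)}, Hosmer-Lemeshow P={hl_p:.2f}, AUC={auc_value:.2f}")
```

Because the simulated outcome is generated from the logistic model, the fit is well calibrated and the AUC is well above 0.5, analogous to the 0.85 and 0.87 values reported in Table 6.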

Ethical Considerations

This study was determined to be exempt research by the institutional review boards at all participating institutions—Butler University, Purdue University, University of Illinois Chicago, University of Michigan, and Wayne State University. The survey responses were anonymous and confidential, and all responses were stored without any identifiers. After survey completion, respondents were directed to an optional link to receive a US $10 Amazon gift card as compensation for participation.


Overview

A total of 235 responses were received. Responses were excluded from the analysis if they were incomplete (n=10, 4.3%); if the respondent practiced outside of Indiana, Illinois, or Michigan (n=4, 1.7%); or if the respondent had not been a preceptor in the previous year (n=27, 11.5%). The final response number was 194, which represented an overall response rate of 10.3% (194/1877). Respondent demographics can be found in Table 1; based on data from the American Association of Colleges of Pharmacy 2024 preceptor survey, respondents were generally representative of pharmacy preceptors with the exception of years and area of practice [17]. Of the 194 respondents who met the inclusion criteria, 59 (30.4%) indicated that they had used an AI chatbot before. A total of 51.5% (100/194) of the respondents indicated that they would continue or planned to start using chatbots in the future.

Among those who had used AI chatbots before and those who had not, demographics were similar with respect to age, gender, and years of practice (Table 2). Most respondents (146/194, 75.3%) indicated that they were unlikely to make a patient care decision based on information from a chatbot. Of the respondents who had used AI chatbots (59/194, 30.4%), there was a substantially higher percentage who practiced in academia and a lower percentage who practiced in the community setting than among those who had not used AI chatbots. Additionally, respondents who had coworkers who used AI chatbots or an institutional AI policy were more likely to have used an AI chatbot themselves.

ChatGPT was the most frequently reported chatbot used (55/59, 93%), followed by Bing Chat (10/59, 17%) and Google Bard, now Gemini (7/59, 12%). The most common uses for AI chatbots in practice included summarizing information (31/59, 53%), letter of recommendation writing (20/59, 34%), and obtaining disease state information (14/59, 24%). Among respondents who had not used AI chatbots in practice before (135/194, 69.6%), 63% (85/135) selected not knowing how to use them effectively as the reason for disuse. Other common reasons for disuse included preference for other resources (80/135, 59.3%), lack of credibility or trust (63/135, 46.7%), and concerns of plagiarism (43/135, 31.9%).

All 194 respondents received the survey questions about their recommendations for the use of AI chatbots by pharmacists and pharmacy students in practice. The top recommended uses for pharmacists included administrative purposes (92/194, 47.4%), summarizing information (90/194, 46.4%), creating meeting agendas (74/194, 38.1%), letter of recommendation writing (64/194, 33%), and obtaining disease state information (63/194, 32.5%); 15.5% (30/194) of respondents indicated that they would not recommend that pharmacists use AI chatbots in practice. In comparison, the top uses recommended for students were summarizing information (61/194, 31.4%), creating meeting agendas (45/194, 23.2%), obtaining disease state information (42/194, 21.6%), administrative purposes (42/194, 21.6%), and conducting literature searches (38/194, 19.6%). In total, 40.2% (78/194) of the respondents indicated that they would not recommend that students use AI chatbots in practice.

Table 1. Demographics of pharmacy preceptor respondents.

| Characteristic | Survey respondents (n=194), n (%) | Respondents to the American Association of Colleges of Pharmacy survey [17] (n=4739), n (%) |
| --- | --- | --- |
| Gender | | |
| Woman | 142 (73.2) | 2609 (56.6) |
| Man | 48 (24.7) | 1665 (36.1) |
| Nonbinary or third gender | 1 (0.5) | NRa |
| Preferred not to self-describe | 3 (1.5) | 327 (7.1) |
| Age (years), mean (SD) | 38.7 (9.4) | NR |
| Years of practice | | |
| 0-5 | 55 (28.4) | 588 (12.6) |
| 6-10 | 51 (26.3) | 951 (20.4) |
| 11-15 | 32 (16.5) | 922 (19.8) |
| >15 | 56 (28.9) | 2097 (45.1) |
| Area of practice | | |
| Hospital | 112 (57.7) | 1521 (32.9) |
| Community | 29 (14.9) | 1251 (27.1) |
| Academia | 16 (8.2) | 172 (3.7) |
| Ambulatory | 16 (8.2) | 717 (15.5) |
| Drug information | 8 (4.1) | NR |
| Managed care | 7 (3.6) | 85 (1.8) |
| Long-term care | 3 (1.5) | NR |
| Industry | 1 (0.5) | 88 (1.9) |
| Other | 14 (7.2) | 604 (13.1) |
| Specialty | 6 (3.1) | NR |
| Consulting | 1 (0.5) | NR |
| Infusion | 1 (0.5) | NR |
| Research | 1 (0.5) | NR |
| Medication safety | 1 (0.5) | NR |

aNR: not reported.

Table 2. Comparison of demographics between respondents who had and had not used artificial intelligence (AI) chatbots.

| Characteristic | Had used AI chatbots (n=59), n (%) | Had not used AI chatbots (n=135), n (%) | P value |
| --- | --- | --- | --- |
| Age (years), mean (SD) | 38.7 (9.1) | 38.7 (9.5) | >.99 |
| Gender | | | .40 |
| Woman | 39 (66.1) | 103 (76.3) | |
| Man | 19 (32.2) | 29 (21.5) | |
| Nonbinary or third gender | 0 (0.0) | 1 (0.7) | |
| Preferred not to self-describe | 1 (1.7) | 2 (1.5) | |
| Years of practice | | | .41 |
| 0-5 | 15 (25.4) | 40 (29.6) | |
| 6-10 | 16 (27.1) | 62 (45.9) | |
| 11-15 | 11 (18.6) | 21 (15.6) | |
| >15 | 17 (28.8) | 39 (28.9) | |
| Area of practice | | | .002 |
| Hospital | 38 (64.4) | 74 (54.8) | |
| Community | 4 (6.8) | 25 (18.5) | |
| Ambulatory | 12 (20.3) | 24 (17.8) | |
| Academia | 11 (18.6) | 5 (3.7) | |
| Specialty | 0 (0.0) | 6 (4.4) | |
| Coworker use of AI chatbots | | | <.001 |
| Yes | 25 (42.4) | 9 (6.7) | |
| No | 9 (15.3) | 69 (51.1) | |
| Unsure | 25 (42.4) | 57 (42.2) | |
| AI policy at practice site | | | .02 |
| Yes | 11 (18.6) | 8 (5.9) | |
| No | 30 (50.8) | 82 (60.7) | |
| Unsure | 18 (30.5) | 45 (33.3) | |
| "How likely would you be to make a health care (ie, patient care or treatment) related recommendation based on the information an AI chatbot (eg, ChatGPT) provides you?" | | | .33 |
| Extremely unlikely | 32 (54.2) | 57 (42.2) | |
| Somewhat unlikely | 16 (27.1) | 41 (30.4) | |
| Neither likely nor unlikely | 9 (15.3) | 25 (18.5) | |
| Somewhat likely | 2 (3.4) | 12 (8.9) | |
| Extremely likely | 0 (0.0) | 0 (0.0) | |
| "How likely would you be to make a policy-related decision based on the information an AI chatbot (eg, ChatGPT) provides you?" | | | .25 |
| Extremely unlikely | 17 (28.8) | 49 (36.3) | |
| Somewhat unlikely | 15 (25.4) | 37 (27.4) | |
| Neither likely nor unlikely | 16 (27.1) | 34 (25.2) | |
| Somewhat likely | 11 (18.6) | 12 (8.9) | |
| Extremely likely | 0 (0.0) | 3 (2.2) | |

Factors Associated With AI Use

The first 13 items of the TAME-ChatGPT were included in the primary exploratory factor analysis, which used data from all survey respondents (overall rate of AI use: 59/194, 30.4%). The overall data were appropriate for conducting factor analysis (Bartlett test χ²₇₈=1288.7; P<.001), and the Kaiser-Meyer-Olkin value (0.82) indicated that sampling was adequate. There was no concern for multicollinearity, and parallel analysis identified 2 factors as the optimal number, with an eigenvalue cutoff of 1.31. The eigenvalues for the 2 factors were 4.49 and 1.97, which explained 49.7% of the cumulative variance. These 2 factors were classified as attitude toward the technology and perceived risk. Descriptive statistics for these constructs are provided in Table 3, and the pattern matrix is shown in Table 4. Including a third factor (eigenvalue=1.155), similar to the original TAME-ChatGPT validation study, explained 57.1% of the cumulative variance; however, factors 2 and 3 were highly correlated (r=0.57), and many items loaded on both. The Cronbach α was 0.86 for both attitude toward technology and perceived risk, indicating good internal consistency.

In the secondary exploratory analysis, all items were included for the 30.4% (59/194) of respondents who had previously used AI. The overall data were appropriate for conducting factor analysis (Bartlett test χ²₅₂₈=1301.5; P<.001), and the Kaiser-Meyer-Olkin value (0.70) indicated that sampling was adequate. There was concern for multicollinearity; therefore, items 4, 14, and 31 were removed. Parallel analysis identified 4 factors as the optimal number, with an eigenvalue cutoff of 2.08. The eigenvalues for the 4 factors were 9.33, 4.36, 2.96, and 2.26, which explained 57.3% of the cumulative variance. The 4 factors identified were attitude toward the technology, perceived usefulness, perceived risk, and ease of use. Descriptive statistics for these constructs are provided in Table 3, and the pattern matrix is shown in Table 5. Cronbach α values indicated good internal consistency for attitude toward the technology (0.92), perceived usefulness (0.92), perceived risk (0.84), and ease of use (0.83).
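The Cronbach α values reported above can be computed directly from an item-score matrix. The sketch below applies the standard formula to simulated Likert-style data (59 respondents, 5 correlated items); the data and the `cronbach_alpha` helper are illustrative, not the study dataset or software.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulate 59 respondents answering 5 items driven by one latent attitude.
rng = np.random.default_rng(0)
latent = rng.standard_normal((59, 1))
scores = latent + 0.5 * rng.standard_normal((59, 5))

alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```

With strongly correlated items like these, α lands in the high range (>0.8), comparable to the 0.83 to 0.92 values reported for the study constructs.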

Table 3. Technology Acceptance Model Edited to Assess ChatGPT Adoption constructs.

| Construct | Score, mean (SD) | P value |
| --- | --- | --- |
| Perceived riska | | <.001 |
| Had used AIb chatbots before | 27.92 (6.32) | |
| Had not used AI chatbots before | 30.59 (6.01) | |
| Attitude toward technologyc | | <.001 |
| Had used AI chatbots before | 19.19 (3.63) | |
| Had not used AI chatbots before | 15.07 (4.06) | |
| Attitude toward technology among usersd | 44.53 (10.02) | e |
| Perceived usefulness among usersf | 14.08 (5.80) | |
| Perceived ease of use among usersg | 11.29 (2.54) | |
| Perceived risk of use among usersh | 33.75 (7.98) | |

aPossible range from 8 to 40, with higher scores indicating lower perceived risk and a score of 24 indicating a neutral attitude.

bAI: artificial intelligence.

cPossible range from 5 to 25, with higher scores indicating positive attitude and a score of 15 indicating a neutral attitude.

dPossible range from 13 to 65, with higher scores indicating positive attitude and a score of 39 indicating a neutral attitude.

eNot applicable.

fPossible range from 6 to 30, with higher scores indicating higher perceived usefulness and a score of 18 indicating a neutral attitude.

gPossible range from 3 to 15, with higher scores indicating perceived ease of use and a score of 9 indicating a neutral attitude.

hPossible range from 5 to 50, with higher scores indicating lower perceived risk and a score of 27.5 indicating a neutral attitude.

Table 4. Pattern matrix of the 2 inferred factors for all respondents irrespective of previous artificial intelligence (AI) chatbot use.

| Item | Perceived risk | Attitude toward technology |
| --- | --- | --- |
| "I am concerned about the reliability of the information provided by AI chatbots." | 0.435 | <0.400 |
| "I am concerned that using AI chatbots is considered plagiarism." | 0.617 | <0.400 |
| "I fear relying too much on AI chatbots may decrease my critical thinking skills." | 0.585 | <0.400 |
| "I am concerned about the potential security risks of using AI chatbots." | 0.798 | <0.400 |
| "I am afraid of becoming too dependent on technology like AI chatbots." | 0.618 | <0.400 |
| "I am afraid that using AI chatbots would result in a lack of originality in my work." | 0.688 | <0.400 |
| "I am afraid that the use of the AI chatbots would be a violation of workplace policies." | 0.655 | <0.400 |
| "I am concerned about the potential privacy risks that might be associated with using AI chatbots." | 0.786 | <0.400 |
| "I am enthusiastic about using technology, such as AI chatbots for learning, practice, and research." | <0.400 | 0.887 |
| "I believe technology, such as AI chatbots is an important tool for workplace success." | <0.400 | 0.844 |
| "I think that technology like AI chatbots is attractive and fun to use." | <0.400 | 0.871 |
| "I am always open to learning about new technologies like AI chatbots." | <0.400 | 0.672 |
| "I trust the opinions of my friends or colleagues about using AI chatbots." | <0.400 | 0.444 |

Logistic regression was conducted to predict current and future use of AI (Table 6). Factors that remained significant in the model that predicted current AI use included positive attitude toward technology, coworker use of AI, and the respondent practicing in academia. Factors that predicted future use of AI included perceived risk, positive attitude toward technology, and coworker use.

Table 5. Pattern matrix of the 4 inferred factors for only those who had used artificial intelligence (AI) chatbots before.

| Item | Attitude toward technology | Perceived usefulness | Ease of use | Perceived risk |
| --- | --- | --- | --- | --- |
| "I am concerned about the reliability of the information provided by AI chatbots." | <0.400 | <0.400 | <0.400 | 0.530 |
| "I am concerned that using AI chatbots is considered plagiarism." | <0.400 | <0.400 | <0.400 | 0.652 |
| "I fear relying too much on AI chatbots may decrease my critical thinking skills." | <0.400 | <0.400 | <0.400 | 0.457 |
| "I am afraid of becoming too dependent on technology like AI chatbots." | <0.400 | <0.400 | <0.400 | 0.746 |
| "I am afraid that using AI chatbots would result in a lack of originality in my work." | <0.400 | <0.400 | <0.400 | 0.762 |
| "I am afraid that the use of the AI chatbots would be a violation of workplace policies." | <0.400 | <0.400 | <0.400 | 0.582 |
| "I am concerned about the potential privacy risks that might be associated with using AI chatbots." | <0.400 | <0.400 | <0.400 | 0.750 |
| "I am enthusiastic about using technology, such as AI chatbots for learning, practice, and research." | 0.514 | <0.400 | <0.400 | <0.400 |
| "I believe technology, such as AI chatbots is an important tool for workplace success." | 0.611 | <0.400 | <0.400 | <0.400 |
| "I think that technology like AI chatbots is attractive and fun to use." | 0.609 | <0.400 | <0.400 | <0.400 |
| "I am always open to learning about new technologies like AI chatbots." | 0.498 | <0.400 | <0.400 | <0.400 |
| "I trust the opinions of my friends or colleagues about using AI chatbots." | <0.400 | <0.400 | <0.400 | −0.571 |
| "For me, AI chatbots are a convenient method for accessing medical information." | <0.400 | 0.913 | <0.400 | <0.400 |
| "For me, AI chatbots are a reliable source of accurate medical information." | <0.400 | 0.855 | <0.400 | <0.400 |
| "AI chatbots help me in better understanding of difficult medical topics and concepts." | <0.400 | 0.878 | <0.400 | <0.400 |
| "AI chatbots make it easier for me to complete tasks in my workplace." | 0.784 | <0.400 | <0.400 | <0.400 |
| "I recommend AI chatbots to my colleagues to facilitate their work." | 0.807 | <0.400 | <0.400 | <0.400 |
| "AI chatbots are more useful than other sources of medical information that I have used previously." | <0.400 | 0.843 | <0.400 | <0.400 |
| "I think that using AI chatbots has helped to improve my overall workplace performance." | 0.737 | <0.400 | <0.400 | <0.400 |
| "I have used tools similar to AI chatbots in the past in my workplace." | 0.679 | <0.400 | <0.400 | <0.400 |
| "I spontaneously find myself using AI chatbots when I need medical information for my work." | <0.400 | 0.731 | <0.400 | <0.400 |
| "I often use AI chatbots as a source of medical information in my workplace." | <0.400 | 0.766 | <0.400 | <0.400 |
| "I appreciate the convenience and efficiency that AI chatbots provide for my work." | 0.690 | <0.400 | <0.400 | <0.400 |
| "I think that relying on technology like AI chatbots can disrupt my critical thinking skills." | <0.400 | <0.400 | <0.400 | 0.674 |
| "I appreciate the accuracy and reliability of the medical information provided by AI chatbots." | <0.400 | 0.821 | <0.400 | <0.400 |
| "I believe that using AI chatbots can save time and effort in my workplace." | 0.828 | <0.400 | <0.400 | <0.400 |
| "It does not take a long time to learn how to use AI chatbots." | <0.400 | <0.400 | 0.782 | <0.400 |
| "Using AI chatbots does not require extensive technical knowledge." | <0.400 | <0.400 | 0.828 | <0.400 |
| "I do not face many difficulties when using AI chatbots." | <0.400 | <0.400 | 0.787 | <0.400 |
| "The positive experiences of others have encouraged me to use AI chatbots." | 0.486 | <0.400 | <0.400 | <0.400 |
| "I believe that people I know have improved their workplace performance as a result of using AI chatbots." | 0.796 | <0.400 | <0.400 | <0.400 |
| "I think using AI chatbots is important for me to keep up with my peers professionally." | 0.542 | <0.400 | <0.400 | <0.400 |
Table 6. Predictors of current and future artificial intelligence (AI) chatbot use.

| Predictor | ORa (95% CI) | P value |
| --- | --- | --- |
| Current use of AIb | | |
| Perceived risk | 0.98 (0.61-1.56) | .94 |
| Attitude toward technology | 3.64 (2.08-6.36) | <.001 |
| Coworker use of AI | 7.41 (2.64-20.80) | <.001 |
| AI policy present | 2.72 (0.80-9.20) | .11 |
| Academia | 5.62 (1.30-24.23) | .02 |
| Community | 0.52 (0.14-1.90) | .32 |
| Future use of AIc | | |
| Perceived risk | 0.63 (0.41-0.96) | .03 |
| Attitude toward technology | 4.11 (2.42-6.97) | <.001 |
| Coworker use of AI | 33.00 (5.02-216.76) | <.001 |
| AI policy present | 2.24 (0.60-8.43) | .23 |
| Academia | 2.06 (0.42-10.21) | .38 |
| Community | 1.09 (0.41-2.89) | .86 |

aOR: odds ratio.

bHosmer and Lemeshow P=.15; area under the curve 0.85.

cHosmer and Lemeshow P=.71; area under the curve 0.87.


Approximately one-third of pharmacy preceptors (59/194, 30.4%) reported use of an AI chatbot, with approximately half (100/194, 51.5%) indicating that they planned to start or would continue using chatbots in the future. Consistent with findings from other studies, we found that most respondents were unlikely to make patient care decisions based on information provided by an AI chatbot. However, they did report use for administrative tasks such as summarizing information and writing letters of recommendation. To our knowledge, this is the first study using the TAME-ChatGPT assessment tool among pharmacists. The findings show that this tool is valid and reliable for assessing pharmacists’ attitudes toward chatbots and their use in pharmacy practice. Pharmacists’ attitudes toward chatbots were largely influenced by their attitude toward the technology and their perceived risk related to use of the technology. Furthermore, among pharmacists who had used chatbots, attitudes toward use of chatbots were affected by the same 2 factors plus their perceived usefulness and ease of use. Positive attitudes toward technology, having coworkers who use AI, and working in academia predicted current use of AI chatbots, whereas factors predicting future use of the technology included perceived risk, positive attitudes toward technology, and coworker use of AI chatbots. Our results affirm that, when adopting ChatGPT and other AI chatbots, it is important to consider perceptions of risk, usefulness, and ease of use, as well as the users’ attitudes toward technology.

Significantly more pharmacists practicing in academia had used chatbots than pharmacists in other practice settings, perhaps because academic pharmacists are often at the forefront of exploring new technologies and their applications in pharmacy practice and may be more comfortable adopting them. While academic pharmacists may lead chatbot adoption, the use of AI chatbots is expected to increase across all pharmacy practice settings as the technology becomes more refined and its benefits become more apparent. Since the time of our data collection, more advanced AI chatbots have become available, such as OpenEvidence, which may provide more targeted information for pharmacists to use in practice.

However, most pharmacists surveyed (135/194, 69.6%) had not used an AI chatbot, with nearly two-thirds of them (85/135, 63%) stating that their reason for not using chatbots was that they did not know how to use them effectively; 46.7% (63/135) cited lack of credibility or trust in chatbots as the reason for lack of use. AI chatbots are relatively new technologies in health care, and many pharmacists may not have had sufficient exposure or training to use them effectively. Furthermore, there is currently no standardized approach to training pharmacists or other health care providers on the use of AI chatbots; thus, knowledge and skills across the profession may be inconsistent. Knowledge on how to use AI chatbots will likely improve as they become more prevalent in health care settings. Pharmacists will gain hands-on experience with these tools, become more comfortable using them, and learn to integrate them into their daily workflows. Preceptors are in an ideal position to help guide students who are just learning about AI capabilities and limitations on the optimal use of this new technology. Health care organizations, educational institutions, and postgraduate training programs should consider including training on the appropriate use of AI and risks of inappropriate use.

Our findings differ somewhat from those of the work by Sallam et al [16], who validated the TAME-ChatGPT in a sample of Jordanian health care students and identified factors affecting their attitudes toward ChatGPT and use of ChatGPT. Attitudes of Jordanian students toward ChatGPT were influenced by an additional third factor, anxiety related to fear of ChatGPT; attitudes toward use of ChatGPT were affected by behavior as a fourth factor instead of attitudes toward technology [16]. The differences are likely due to differences in the populations studied; US pharmacists and Jordanian students represent very different populations based on culture, age, practice experience, and other characteristics. Different approaches to factor analysis may also contribute to differences.

Risk perception is known to be a key factor affecting decision-making, and perception of the risks associated with chatbots significantly influenced pharmacists’ attitudes toward chatbots and their use [18]. The credibility of AI chatbots in health care is a complex and evolving issue. Reliability and accuracy can vary significantly between chatbots, and not all AI chatbots are reliable sources of information. Inaccuracies, hallucinations, the potential for biased responses, and the inability of chatbots to provide nuanced or context-specific information have all been documented [12,19-21]. These potential barriers highlight the need for scrutiny and oversight of chatbot responses by health care professionals. As the technologies continue to develop, ongoing research and validation will be crucial to establish and maintain the credibility of AI chatbots in health care applications and to minimize their potential for harm.

This study has several limitations. Only pharmacist preceptors in the Midwest who were affiliated with the participating colleges were surveyed, so the results may not be representative of pharmacy preceptors across the United States. However, compared with the 2024 American Association of Colleges of Pharmacy preceptor survey, the surveyed sample appears fairly representative of US preceptors aside from years and area of practice [17]. Our response rate was low, which may be due to technology-related factors (eg, the invitation email going to spam folders) and may have introduced selection bias if preceptors with experience using AI were more likely to respond. Because the survey was anonymous, we could not explore differences between respondents and nonrespondents, and it is possible that the reported use of AI at the time of our survey overestimated use in the target population. Additionally, the AI chatbot landscape is evolving rapidly; although the survey was conducted recently, perceptions can change quickly as more people are exposed to chatbots and begin using them in their daily work. Future studies could expand the surveyed population to pharmacist preceptors nationwide or to all pharmacists and follow respondents over time to evaluate changes in responses. The validated survey could also be readministered after training and education on AI chatbots to determine how perceptions of the technology have changed.

At the time of our study, almost 2 years after AI chatbots were introduced, pharmacist preceptors remained hesitant to adopt the new technology, with only approximately one-third of respondents (59/194, 30.4%) indicating that they had used a chatbot in practice. Preceptors were reluctant to use chatbots for clinical decisions and were uncertain about their place in practice. This study also demonstrated that the TAME-ChatGPT survey is a reliable, validated tool for assessing pharmacists’ attitudes toward and use of chatbots. Constructs from the TAM, including attitude toward technology, perceived usefulness, ease of use, and perceived risk of use, as well as practice setting and coworker use, can predict pharmacist use of AI chatbots. Future studies with this validated tool can guide the implementation of chatbots into pharmacy practice and help inform policymakers and organizational leaders about the education and training needed to promote the safe and effective use of AI chatbots in pharmacy practice.

Acknowledgments

The authors would like to acknowledge Dr Margie E Snyder, Dr Darren Covington, Dr Sarah E Vordenberg, and Dr Faria Munir for their assistance in this project.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Survey instrument.

PDF File (Adobe PDF File), 140 KB

  1. Introducing ChatGPT. OpenAI. URL: https://openai.com/blog/chatgpt [accessed 2024-07-17]
  2. Vogels E. A majority of Americans have heard of ChatGPT, but few have tried it themselves. Pew Research Center. May 24, 2023. URL: https:/​/www.​pewresearch.org/​short-reads/​2023/​05/​24/​a-majority-of-americans-have-heard-of-chatgpt-but-few-have-tried-it-themselves/​ [accessed 2024-07-17]
  3. Welcome to Dougall GPT. Dougall GPT. URL: https://dougallgpt.com/user/login [accessed 2024-07-17]
  4. Home page. OpenEvidence. URL: https://www.openevidence.com/ [accessed 2024-07-17]
  5. Singhal K, Azizi S, Tu T, Mahdavi SS, Wei J, Chung HW, et al. Large language models encode clinical knowledge. Nature. Jul 12, 2023;620(7972):172-180. [FREE Full text] [CrossRef] [Medline]
  6. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare. Mar 19, 2023;11(6):887. [FREE Full text] [CrossRef] [Medline]
  7. Hosseini M, Gao CA, Liebovitz DM, Carvalho AM, Ahmad FS, Luo Y, et al. An exploratory survey about using ChatGPT in education, healthcare, and research. PLoS ONE. Oct 5, 2023;18(10):e0292216. [FREE Full text] [CrossRef]
  8. Temsah MH, Aljamaan F, Malki KH, Alhasan K, Altamimi I, Aljarbou R, et al. ChatGPT and the future of digital health: a study on healthcare workers' perceptions and expectations. Healthcare (Basel). Jun 21, 2023;11(13):1812. [CrossRef] [Medline]
  9. Abu Hammour K, Alhamad H, Al-Ashwal FY, Halboup A, Abu Farha R, Abu Hammour A. ChatGPT in pharmacy practice: a cross-sectional exploration of Jordanian pharmacists' perception, practice, and concerns. J Pharm Policy Pract. Oct 03, 2023;16(1):115. [FREE Full text] [CrossRef] [Medline]
  10. Alghitran A, AlOsaimi HM, Albuluwi A, Almalki E, Aldowayan A, Alharthi R, et al. Integrating ChatGPT as a tool in pharmacy practice: a cross-sectional exploration among pharmacists in Saudi Arabia. Integr Pharm Res Pract. Mar 17, 2025;14:31-43. [FREE Full text] [CrossRef]
  11. Jairoun AA, Al-Hemyari SS, Shahwan M, Alnuaimi GR, Ibrahim N, Jaber AA. Capturing pharmacists' perspectives on the value, risks, and applications of ChatGPT in pharmacy practice: a qualitative study. Explor Res Clin Soc Pharm. Dec 2024;16:100518. [FREE Full text] [CrossRef] [Medline]
  12. Lima TD, Bonafé M, Baby AR, Visacri MB. ChatGPT in pharmacy practice: disruptive or destructive innovation? A scoping review. Sci Pharm. Oct 21, 2024;92(4):58. [FREE Full text] [CrossRef]
  13. Khatri S, Sengul A, Moon J, Jackevicius CA. Accuracy and reproducibility of ChatGPT responses to real‐world drug information questions. J Am Coll Clin Pharm. Apr 22, 2025;8(6):432-438. [CrossRef]
  14. Roosan D, Padua P, Khan R, Khan H, Verzosa C, Wu Y. Effectiveness of ChatGPT in clinical pharmacy and the role of artificial intelligence in medication therapy management. J Am Pharm Assoc (2003). 2024;64(2):422-8.e8. [FREE Full text] [CrossRef] [Medline]
  15. Huang X, Estau D, Liu X, Yu Y, Qin J, Li Z. Evaluating the performance of ChatGPT in clinical pharmacy: a comparative study of ChatGPT and clinical pharmacists. Br J Clin Pharmacol. Jan 2024;90(1):232-238. [FREE Full text] [CrossRef] [Medline]
  16. Sallam M, Salim NA, Barakat M, Al-Mahzoum K, Al-Tammemi AB, Malaeb D, et al. Assessing health students' attitudes and usage of ChatGPT in Jordan: validation study. JMIR Med Educ. Sep 05, 2023;9:e48254. [FREE Full text] [CrossRef] [Medline]
  17. American Association of Colleges of Pharmacy 2024 preceptor survey. American Association of Colleges of Pharmacy. URL: https://www.aacp.org/sites/default/files/2024-10/2024-preceptor-survey-national-summary-report.pdf [accessed 2024-07-17]
  18. Dergaa I, Ben Saad H, Glenn JM, Amamou B, Ben Aissa M, Guelmami N, et al. From tools to threats: a reflection on the impact of artificial-intelligence chatbots on cognitive health. Front Psychol. 2024;15:1259845. [FREE Full text] [CrossRef] [Medline]
  19. Williams DJ, Noyes JM. How does our perception of risk influence decision-making? Implications for the design of risk information. Theor Issues Ergonomics Sci. Jan 23, 2007;8(1):1-35. [FREE Full text] [CrossRef]
  20. Deng J, Lin Y. The benefits and challenges of ChatGPT: an overview. Front Comput Intell Syst. Jan 05, 2023;2(2):81-83. [FREE Full text] [CrossRef]
  21. Xue J, Wang Y, Wei C, Liu X, Woo J, Kuo CJ. Bias and fairness in chatbots: an overview. APSIPA Trans Signal Inf Process. Sep 16, 2023;13(2):e102. [FREE Full text] [CrossRef]


AI: artificial intelligence
OR: odds ratio
TAM: technology acceptance model
TAME-ChatGPT: Technology Acceptance Model Edited to Assess ChatGPT Adoption


Edited by R Pellegrino; submitted 26.Jan.2025; peer-reviewed by C Wang, I Akpan; comments to author 13.Jun.2025; revised version received 04.Aug.2025; accepted 21.Aug.2025; published 21.Nov.2025.

Copyright

©Anly Li, Amy Heck Sheehan, Christopher Giuliano, Paul Dobry, Paul Walker, Jennifer Philips, Joseph Jordan. Originally published in JMIR Medical Education (https://mededu.jmir.org), 21.Nov.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.