Published in Vol 10 (2024)

Patients, Doctors, and Chatbots


Authors of this article:

Thomas C Erren1


Institute and Policlinic for Occupational Medicine, Environmental Medicine and Prevention Research, University Hospital of Cologne, University of Cologne, Köln (Zollstock), Germany

Corresponding Author:

Thomas C Erren, MPH, MD

Institute and Policlinic for Occupational Medicine, Environmental Medicine and Prevention Research

University Hospital of Cologne

University of Cologne

Berlin-Kölnische Allee 4

Köln (Zollstock), 50937


Phone: 49 022147876780

Fax: 49 022147876795


Medical advice is key to the relationship between doctor and patient. The question I will address is “how may chatbots affect the interaction between patients and doctors with regard to medical advice?” I describe what lies ahead when chatbots are used and identify many questions for the daily work of doctors. I conclude with a gloomy outlook, expectations for the urgently needed ethical discourse, and a hope in relation to humans and machines.

JMIR Med Educ 2024;10:e50869



While I strive to provide accurate and helpful information, I am not a substitute for medical advice or professional judgment, and it’s always important for patients and healthcare providers to work together to develop a personalized treatment plan that takes into account a patient’s individual needs and circumstances.
[ChatGPT, 2023]

Medical advice (MA) is key to the relationship between doctor and patient. The question I will address is “how may chatbots affect the interaction between patients and doctors with regard to medical advice?” To this end, I shall consider—and go beyond—what was recently outlined regarding MA in “A Conversation With ChatGPT” [1].

Advances in artificial intelligence (AI) and chatbots are changing the world, including medicine [2-4]. ChatGPT is a generative pretrained transformer model based on OpenAI’s GPT-3. Drawing on word correlations learned across its 175 billion parameters, ChatGPT floods us with meaningful but also nonsensical information.

Concerning the interaction between patients, doctors, and chatbots, I describe what lies ahead when using chatbots and identify many questions for the daily work of doctors. I conclude with a gloomy outlook, expectations for urgently needed ethical discourse [5,6], and a hope in relation to humans and machines [3,7].

How ChatGPT describes its role [1]—“I am not a substitute for medical advice”—should be a fact. Doctors, as the only authoritative providers of professional MA, must always be in the driver’s seat. Chatbots have the potential to help with the task of contributing general information to an information chain. Importantly, doctors need to review and question all AI output and see if and how it contributes to a patient’s understanding and fits within MA. Depending on the expectations and hopes that ChatGPT raises in patients, this task could become an unprecedented challenge.

With their up-to-date knowledge, medical experience, and expertise, doctors need to integrate personal, specific, and general information into their comprehensive MA to patients. Chatbots are limited to general information stored in databases. Concerningly, ChatGPT invents facts, termed “hallucination” in AI [3]. Moreover, ChatGPT can produce nonsensical or “bullshit” [8] information, nicely worded and seemingly justified but disregarding truth and facts—disconcertingly, we do not readily know how often and when ChatGPT offers “bullshit” or nonsense responses.

Nevertheless, ChatGPT will be used by many simply because it is there, seemingly easy to use, and, importantly, free.

Is it, therefore, likely that we can do without chatbots? No, because society will not abandon ChatGPT or other advanced chatbot tools [3]. The sooner we understand the chatbot information that patients receive, the better. Realistically, ChatGPT is just the tip of an AI iceberg. The “Godfather of AI” [9] Hinton and OpenAI’s chief executive officer Altman [10] have warned forcefully about the speed, impact, and inevitability of AI developments.

Doctors routinely deal with both informed and misinformed patients whose views are fueled by online health searches (eg, “Dr Google” [11]). Indeed, the internet has become the starting point for many to ask questions about health, disrupting traditional doctor-patient relationships [12] and leading to potential harm from online misinformation [11]. Importantly, neither patients nor doctors should give away too much information when using AI. Even if MA could get better with more details, can we know whether this information is being used beyond MA? Indeed, to what extent may creating MA be used as an AI Trojan horse to extract information for other purposes, including business benefits? Which biases go into AI-based medical information, for instance, through training data that represent neither the ethnicity nor the financial circumstances of diverse patients? That medically advanced AI may become expensive raises questions of equity: who will have access to these technologies?

What knowledge do doctors need to understand medical AI advice? How can AI-based medical information be used [13], and how do you deal with medical information that AI cannot explain [14]? Could doctors working with chatbot-provided diagnoses and AI-recommended treatments miss the true picture and become overreliant on AI? Who is liable when doctors use AI medical information, and to come full circle, who is liable when they do not [2,15]? Could there come a time when not considering AI such as ChatGPT constitutes less than adequate advice and nonstandard care [15]? Doctors should ask their liability insurer how (ie, under what conditions) and to what extent the insurer covers the use, or nonuse, of AI in practice [15].

Key orientation for interactions between patients, doctors, and chatbots regarding MA can come from physicians’ professional organizations and the US Food and Drug Administration. Similar to practice guidelines [15], recommendations and guardrails for practice-specific medical information via chatbots may have to be developed.

That ChatGPT “strive(s) to provide accurate and helpful information” [1] has a stale empirical aftertaste. In fact, according to OpenAI, advanced AI [16] will make reviewing chatbot information even more difficult. GPT-4 (eg, in Microsoft Bing and ChatGPT Plus), with 571 times as many learned parameters as GPT-3, has “learned” to deliver incorrect work more convincingly than earlier models. Such mistakes will pose severe problems even if “[ChatGPT] admits these when challenged” [1].

PubMed-listed comparisons between GPT-3 and GPT-4 suggest that the latter may provide more accurate patient information in nuclear medicine [17]. Another study suggested that both free and paid versions of ChatGPT risk providing misleading responses when used without expert MA [18]. The finding that chatbot medical information is written at a college reading level suggests that such AI tools may serve as a supplementary, but not primary, source of medical information [19], emphasizing the doctor’s key role in MA. More research is needed on MA in numerous medical fields and settings, for numerous applications, and for various populations.

Overall, when AI experts at the University of California, Berkeley explored and discussed the implications of ChatGPT and AI and future challenges in the spring of 2023, there was an explicit call for more ethical considerations [6,20]. Priority safety measures include strict regulations for patient privacy and ethical practices [21]. While the questions above are not exhaustive, it is time to systematically answer them regarding MA and the unavoidable interaction of patients, doctors, and chatbots.

Ultimately, we can only hope that the boundaries between humans and machines [3] will never become so blurred that patients cannot distinguish the MA of a human doctor from the general information provided by ChatGPT [22] or other AI.


TCE acknowledges stimulating working conditions as a visiting scholar at the University of California, Berkeley. Support is acknowledged for the article processing charge from the DFG (Deutsche Forschungsgemeinschaft / German Research Foundation, 491454339).

Conflicts of Interest

None declared.

  1. Eysenbach G. The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Med Educ. Mar 06, 2023;9:e46885. [FREE Full text] [CrossRef] [Medline]
  2. Haupt CE, Marks M. AI-generated medical advice-GPT and beyond. JAMA. Apr 25, 2023;329(16):1349-1350. [CrossRef] [Medline]
  3. Shaw D, Morfeld P, Erren T. The (mis)use of ChatGPT in science and education: Turing, Djerassi, "athletics" & ethics. EMBO Rep. Jul 05, 2023;24(7):e57501. [CrossRef] [Medline]
  4. Coghlan S, Leins K, Sheldrick S, Cheong M, Gooding P, D'Alfonso S. To chat or bot to chat: ethical issues with using chatbots in mental health. Digit Health. 2023;9:20552076231183542. [FREE Full text] [CrossRef] [Medline]
  5. Akerson M, Andazola M, Moore A, DeCamp M. More than just a pretty face? Nudging and bias in chatbots. Ann Intern Med. Jul 2023;176(7):997-998. [CrossRef] [Medline]
  6. Erren TC, Lewis P, Shaw DM. Brave (in a) new world: an ethical perspective on chatbots for medical advice. Front Public Health. 2023;11:1254334. [FREE Full text] [CrossRef] [Medline]
  7. Turing AM. I.—Computing machinery and intelligence. Mind. Oct 1950;LIX(236):433-460. [CrossRef]
  8. Frankfurt HG. On Bullshit. Princeton, NJ. Princeton University Press; 2005.
  9. Metz C. ‘The Godfather of A.I.’ leaves Google and warns of danger ahead. The New York Times. May 02, 2023. URL: [accessed 2023-10-18]
  10. Fung B. Mr. ChatGPT goes to Washington: OpenAI CEO Sam Altman testifies before Congress on AI risks. CNN. May 16, 2023. URL: [accessed 2023-10-18]
  11. Hyman I. The risks of consulting Dr. Google. Psychology Today. Apr 29, 2020. URL: [accessed 2023-10-19]
  12. Freckelton I. Internet disruptions in the doctor-patient relationship. Med Law Rev. Aug 01, 2020;28(3):502-525. [CrossRef] [Medline]
  13. Zhu Y, Wang R, Pu C. "I am chatbot, your virtual mental health adviser." What drives citizens' satisfaction and continuance intention toward mental health chatbots during the COVID-19 pandemic? An empirical study in China. Digit Health. 2022;8:20552076221090031. [FREE Full text] [CrossRef] [Medline]
  14. Wang F, Kaushal R, Khullar D. Should health care demand interpretable artificial intelligence or accept "Black Box" medicine? Ann Intern Med. Jan 07, 2020;172(1):59-60. [CrossRef] [Medline]
  15. Price WN, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA. Nov 12, 2019;322(18):1765-1766. [CrossRef] [Medline]
  16. GPT-4 technical report. OpenAI. Mar 27, 2023. URL: [accessed 2023-10-18]
  17. Currie G, Robbie S, Tually P. ChatGPT and patient information in nuclear medicine: GPT-3.5 versus GPT-4. J Nucl Med Technol. Dec 05, 2023;51(4):307-313. [CrossRef] [Medline]
  18. Deiana G, Dettori M, Arghittu A, Azara A, Gabutti G, Castiglia P. Artificial intelligence and public health: evaluating ChatGPT responses to vaccination myths and misconceptions. Vaccines (Basel). Jul 07, 2023;11(7):1217. [FREE Full text] [CrossRef] [Medline]
  19. Pan A, Musheyev D, Bockelman D, Loeb S, Kabarriti AE. Assessment of artificial intelligence chatbot responses to top searched queries about cancer. JAMA Oncol. Oct 01, 2023;9(10):1437-1440. [CrossRef] [Medline]
  20. Manke K. AI lectures at Berkeley to explore possibilities, implications of ChatGPT. Berkeley News. Mar 10, 2023. URL: https:/​/news.​​2023/​03/​10/​ai-lectures-at-berkeley-to-explore-possibilities-implications-of-chatgpt/​ [accessed 2023-10-19]
  21. The Lancet. AI in medicine: creating a safe and equitable future. Lancet. Aug 12, 2023;402(10401):503. [CrossRef] [Medline]
  22. Nov O, Singh N, Mann D. Putting ChatGPT's medical advice to the (Turing) test: survey study. JMIR Med Educ. Jul 10, 2023;9:e46939. [FREE Full text] [CrossRef] [Medline]

AI: artificial intelligence
MA: medical advice

Edited by K Venkatesh; submitted 14.07.23; peer-reviewed by A Mihalache, N Patil, I Mircheva; comments to author 14.10.23; revised version received 19.10.23; accepted 08.11.23; published 04.01.24.


©Thomas C Erren. Originally published in JMIR Medical Education, 04.01.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.