Published in Vol 9 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/47532.
The Accuracy and Potential Racial and Ethnic Biases of GPT-4 in the Diagnosis and Triage of Health Conditions: Evaluation Study

Journals

  1. Li J, Dada A, Puladi B, Kleesiek J, Egger J. ChatGPT in healthcare: A taxonomy and systematic review. Computer Methods and Programs in Biomedicine 2024;245:108013.
  2. Meral G, Ateş S, Günay S, Öztürk A, Kuşdoğan M. Comparative analysis of ChatGPT, Gemini and emergency medicine specialist in ESI triage assessment. The American Journal of Emergency Medicine 2024;81:146.
  3. Micali G, Corallo F, Pagano M, Giambò F, Duca A, D’Aleo P, Anselmo A, Bramanti A, Garofano M, Mazzon E, Bramanti P, Cappadona I. Artificial Intelligence and Heart-Brain Connections: A Narrative Review on Algorithms Utilization in Clinical Practice. Healthcare 2024;12(14):1380.
  4. Wan P, Huang Z, Tang W, Nie Y, Pei D, Deng S, Chen J, Zhou Y, Duan H, Chen Q, Long E. Outpatient reception via collaboration between nurses and a large language model: a randomized controlled trial. Nature Medicine 2024;30(10):2878.
  5. Mathis W, Zhao S, Pratt N, Weleff J, De Paoli S. Inductive thematic analysis of healthcare qualitative interviews using open-source large language models: How does it compare to traditional methods? Computer Methods and Programs in Biomedicine 2024;255:108356.
  6. Currie G, Currie J, Anderson S, Hewis J. Gender bias in generative artificial intelligence text-to-image depiction of medical students. Health Education Journal 2024;83(7):732.
  7. Currie G, John G, Hewis J. Gender and ethnicity bias in generative artificial intelligence text-to-image depiction of pharmacists. International Journal of Pharmacy Practice 2024;32(6):524.
  8. Young C, Enichen E, Rao A, Succi M. Racial, ethnic, and sex bias in large language model opioid recommendations for pain management. Pain 2024.
  9. Agrawal A. Fairness in AI-Driven Oncology: Investigating Racial and Gender Biases in Large Language Models. Cureus 2024.
  10. Currie G, Hewis J, Hawk E, Rohren E. Gender and Ethnicity Bias of Text-to-Image Generative Artificial Intelligence in Medical Imaging, Part 1: Preliminary Evaluation. Journal of Nuclear Medicine Technology 2024:jnmt.124.268332.
  11. Le K, Chen J, Mai D, Le K. An Evaluation on the Potential of Large Language Models for Use in Trauma Triage. Emergency Care and Medicine 2024;1(4):350.
  12. Yeo Y, Peng Y, Mehra M, Samaan J, Hakimian J, Clark A, Suchak K, Krut Z, Andersson T, Persky S, Liran O, Spiegel B. Evaluating for Evidence of Sociodemographic Bias in Conversational AI for Mental Health Support. Cyberpsychology, Behavior, and Social Networking 2024.
  13. Tovar-Arriaga S, Pérez-Soto G, Camarillo-Gómez K, Aviles M, Rodríguez-Reséndiz J. Perspectives, Challenges, and the Future of Biomedical Technology and Artificial Intelligence. Technologies 2024;12(11):212.

Books/Policy Documents

  1. Di Ieva A, Stewart C, Suero Molina E. Computational Neurosurgery.