Search Articles

Search Results (1 to 6 of 6 Results)

Examining the Role of Large Language Models in Orthopedics: Systematic Review

In addition to ChatGPT, applications include Bard (upgraded to Gemini in December 2023), based on the Language Model for Dialogue Applications (Google LLC); Med-PaLM 2 (Google LLC); ERNIE Bot (Baidu); and MOSS (Fudan University). GPT-4 can approach or achieve human-level performance in cognitive tasks across various fields, including medical domains [4].

Cheng Zhang, Shanshan Liu, Xingyu Zhou, Siyu Zhou, Yinglun Tian, Shenglin Wang, Nanfang Xu, Weishi Li

J Med Internet Res 2024;26:e59607

Comparing the Efficacy and Efficiency of Human and Generative AI: Qualitative Thematic Analyses

Of these 7 themes identified by human coders, 5 (71%) were also consistent with the themes derived by both ChatGPT and Bard. Multimedia Appendix 1 presents a complete mapping of the inductive thematic analysis codebooks (including theme, description, and example text messages) generated by human coders, ChatGPT, and Bard.

Maximo R Prescott, Samantha Yeager, Lillian Ham, Carlos D Rivera Saldana, Vanessa Serrano, Joey Narez, Dafna Paltin, Jorge Delgado, David J Moore, Jessica Montoya

JMIR AI 2024;3:e54482

Assessing the Reproducibility of the Structured Abstracts Generated by ChatGPT and Bard Compared to Human-Written Abstracts in the Field of Spine Surgery: Comparative Analysis

Furthermore, the differences between ChatGPT and Bard in abstract generation have not yet been studied. This study aimed to evaluate the reproducibility of abstracts generated by ChatGPT and Bard compared with human-written abstracts in the field of spinal surgery.

Hong Jin Kim, Jae Hyuk Yang, Dong-Gune Chang, Lawrence G Lenke, Javier Pizones, René Castelein, Kota Watanabe, Per D Trobisch, Gregory M Mundis Jr, Seung Woo Suh, Se-Il Suk

J Med Internet Res 2024;26:e52001

Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis

In this study, we aim to address these concerns by systematically evaluating the reliability of ChatGPT and Bard (subsequently rebranded Gemini; Google AI) [8] in the context of searching for and synthesizing peer-reviewed literature for systematic reviews. We will compare their performance to that of traditional methods used by researchers, investigate the extent of the “hallucination” phenomenon, and discuss potential ethical and practical considerations for using ChatGPT and Bard in academic publishing.

Mikaël Chelli, Jules Descamps, Vincent Lavoué, Christophe Trojani, Michel Azar, Marcel Deckert, Jean-Luc Raynier, Gilles Clowez, Pascal Boileau, Caroline Ruetsch-Chelli

J Med Internet Res 2024;26:e53164

The Role of Large Language Models in Transforming Emergency Medicine: Scoping Review

LLMs used in the reviewed papers (Table 1) included versions of GPT (OpenAI; eg, ChatGPT, GPT-4, and GPT-2), Pathways Language Model (Bard; Google AI), Embeddings from Language Models, XLNet, and BERT (Google; eg, BioBERT, ClinicalBERT, and decoding-enhanced BERT with disentangled attention).

Carl Preiksaitis, Nicholas Ashenburg, Gabrielle Bunney, Andrew Chu, Rana Kabeer, Fran Riley, Ryan Ribeira, Christian Rose

JMIR Med Inform 2024;12:e53787

Assessing the Alignment of Large Language Models With Human Values for Mental Health Integration: Cross-Sectional Study Using Schwartz’s Theory of Basic Values

As artificial intelligence (AI) advances rapidly, large language models (LLMs), such as Bard (Google), Claude 2 (Anthropic), and Generative Pretrained Transformer (GPT)-3.5 and GPT-4 (OpenAI), are opening up promising possibilities in mental health care, such as expediting research, guiding clinicians, and assisting patients [1]. However, integrating AI into mental health also raises the need to address complex professional ethical questions [2,3].

Dorit Hadar-Shoval, Kfir Asraf, Yonathan Mizrachi, Yuval Haber, Zohar Elyoseph

JMIR Ment Health 2024;11:e55988