Search Results (1 to 8 of 8 Results): 6 in JMIR Medical Education, 1 in JMIR Dermatology, 1 in JMIR Medical Informatics

Questions derived from the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist for observational studies, with answers.
JMIR Med Inform 2024;12:e59258

These system roles influence the direction of ChatGPT's answers and may affect its reliability. However, the impact of these system roles on ChatGPT's performance in the medical field has not yet been investigated. As a professional chatbot tool, ChatGPT uses sampling to predict the next token according to a probability distribution, ensuring that responses are varied and natural in real-world applications.
JMIR Med Educ 2024;10:e52784
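The sampling behavior described in this excerpt can be sketched as temperature-scaled softmax sampling over next-token logits. This is a minimal illustrative sketch, not ChatGPT's actual implementation; the vocabulary, logits, and temperature value are invented for the example.

```python
# Hypothetical sketch of temperature-based next-token sampling.
# All names and values here are illustrative assumptions.
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from logits softened by `temperature`.

    Lower temperatures concentrate probability on the highest-logit
    token (more deterministic); higher temperatures flatten the
    distribution (more varied output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1  # guard against floating-point round-off

# Illustrative usage with a toy 4-token vocabulary.
vocab = ["the", "a", "medical", "exam"]
logits = [2.0, 1.0, 0.5, 0.1]
print(vocab[sample_token(logits, temperature=0.7)])
```

Because each call draws from the distribution rather than always taking the argmax, repeated calls with the same logits can return different tokens, which is the "varied and natural" behavior the excerpt refers to.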

While the percentage of correct answers for questions based on radiological images was relatively high, this percentage was low for questions based on graphs, such as physiological tests. In the English translation and prompts, the percentage of correct answers for questions based on radiological images was 51.5%, while that for questions based on graphs was 29.2%.
Results for image-based questions were analyzed separately by image type.
JMIR Med Educ 2024;10:e57054

Capability of GPT-4V(ision) in the Japanese National Medical Licensing Examination: Evaluation Study
The questions and correct answers of the 117th Japanese National Medical Licensing Examination are publicly available for download on the official website of the Ministry of Health, Labour and Welfare [16]. All questions require selecting a specified number of choices, typically 1, from 5 options. Of the questions with images, 2 were officially excluded from scoring because they were either too difficult or inappropriate.
JMIR Med Educ 2024;10:e54393

Among answers that used explicit script information (n=578, 67.7%), 218 (37.7%) were “plausible, highly specific for the case,” 161 (27.9%) were “plausible, relevant for the case,” and 197 (34.1%) were “plausible, not case specific,” with a mere 2 (0.3%) answers being rather implausible and none very implausible.
JMIR Med Educ 2024;10:e53961

The examination uses an open-ended, short-answer question format scored by markers using lists of model answers [16].
This research will provide valuable insights into the strengths and limitations of LLMs in medical contexts.
JMIR Med Educ 2024;10:e49970

The difficulty level of each question was established based on the percentage of correct answers received by JAMEP. Questions with less than 41.0% correct answers were classified as hard, those with between 41.1% and 72.1% correct answers as normal, and those with more than 72.1% correct answers as easy. The exclusion criteria were questions with images that GPT-4 could not recognize (n=55), questions containing videos (n=22), or both (n=6). The final analysis included 137 questions.
JMIR Med Educ 2023;9:e52202
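The difficulty bands in this excerpt can be sketched as a simple threshold function. This is an illustrative sketch using the JAMEP correct-answer-rate cut points quoted above; the function name is invented, and the handling of a rate of exactly 41.0% (which the text leaves between the "hard" and "normal" bands) is an assumption.

```python
# Minimal sketch of the difficulty labels described in the excerpt.
# Cut points are from the text; boundary handling at exactly 41.0%
# is an assumption, since the text jumps from "<41.0%" to "41.1%".
def classify_difficulty(pct_correct):
    """Label a question by the percentage of examinees answering it correctly."""
    if pct_correct < 41.0:    # "less than 41.0% correct answers" -> hard
        return "hard"
    if pct_correct <= 72.1:   # "between 41.1% and 72.1%" -> normal
        return "normal"
    return "easy"             # "more than 72.1%" -> easy

# Illustrative usage on made-up correct-answer rates.
labels = {pct: classify_difficulty(pct) for pct in (30.0, 55.0, 90.0)}
```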

Across both diseases, 78% (50/64) of ChatGPT responses were correct but inadequate (score ≤2), with 45% (29/64) of answers being fully comprehensive (score 1). No responses were completely inaccurate (score 4). For AD and acne specifically, 88% (28/32) and 66% (21/32) of responses were correct but inadequate (score ≤2), and 53% (17/32) and 34% (11/32) were fully comprehensive (score 1), respectively. This broadly indicates acceptable performance of ChatGPT across both conditions.
JMIR Dermatol 2023;6:e50409
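The rounded percentages in the dermatology excerpt follow directly from the raw counts it reports; a quick arithmetic check (counts taken from the text, rounded to whole percents as the text does):

```python
# Verify that the rounded percentages in the excerpt match its raw counts.
def pct(n, d):
    """Percentage n/d rounded to the nearest whole percent."""
    return round(100 * n / d)

# (numerator, denominator) -> percentage as reported in the excerpt
reported = {
    (50, 64): 78,  # overall, score <= 2
    (29, 64): 45,  # overall, score 1
    (28, 32): 88,  # AD, score <= 2
    (21, 32): 66,  # acne, score <= 2
    (17, 32): 53,  # AD, score 1
    (11, 32): 34,  # acne, score 1
}
mismatches = {k: v for k, v in reported.items() if pct(*k) != v}
```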