JMIR Medical Education
Technology, innovation and openness in medical education in the information age
JMIR Medical Education (JME) is a PubMed-indexed, peer-reviewed journal focused on technology, innovation, and openness in medical education. Another focus is how to train health professionals in the use of digital tools. We publish original research, reviews, viewpoint papers, and policy papers on innovation and technology in medical education. As an open access journal, we have a special interest in open and free tools and digital learning objects for medical education, and we urge authors to make their tools and learning objects freely available (we may also publish them as a Multimedia Appendix). We also invite submissions of non-conventional articles (e.g., open medical education material and software resources that have not yet been evaluated but are free for others to use or implement).
In our "Students' Corner", we invite students and trainees in the health professions to submit short essays and viewpoints on all aspects of medical education, in particular suggestions on how to improve medical education and suggestions for new technologies, applications, and approaches (no article processing fees).
JME is a sister journal of the Journal of Medical Internet Research (JMIR), a leading eHealth journal (Impact Factor 2016: 5.175). The scope of JME is broader, however, and includes non-Internet approaches to improving education, training, and assessment for medical professionals and allied health professions.
Articles published in JME are submitted to PubMed and PubMed Central. JME is open access.
Latest Submissions Open for Peer-Review
Date Submitted: Nov 12, 2017
Open Peer Review Period: Nov 13, 2017 - Jan 8, 2018
Background: The importance of assessment in the educational process is well recognized in medical education. The system of continuous assessment (CA) used in the College of Medicine, King Khalid University (KKU), can be described as frequent summative assessments in each course, since there was no regular feedback. The CA adopted carries 50% of the total marks, so students' achievement in CA is critical to whether they pass or fail any course. Some research has identified excessive use of summative assessment as problematic; at the same time, a single terminal summative assessment is not recommended. This study examined the relation of gender, feedback, and students' perception of learning with performance in CA. Objective: To obtain the views of medical students about their performance in continuous assessment and the factors affecting it. Methods: This was a cross-sectional study with a correlational design. The target population was the 4th-, 5th-, and 6th-year students of the College of Medicine, KKU; non-probability convenience sampling was used, aiming at 25%-30% of the total. A structured, self-administered questionnaire was developed, based on four constructs: performance in CA (3 items), feedback (6 items), students' perception of learning (12 items), and gender. A 5-point Likert scale ranging from strongly agree to strongly disagree was used for the statements in the instrument, and the questionnaire was validated before use. Pearson's correlation coefficient (r) was computed using SPSS; a P value of <.05 was considered significant. Results: The total number of respondents was 128, of whom 58% were male and 42% female. The computed r for perception of learning with performance in CA was .741, and for feedback with performance in CA it was .766, indicating significant positive correlations. Gender had no significant correlation with performance in CA.
Although substantial evidence exists for the positive effect of CA on students' academic performance and motivation, this effect seems to depend on how the assessment system is used. One experimental study found that CA had a positive effect on students' academic performance, learning, and satisfaction compared with summative assessment. On the other hand, when continuous assessment took the form of frequent summative assessment, the positive effect was lost and in fact a negative effect was evident. Conclusions: The respondents viewed their perception of learning and feedback as strongly and positively correlated with their performance in CA, while gender had no significant correlation.
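The Pearson correlations reported above can be computed directly from per-respondent construct scores. The sketch below is illustrative only (not the study's code); the data are synthetic and the construct names are taken from the abstract.

```python
# Illustrative sketch: Pearson's product-moment correlation between mean
# Likert-scale construct scores. Synthetic data; not the authors' dataset.
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-respondent mean scores (1 = strongly disagree .. 5 = strongly agree)
feedback       = [4.2, 3.8, 2.5, 4.6, 3.1, 2.9, 4.0, 3.5]
performance_ca = [4.0, 3.9, 2.8, 4.5, 3.0, 3.2, 3.8, 3.6]

print(round(pearson_r(feedback, performance_ca), 3))
```

In practice the study used SPSS for this computation; the function above is only a transparent restatement of the formula.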
Date Submitted: Nov 8, 2017
Open Peer Review Period: Nov 8, 2017 - Jan 3, 2018
Background: The progressive use of e-learning in postgraduate medical education calls for proper quality indicators. Currently, many evaluation tools exist; however, they are used inconsistently and their empirical foundation is often lacking. Objective: We aimed to identify an empirically founded set of quality indicators to set the bar for "good enough" e-learning. Methods: We performed a Delphi procedure with a group of 13 international education experts and 10 experienced users of e-learning. The questionnaire started with 57 items, which resulted from a previous literature review and a focus group study with experts and users. An item with a Rate of Agreement (RoA) of less than two-thirds was rejected. Results: In the first round, 37 of the 57 items were accepted as important, there was no consensus on 20, and 15 new items were added by the participants. In the second round, we added the comments of the first round to the items on which there was no consensus, and added the 15 new items. After this round, a total of 72 items had been questioned; of these, 37 items were accepted and 35 were rejected due to lack of consensus. Conclusions: This study provides a list of 37 items which can form the basis of an evaluation tool for postgraduate medical e-learning. This is the first time that quality indicators for postgraduate medical e-learning have been defined and validated. The next step is to create and validate an e-learning evaluation tool from these items.
Date Submitted: Oct 28, 2017
Open Peer Review Period: Oct 31, 2017 - Dec 26, 2017
Background: Multiple choice questions (MCQs) represent one of the commonest methods of assessment in medical education. They are believed to be reliable and efficient, and their quality depends on good item construction. Item analysis is used to assess their quality by computing the difficulty index, discrimination index, distractor efficiency, and test reliability. Objective: The aim of this study was to evaluate the quality of MCQs used in the College of Medicine, King Khalid University, Saudi Arabia. Methods: A cross-sectional study design was used. Item analysis data from 21 MCQ exams were collected. Values for the difficulty index, discrimination index, distractor efficiency, and reliability coefficient were entered in MS Excel 2010, and descriptive statistical parameters were computed. Results: Twenty-one tests were analyzed. Overall, 7% of the items among all the tests were difficult, 35% were easy, and 58% were acceptable. The mean difficulty of all the tests was in the acceptable range of 0.3-0.85. Items with an acceptable discrimination index constituted 39%-98% across tests. Negatively discriminating items were identified in all tests except one. All distractors were functioning in 5%-48% of items; the mean number of functioning distractors ranged from 0.77 to 2.25. The KR-20 scores lay between 0.47 and 0.97. Conclusions: Overall, the quality of the items and tests was found to be acceptable. Some items were identified as problematic and need to be revised. The quality of a few tests of specific courses was questionable; these tests need to be revised and steps taken to improve the situation.
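The item-analysis statistics named in this abstract are standard formulas from classical test theory. The sketch below illustrates them; it is not the authors' code, and the example numbers are hypothetical.

```python
# Illustrative sketch of classical item-analysis statistics (not the study's
# code): difficulty index, discrimination index, and KR-20 reliability.
def difficulty_index(correct, total):
    """Proportion of examinees answering the item correctly (p)."""
    return correct / total

def discrimination_index(upper_correct, lower_correct, group_size):
    """(Upper-group correct minus lower-group correct) / group size."""
    return (upper_correct - lower_correct) / group_size

def kr20(item_p, total_score_variance):
    """Kuder-Richardson formula 20 reliability for dichotomous items."""
    k = len(item_p)
    sum_pq = sum(p * (1 - p) for p in item_p)
    return (k / (k - 1)) * (1 - sum_pq / total_score_variance)

# Hypothetical item: 60 of 100 examinees answered correctly
print(difficulty_index(60, 100))         # 0.6, within the acceptable 0.3-0.85 range
# Hypothetical upper/lower groups of 27 examinees each
print(discrimination_index(24, 10, 27))  # positive, so the item discriminates
```

A negative discrimination index (lower group outperforming the upper group) is what the abstract flags as "negatively discriminating items".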
Date Submitted: Oct 16, 2017
Open Peer Review Period: Oct 18, 2017 - Dec 13, 2017
Background: Wiki platforms have the potential to improve student learning by improving engagement with course material. A student-created wiki was established to serve as a repository of study tools for students in a medical school curriculum. There is a scarcity of information describing student-led creation of wikis in medical education. Objective: To describe the creation of a student-centered wiki, characterize website traffic, and evaluate student usage via a short, anonymous online survey. Methods: Website analytics were used to track visitation statistics for the wiki, and a survey was distributed to assess ease of use, interest in contributing to the wiki, and suggestions for improvement. Results: Site traffic data indicated high usage, averaging 316 pageviews per day from July 2011 to March 2013, with 74,317 total user sessions. The average session duration was 2 min 18 s. Comparing Fall 2011 to Fall 2012 revealed a large increase in returning visitors (65.7%) and in sessions via mobile devices (87.7%). The survey received 164 responses, 88% of whom were aware of the wiki at the time of the survey. On average, respondents felt that the wiki was more useful in the preclinical years (2.73 ± 1.25) than in the clinical years (1.88 ± 1.12; P < .001). Perceived usefulness correlated with the percentage of studying for which the respondent used electronic resources (Spearman's ρ = 0.414, P < .001). Conclusions: Overall, the wiki was a highly utilized, though informal, part of the curriculum, with much room for improvement and future exploration.
Date Submitted: Oct 3, 2017
Open Peer Review Period: Oct 4, 2017 - Nov 29, 2017
Background: The adoption of the flipped classroom in undergraduate medical education (UME) calls on students to learn from various self-paced tools, including online lectures, before attending in-class sessions. Hence, the design of online lectures merits special attention, given that applying multimedia design principles has been shown to enhance learning outcomes. Objective: To understand how online lectures have been integrated into medical school curricula, and whether the published literature employs well-accepted principles of multimedia design. Methods: This scoping review followed the methodology outlined by Arksey and O'Malley (2005). MEDLINE, PsycINFO, Education Source, Francis, and ProQuest were searched to find articles from 2006 to 2016 related to online lecture use in UME. Results: Forty-five articles met the inclusion criteria. Online lectures are used in preclinical and clinical years, covering basic sciences, clinical medicine, and clinical skills. The use of multimedia design principles is seldom reported. Almost all studies describe high student satisfaction and improvement on knowledge tests following online lecture use. Conclusions: Integration of online lectures into UME is well received by students and appears to improve learning outcomes. Future studies should apply established multimedia design principles to the development of online lectures to maximize their educational potential.