Technology, innovation and openness in medical education in the information age
JMIR Medical Education (JME) is a PubMed-indexed, peer-reviewed journal with a focus on technology, innovation, and openness in medical education. Another focus is on how to train health professionals in the use of digital tools. We publish original research, reviews, viewpoint papers, and policy papers on innovation and technology in medical education. As an open access journal, we have a special interest in open and free tools and digital learning objects for medical education, and we urge authors to make their tools and learning objects freely available (we may also publish them as a Multimedia Appendix). We also invite submissions of non-conventional articles (e.g., open medical education material and software resources that are not yet evaluated but are free for others to use or implement).
In our "Students' Corner", we invite students and trainees in the health professions to submit short essays and viewpoints on all aspects of medical education — in particular, suggestions on how to improve medical education and suggestions for new technologies, applications, and approaches (no article processing fees).
As a sister journal of the Journal of Medical Internet Research (JMIR), a leading eHealth journal (Impact Factor 2016: 5.175), JME has a broader scope that includes non-Internet approaches to improving education, training, and assessment for medical professionals and allied health professions.
Articles published in JME will be submitted to PubMed and PubMed Central. JME is open access.
Background: Colorectal cancer (CRC) is the third most common type of cancer and the second leading cause of cancer death in the United States. About one in three adults in the US is not getting CRC screening as recommended, and internal medicine residents are deficient in CRC screening knowledge. Objective: To improve internal medicine residents' CRC screening knowledge via the novel approach of developing and publishing a smartphone application. Methods: A questionnaire was designed based on the CRC screening guidelines of the ACS, ACG, and USPSTF and was emailed via a SurveyMonkey link to all residents of the internal medicine department. The responses were analyzed after 4 weeks. A smartphone application was then designed and published on the Play Store and the App Store for Android and iPhone users, respectively. The survey was repeated, and the responses were compared with the previous ones. The Pearson chi-square test and the Fisher exact test were applied to assess statistical significance. Results: Fifty residents completed the first survey, and 41 completed the second survey after publication of the application. Areas of CRC screening that showed statistically significant improvement (P value < 0.05) included the age at which to start CRC screening in African Americans, ordering preventive tests first, identification of CRC screening tests, identification of preventive and detection methods, following positive tests with colonoscopy, follow-up after colonoscopy findings, and CRC surveillance in specific diseases. Conclusions: In this modern era of smartphones and gadgets, developing a smartphone-based application or educational tool is a novel idea that can help improve residents' knowledge of colorectal cancer screening.
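The pre/post comparison described above can be sketched with a 2x2 chi-square test. The function below is a standard Yates-corrected chi-square for a 2x2 table using only the Python standard library; the vote counts are invented for illustration and are not the study's data.

```python
import math

# Hedged sketch (stdlib only): Yates-corrected chi-square test on a 2x2 table,
# comparing correct answers on one hypothetical survey item before and after
# the app intervention. All counts are illustrative, not from the study.
def chi_square_2x2(a, b, c, d):
    """Return (chi2, p) for the table [[a, b], [c, d]] with Yates' correction."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Yates continuity correction: subtract n/2 from |ad - bc|.
    chi2 = n * (abs(a * d - b * c) - n / 2) ** 2 / (row1 * row2 * col1 * col2)
    # For 1 degree of freedom, chi2 equals Z^2, so p = erfc(sqrt(chi2 / 2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical item: pre-survey 20/50 correct, post-survey 31/41 correct.
chi2, p = chi_square_2x2(20, 30, 31, 10)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

With these illustrative counts, p falls well below 0.05, which is the threshold the abstract uses for significance. In practice the Fisher exact test (as the authors also used) is preferred when any expected cell count is small.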
Background: The importance of assessment in the educational process is well emphasized in medical education. The system of continuous assessment (CA) used in the College of Medicine, KKU can be described as frequent summative assessments in each course, since there was no regular feedback. The CA adopted carries 50% of the total marks, so students' achievement in CA is critical to whether they pass or fail any course. Excessive use of summative assessment has been identified as problematic in some research, but at the same time a single terminal summative assessment is not recommended. This study examined the relation of gender, feedback, and students' perception of learning with performance in CA. Objective: To obtain the views of medical students about their performance in continuous assessment and the factors affecting it. Methods: This was a cross-sectional study with a correlational design. The target population was the 4th-, 5th-, and 6th-year students of the College of Medicine, KKU; non-probability convenience sampling was used, aiming at 25%-30% of the total. A structured self-administered questionnaire was developed based on four constructs: performance in CA (3 items), feedback (6 items), students' perception of learning (12 items), and gender. A 5-point Likert scale was used, ranging from strongly agree to strongly disagree. The questionnaire was validated before use. Pearson's correlation coefficient (r) was computed using SPSS; a P value of <0.05 was considered significant. Results: The total number of respondents was 128, of whom 58% were male and 42% female. The computed r for perception of learning with performance in CA was .741, and for feedback with performance in CA it was .766, clearly indicating a significant positive correlation. Gender had no significant correlation with performance in CA.
Although there is solid evidence of a positive effect of CA on students' academic performance and motivation, this effect seems to depend on how the assessment system is used. One experimental study found that CA had a positive effect on students' academic performance, learning, and satisfaction compared with summative assessment. On the other hand, when continuous assessment was done in the form of frequent summative assessment, the positive effect was lost and, in fact, a negative effect was evident. Conclusions: The respondents viewed their perception of learning and feedback as strongly and positively correlated with their performance in CA, while gender had no significant correlation.
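The correlation the abstract reports (r computed in SPSS between Likert-based construct scores) follows the standard Pearson formula, which can be sketched without any statistics package. The paired scores below are invented for illustration; the function itself is the textbook definition.

```python
import math

# Hedged sketch: Pearson's r between two sets of Likert-scale composite
# scores, mirroring the abstract's correlation of perceived learning with
# CA performance. The paired scores are invented, not study data.
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

perception  = [4.2, 3.8, 4.5, 2.9, 3.5, 4.8, 3.1, 4.0]  # mean Likert scores
performance = [4.0, 3.5, 4.6, 3.0, 3.2, 4.7, 3.4, 3.9]  # self-rated CA marks

r = pearson_r(perception, performance)
print(f"r = {r:.3f}")
```

With these invented scores r comes out strongly positive, illustrating the kind of relationship (r = .741 and .766) the study reports; the sign and magnitude of r, not the raw scores, carry the interpretation.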
Background: The progressive use of e-learning in postgraduate medical education calls for proper quality indicators. Many evaluation tools currently exist; however, they are used inconsistently and often lack an empirical foundation. Objective: We aimed to identify an empirically founded set of quality indicators to set the bar for “good enough” e-learning. Methods: We performed a Delphi procedure with a group of 13 international education experts and 10 experienced users of e-learning. The questionnaire started with 57 items, the result of a previous literature review and a focus group study performed with experts and users. Items were judged by Rate of Agreement (RoA); an RoA of less than two-thirds resulted in an item's rejection. Results: In the first round, 37 of the 57 items were accepted as important, there was no consensus on 20, and 15 new items were added by the participants. In the second round, we added the comments of the first round to the items on which there was no consensus, together with the 15 new items. After this round, a total of 72 items had been questioned; of these, 37 items were accepted and 35 were rejected for lack of consensus. Conclusions: This study provides a list of 37 items that can form the basis of a tool to evaluate postgraduate medical e-learning. This is the first time that quality indicators for postgraduate medical e-learning have been defined and validated. The next step is to create and validate an e-learning evaluation tool from these items.
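The Delphi filtering step above reduces to a simple rule: compute each item's Rate of Agreement and keep items that clear the two-thirds threshold. The sketch below assumes RoA is agree votes over agree-plus-disagree votes (neutral votes excluded), which is one common convention; the item names and vote counts are invented.

```python
# Hedged sketch: Delphi-style item filtering by Rate of Agreement (RoA).
# Assumption: RoA = agree / (agree + disagree), neutral votes excluded.
# Item names and vote counts are invented for illustration.
def rate_of_agreement(agree, disagree):
    """Fraction of non-neutral panellists who agreed the item is important."""
    return agree / (agree + disagree)

votes = {
    "clear learning goals": (20, 2),   # (agree, disagree)
    "works on mobile":      (12, 9),
    "offers a certificate": (7, 14),
}

THRESHOLD = 2 / 3  # items below this RoA are rejected, per the abstract
accepted = [item for item, (a, d) in votes.items()
            if rate_of_agreement(a, d) >= THRESHOLD]
print(accepted)
```

Under these invented counts only "clear learning goals" survives (RoA ≈ 0.91); the other two fall below two-thirds and would be rejected or, in a multi-round Delphi, re-questioned with the panel's comments attached.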