Published on 22.07.2019 in Vol 5, No 2 (2019): Jul-Dec

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/13386.
The Impact of Medical Students’ Individual Teaching Format Choice on the Learning Outcome Related to Clinical Reasoning

Original Paper

1Department of Haematology and Medical Oncology, University Medical Centre Göttingen, Göttingen, Germany

2Department of Anaesthesiology, University Medical Centre Göttingen, Göttingen, Germany

3Department of Legal Medicine, University Medical Centre Hamburg-Eppendorf, Hamburg, Germany

4Division of Medical Education Research and Curriculum Development, Study Deanery of University Medical Centre Göttingen, Göttingen, Germany

Corresponding Author:

Nikolai Schuelper, MD, MME

Department of Haematology and Medical Oncology

University Medical Centre Göttingen

Robert-Koch-Straße 40

Göttingen, 37075

Germany

Phone: 49 551 3910820

Fax: 49 551 39927

Email: nikolai.schuelper@med.uni-goettingen.de


Background: Repeated formative assessments using key feature questions have been shown to enhance clinical reasoning. Key feature questions augmented by videos presenting clinical vignettes may be more effective than text-based questions, especially in a setting where medical students are free to choose the format they would like to work with. This study investigated learning outcomes related to clinical reasoning in students using video- or text-based key feature questions according to their individual preferences.

Objective: The aim of this study was to test the hypothesis that repeated exposure to video-based key feature questions enhances clinical reasoning to a greater extent than repeated exposure to text-based key feature questions if students are allowed to choose between those different formats on their own.

Methods: In this monocentric, prospective, nonrandomized trial, fourth-year medical students attended 12 computer-based case seminars during which they worked on case histories containing key feature questions. Cases were available in a text- and a video-based format. Students chose their preferred presentation format at the beginning of each case seminar. Student performance in key feature questions was assessed in formative entry, exit, and retention exams and was analyzed with regard to preceding exposure to video- or text-based case histories.

Results: Of 102 eligible students, 75 provided written consent and complete data at all study exams (response rate=73.5%). A majority of students (n=52) predominantly chose the text-based format. Compared with these, students preferring the video-based format achieved a nonsignificantly higher score in the exit exam (mean 76.2% [SD 12.6] vs 70.0% [SD 19.0]; P=.15) and a significantly higher score in the retention exam (mean 75.3% [SD 16.6] vs 63.4% [SD 20.3]; P=.02). The effect was independent of the video- or text-based presentation format, which was set as default in the respective exams.

Conclusions: Despite students’ overall preference for text-based case histories, the learning outcome with regard to clinical reasoning was higher in students with higher exposure to video-based items. Time-on-task is one conceivable explanation for these effects as working with video-based items was more time-consuming. The baseline performance levels of students do not account for the results as the preceding summative exam results were comparable across the 2 groups. Given that a substantial number of students chose a presentation format that was less effective, students might need to be briefed about the beneficial effects of using video-based case histories to be able to make informed choices about their study methods.

JMIR Med Educ 2019;5(2):e13386

doi:10.2196/13386

Introduction

Teaching Clinical Reasoning

One of the most challenging aims in undergraduate medical education is to teach students how to arrive at a correct diagnosis and initiate adequate therapeutic steps. Even for experienced physicians, clinical decision making is a critical aspect of their performance, and different theories trying to elucidate the underlying cognitive mechanisms have been put forward [1]. Clinical reasoning encompasses the cognitive processes involved in decision making in the clinical context, and case-based learning has proven both effective for teaching clinical reasoning and preferred by undergraduate medical students [2,3]. Among other assessment formats, key feature questions can be used to measure student performance in this particular area of expertise [4-6]. However, this type of assessment may serve not only a summative but also a formative purpose, taking advantage of the so-called direct testing effect [7]. Research published in the past 10 years supports the hypothesis that repeated testing enhances long-term retention of knowledge [8], skills [9], and—perhaps most importantly—the clinical application of knowledge [10]. We recently reported superior long-term retention of clinical reasoning performance in students who had repeatedly been exposed to formative key feature questions compared with students who had restudied the same content without being prompted to answer questions [11]. In that study, all study-related material was presented in written form. After 9 months, students scored significantly higher on intervention items trained with key feature questions compared with control items (mean 56.0% [SD 25.8] vs 48.8% [SD 24.7]; P<.001). In a further study comparing key feature cases with text-based case histories to those with video-based ones, these results were confirmed in a postintervention exam (mean 76.2% [SD 19.4] vs 72.4% [SD 19.1]; P=.03) but not in a retention exam 9 months later (mean 69.2% [SD 20.2] vs 66.4% [SD 20.3]; P=.11) [12].

Presenting Formats

Case histories can be presented in different formats, including text-based and video-based displays, or even in a simulated clinical setting using standardized patients. It might be hypothesized that greater authenticity of the learning material entails more favorable learning outcomes. In contrast, a prospective, randomized study with 133 students did not yield any significant differences between these 3 presenting formats with regard to the improvement of clinical reasoning performance [13]. Another study with 256 students showed a preference for video cases over paper cases, with students arguing that videos preserve the original language, avoid depersonalization of patients, and facilitate direct observation of clinical consultations [14]. Despite the reported preference for video-based case presentations in a study nested in a problem-based learning setting, the same study showed that the use of videos might be associated with a reduction in the depth of thinking, based on a blinded coder's analysis of 5224 transcribed student utterances [15]. Conversely, an analysis of student critical thinking skills following exposure to different case modalities suggested that video-based material was particularly effective in fostering these skills [16]. Thus, the available evidence on the effectiveness of video-based instructional material for the training of clinical reasoning is equivocal.

Learning Styles

One approach to understanding these conflicting data is the concept of learning styles, according to which characteristics of the way students learn predict the extent to which an individual student will benefit from specific teaching modalities [17]. Despite an ongoing debate on the usefulness of this approach [18], this concept still underlies a considerable number of medical education research projects. Some of these studies refer to a model that distinguishes between different learning strategies, that is, visual, auditory, read and write, and kinesthetic [19]. In one study, the individual learning styles of 62 applicants to general surgery were analyzed with respect to previous exam performance. Most applicants had a multimodal learning style, but aural and visual preferences were associated with significantly higher United States Medical Licensing Examination scores compared with read and write and kinesthetic preferences [20]. Owing to the lack of data supporting the idea that matching learning activities to individual learning styles does in fact lead to better learning outcomes, most intervention studies in the field of medical education did not assess learning styles, let alone account for them in their main analyses. At the same time, letting students choose their preferred learning modality (regardless of their learning style) may impact on the learning outcome, and this hypothesis has rarely been tested [13,21,22].

In summary, the available evidence supports the repetitive use of case-based key feature questions for teaching clinical reasoning. Furthermore, limited data indicate that medical students have individual preferences with regard to teaching modalities and that a higher degree of the authenticity of case presentations might foster the learning outcome in some students. However, it is unclear who will benefit most from using rich media and whether students are capable of identifying the method that works best for them. This study was designed to test the hypothesis that repeated exposure to video-based key feature questions enhances clinical reasoning to a greater extent than repeated exposure to text-based key feature questions if students are allowed to choose between those different formats on their own.


Methods

Study Design

This monocentric, prospective, nonrandomized intervention study investigated the impact of letting students choose their preferred learning format on the learning outcome with regard to clinical reasoning. The study consisted of a 3-month intervention phase followed by a 4-month nonintervention phase. During the intervention phase, students attended weekly 45-min computer-based seminars (electronic case seminars [ECSs]) during which they worked on predefined patient case histories that were aligned with the learning objectives addressed in concurrent curricular teaching sessions. In the first and final weeks of the intervention phase as well as in the retention exam, students took identical formative key feature examinations.

Figure 1. Timeline of study design and assessments. After 1 electronic case seminar (ECS) introducing text- and video-based case presentations (ECS 0), 8 weekly intervention ECSs with the free choice of learning format were conducted (ECS 1-8).

Students sat the unannounced retention exam following the 4-month nonintervention phase (see Figure 1).

All patient case histories were available in a text-based and a video-based format (see Multimedia Appendix 1). During the first ECS, 4 cases were presented, 2 of which were video-based whereas the other 2 were text-based. Thereafter, students were free to choose the presentation format they preferred at the beginning of each ECS. In the entry, exit, and retention exams, an equal number of items were presented in a text- and a video-based format. ECS attendance was mandatory for students enrolled in general medicine teaching modules of the fourth year.

Student Recruitment and Ethics Approval

Fourth-year medical students at Göttingen Medical School were informed about the study 4 weeks ahead via email and during the first lecture of term. Students enrolled in all modules in winter term 2015 were eligible for study participation. The study was approved by the local ethics committee (Ethik-Kommission der Universitätsmedizin Göttingen, application number 10/12/15), and all participants provided written consent.

Study Procedure

A total of 31 case histories were selected to be presented in the ECSs. All of these had been piloted and used in a previous research project [11]. Learning objectives and the content of cases were identical regardless of the video- or text-based presentation format. Patient case histories were broken up into 5 to 8 sections, with key feature questions at the end of each section. All items that were used in the entry, exit, and retention exams occurred in 2 different ECSs during the intervention phase; the patient case histories differed with regard to the particular story, but the key feature items were identical. During the intervention phase, each of the 9 ECSs consisted of 3 case histories with 5 key feature questions each. Thus, students answered a total of 135 original key feature questions addressing specific learning objectives during the 9 ECSs between the entry and exit exams. The entry, exit, and retention exams were made up of 4 case histories with a total of 28 items, 14 of which were text-based with the other 14 being presented as videos. Notably, for the 3 exams, the presenting format was set as default. As the learning objectives corresponding to those 28 intervention items were taught twice during the intervention phase, and students could choose between 2 presentation formats each time, there were 4 possible ways any one student could learn any of the 28 intervention items during the ECSs: text-text (sequence #1), text-video (sequence #2), video-text (sequence #3), and video-video (sequence #4), as sketched below.
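For readers implementing a similar design, the 4 sequences are simply the Cartesian product of the 2 presentation formats over the 2 exposures. The following Python sketch is purely illustrative; the labels are ours and were not part of the study software:

```python
from itertools import product

# Each intervention item was encountered twice during the ECSs, and each
# encounter was either text- or video-based: 2 x 2 = 4 possible sequences.
FORMATS = ("text", "video")
sequences = {f"#{i + 1}": seq for i, seq in enumerate(product(FORMATS, repeat=2))}
print(sequences)
# {'#1': ('text', 'text'), '#2': ('text', 'video'),
#  '#3': ('video', 'text'), '#4': ('video', 'video')}
```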

Data Analysis

The primary outcome of this study was the difference in percent scores in the exit and retention exams between students preferring text-based case presentations during the intervention phase and those preferring video-based case presentations. As there were a total of 8 ECSs with a free choice of format, the cutoff for allocation to the video-preference group was set at having chosen the video format at least 4 times (ie, ≥50% exposure to the video format). On this basis, the 2 groups of students were compared with each other by means of an independent t test. Data are presented as mean (SD) or percentages (n) as appropriate. The significance level was set at 5%.

Statistical analysis was performed using IBM SPSS Statistics, version 24.0 (IBM Corp) and GraphPad Prism, version 5.0 (GraphPad Software Inc).
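To make the allocation rule and the primary comparison concrete, here is a minimal sketch in Python rather than the SPSS workflow actually used; the per-student records, field names, and scores are hypothetical:

```python
# Illustrative only: group allocation by format choice and the independent
# t test used as the primary comparison. Not the authors' analysis code.
from scipy import stats

def allocate_group(format_choices):
    """Return 'video' if the video format was chosen in at least 4 of the
    8 free-choice ECSs (ie, >=50% exposure to video), else 'text'."""
    video_count = sum(1 for choice in format_choices if choice == "video")
    return "video" if video_count >= 4 else "text"

# Hypothetical records: one dict per participant (n=75 in the study)
students = [
    {"choices": ["text"] * 6 + ["video"] * 2, "retention_score": 61.0},
    {"choices": ["text"] * 5 + ["video"] * 3, "retention_score": 66.5},
    {"choices": ["video"] * 5 + ["text"] * 3, "retention_score": 78.5},
    {"choices": ["video"] * 4 + ["text"] * 4, "retention_score": 72.0},
]

text_scores = [s["retention_score"] for s in students
               if allocate_group(s["choices"]) == "text"]
video_scores = [s["retention_score"] for s in students
                if allocate_group(s["choices"]) == "video"]

# Independent t test at a 5% significance level
t, p = stats.ttest_ind(video_scores, text_scores)
print(f"t = {t:.2f}, P = {p:.3f}")
```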


Results

Student Recruitment and Characteristics

A total of 100 of the 102 students eligible for study inclusion provided written consent. Of these, 25 students missed at least 1 study-related formative exam, resulting in a total of 75 students with complete data for analysis (effective response rate=73.5%). According to their most frequent choice, 52 students were allocated to the text-preference group and 23 to the video-preference group. There were no statistically significant differences between the 2 groups regarding age at the entry exam, attendance at intervention ECSs, and percent scores achieved in exams during the previous term, calculated from the number of points scored by a particular student relative to the maximum number of points available for that same student in the preceding term (see Table 1).

Format Attendance

The proportion of students choosing either format was calculated for each ECS. For the text-based format, this proportion ranged from 41.9% (n=31) to 87.7% (n=57), and for the video-based format, it ranged from 12.3% (n=9) to 58.1% (n=43; see Figure 2). The number of students preferring text-based over video-based items increased during the intervention phase.

For all items, the predominant learning sequence was text-text. The least common learning sequence for all items was text-video (see Table 2 for detailed results).

Table 1. Characteristics of text- and video-preference groups at entry exam.

| Characteristic | Preference for text (n=52), mean (SD) | Preference for video (n=23), mean (SD) | P value |
|---|---|---|---|
| Age at entry exam (years) | 24.87 (3.40) | 24.04 (1.70) | .27 |
| Number of attended intervention electronic case seminars | 8.31 (0.64) | 8.43 (0.59) | .42 |
| Score achieved in exams of previous semester (%) | 82.40 (5.90) | 83.50 (7.50) | .56 |
Figure 2. Format attendance. Proportion of students choosing either presentation format during electronic case seminars. ECSs: electronic case seminars.
Table 2. Sequences of learning condition. Each item was learned in one of 4 sequences according to students’ choice of presenting format. For each sequence, the number of students (n [%]) and their mean percent score at the retention exam are shown. For study assessment at the exit and retention exams, the 28 items were assessed in the fixed format listed in the last column.

| Item | Text-text (#1), n (%) | Score, % | Text-video (#2), n (%) | Score, % | Video-text (#3), n (%) | Score, % | Video-video (#4), n (%) | Score, % | Item assessment format |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 39 (63) | 82 | 2 (3) | 100 | 5 (8) | 40 | 16 (26) | 69 | Text-based |
| 2 | 39 (63) | 74 | 2 (3) | 100 | 5 (8) | 80 | 16 (26) | 88 | Video-based |
| 3 | 39 (63) | 36 | 2 (3) | 50 | 5 (8) | 80 | 16 (26) | 63 | Text-based |
| 4 | 39 (63) | 46 | 2 (3) | 50 | 5 (8) | 40 | 16 (26) | 56 | Video-based |
| 5 | 39 (63) | 54 | 2 (3) | 50 | 5 (8) | 60 | 16 (26) | 50 | Video-based |
| 6 | 37 (69) | 76 | 2 (4) | 100 | 9 (17) | 89 | 6 (11) | 100 | Video-based |
| 7 | 37 (69) | 81 | 2 (4) | 50 | 9 (17) | 89 | 6 (11) | 100 | Text-based |
| 8 | 37 (69) | 86 | 2 (4) | 100 | 9 (17) | 89 | 6 (11) | 100 | Text-based |
| 9 | 31 (52) | 84 | 2 (3) | 100 | 10 (17) | 90 | 17 (28) | 100 | Text-based |
| 10 | 31 (52) | 55 | 2 (3) | 50 | 10 (17) | 70 | 17 (28) | 71 | Video-based |
| 11 | 41 (63) | 44 | 0 (0) | —a | 16 (25) | 50 | 8 (12) | 88 | Text-based |
| 12 | 41 (63) | 85 | 0 (0) | —a | 16 (25) | 75 | 8 (12) | 100 | Video-based |
| 13 | 31 (52) | 58 | 2 (3) | 50 | 10 (17) | 70 | 17 (28) | 53 | Video-based |
| 14 | 31 (52) | 48 | 2 (3) | 100 | 10 (17) | 30 | 17 (28) | 47 | Text-based |
| 15 | 25 (35) | 52 | 6 (8) | 83 | 19 (26) | 53 | 22 (31) | 68 | Text-based |
| 16 | 25 (35) | 32 | 6 (8) | 67 | 19 (26) | 16 | 22 (31) | 23 | Video-based |
| 17 | 24 (38) | 96 | 3 (5) | 100 | 21 (33) | 100 | 15 (24) | 100 | Video-based |
| 18 | 37 (64) | 65 | 4 (7) | 50 | 7 (12) | 86 | 10 (17) | 60 | Video-based |
| 19 | 37 (64) | 92 | 4 (7) | 100 | 7 (12) | 100 | 10 (17) | 90 | Text-based |
| 20 | 37 (64) | 54 | 4 (7) | 75 | 7 (12) | 57 | 10 (17) | 50 | Text-based |
| 21 | 39 (56) | 72 | 5 (7) | 60 | 7 (10) | 43 | 19 (27) | 63 | Text-based |
| 22 | 36 (58) | 75 | 2 (3) | 100 | 8 (13) | 63 | 16 (26) | 94 | Video-based |
| 23 | 37 (64) | 81 | 4 (7) | 100 | 7 (12) | 86 | 10 (17) | 90 | Text-based |
| 24 | 44 (59) | 91 | 0 (0) | —a | 0 (0) | —a | 30 (41) | 90 | Video-based |
| 25 | 44 (59) | 64 | 0 (0) | —a | 0 (0) | —a | 30 (41) | 67 | Text-based |
| 26 | 37 (69) | 78 | 2 (4) | 100 | 9 (17) | 100 | 6 (11) | 100 | Video-based |
| 27 | 37 (69) | 84 | 2 (4) | 100 | 9 (17) | 89 | 6 (11) | 100 | Text-based |
| 28 | 37 (69) | 27 | 2 (4) | 0 | 9 (17) | 67 | 6 (11) | 17 | Video-based |

aNot applicable as no student chose this sequence for this item.

Figure 3. Exam scores. Mean percent scores in the entry, exit, and retention exams for the text-preferring and video-preferring group.
Figure 4. Exam scores by presentation format in the formative exams. Mean percent scores in the entry, exit, and retention exams for the text-preference and video-preference groups. Data are presented as a function of exposure during the intervention phase (column texture) and item format in the formative exams (text vs video). T: text; V: video.

Learning Outcome

In the entry and exit exam, there was no significant difference in percent scores between students preferring video-based items and students preferring text-based items (entry exam: 31.1% [SD 12.3] vs 29.4% [SD 12.3]; P=.59; exit exam: 76.2% [SD 12.6] vs 70.0% [SD 19.0]; P=.15). In the retention exam, students who had preferred videos during the intervention phase scored significantly higher than students preferring text-based items (75.3% [SD 16.6] vs 63.4% [SD 20.3]; P=.02; see Figure 3).

Exam performance was further analyzed according to the way items were presented in the formative exams. As described above, 14 of the 28 items were displayed as videos whereas the other half were presented in written form. The main effect of preferring videos during the intervention phase persisted regardless of the presentation format in the formative exams (see Figure 4): mean percent scores in text-based items in the exit exam were 77.6% (SD 14.0; students preferring video) versus 72.4% (SD 20.2; students preferring text; P=.26). For video-based items, these figures were 74.8% (SD 12.7) versus 67.6% (SD 20.0; P=.11). Differences were significant in the retention test (text-based items: 77.0% [SD 18.8] vs 65.8% [SD 21.2]; P=.03; video-based items: 73.6% [SD 16.6] vs 61.0% [SD 21.2]; P=.01).


Discussion

Principal Findings

This study yielded 2 principal findings: First, the presentation format preferences of students changed in favor of the less time-consuming written format over the course of the intervention phase. Second, students preferring the video-based format outperformed students preferring text-based items in the retention exam, regardless of the item presentation format.

Student Preferences

Several studies have reported that students had a positive attitude toward videos for case presentations and that they preferred video-based over text-based learning [14,15,23,24]. Thus, the current finding of a shift toward text-based items and the fact that almost 70% of enrolled students had to be allocated to the text-preference group is somewhat surprising. However, this finding is in accordance with results from a recently published study reporting a preference for text-based learning material in 65% of undergraduate medical students [25]. A detailed analysis of the differences between the 2 formats seems warranted as they relate to various aspects of the student experience that may well impact on the learning outcome. The most obvious differences relate to time, learner engagement, the amount of context given, and the presence of virtual patients. With regard to time, the aforementioned study [25] concluded that one of the drawbacks of video use is that it slows down the pace of the seminar and does not allow students to review and critically appraise the presented information. Yet, students acknowledged that videos provide more detailed and contextual information than written material does. In fact, videos provide more complex information.

According to the cognitive load theory [26], medical students in one particular year of undergraduate education can still be regarded as a heterogeneous group of learners. Some may find it easier to deal with more complex material whereas learners lacking experience or exposure to clinical content might be overwhelmed by the wealth of audio-visual information contained in videos [27]. This might be the reason why some students appeared to prefer video-based case presentations at the beginning but switched to the text-based format in the course of the study. In addition, one recent study found that learner engagement was reduced in video-based training compared with other educational approaches [28], and video-based patient cases may even disrupt deep critical thinking [15]. Thus, the provision of more contextual information and a more realistic presence of virtual patients in the learning environment does not guarantee better learning outcomes. A qualitative approach may be warranted to explore learner experience when exposed to video- or text-based material. On the basis of the data collected in this study, we cannot comment on these aspects. Yet, findings in the field of learning in general [29,30] and specifically in medical education [31,32] strongly suggest that learner experience moderates learning outcome.

Another potential explanation for the shift in preferences observed in this study is that working with text-based case histories took less time than working with video-based case histories. In any case, the difference in time-on-task between the 2 preference groups might account for the net finding of superior retention exam performance in students preferring video-based case presentations.

Learning Outcome

The findings of this study confirm previous results regarding a positive effect of test-enhanced learning on clinical reasoning by using key feature questions for case-based learning [11]. Both study groups achieved a sustained performance gain compared with the entry exam.

The more favorable learning outcome observed in the video-preference group is in concordance with other studies [20]. Notably, this advantage was independent of the way items were presented in the formative exams, as students preferring video-based case presentations during ECSs also achieved higher scores on retention exam items that were assessed in written form. This is in line with the dual-coding theory, which posits that as images and words are processed in different parts of the brain, the use of visualization with sound enhances learning and recall [33]. On the basis of this notion, Kamin et al demonstrated the superiority of video-enhanced learning material for the acquisition of critical thinking [16].

The importance of context for learning outcome was demonstrated over 40 years ago [34], and it could be argued that increased authenticity of the learning environment might help students achieve a better learning outcome. In fact, in a randomized study with 288 medical students, there was no overall advantage for more authentic formats, but in a subanalysis, the authors showed that this result was driven by a strong benefit observed in the top tertile whereas all other students scored fewer points following exposure to the more authentic format [21]. This supports the conclusion that video-based case presentations may only be more effective than text-based presentations for a specific subset of students, who may or may not be aware of this.

Implications and Perspectives

This study adds to the literature in that it helps curriculum planners, medical teachers, and students make informed choices about the design of instructional material. There is a strong rationale for using video-based case presentations combined with key feature questions for teaching clinical reasoning, but it has to be considered that not all students benefit in the same way and at the same time. About one-third of medical students seem to benefit from video-based case presentations. This might be explained by students having an individual preference for audio-visual learning, although other mechanisms cannot be ruled out and should be addressed in future studies. Giving students the opportunity to choose the presentation format they prefer at each single seminar seems to be a reasonable and feasible approach to avoid disadvantages for anyone and to take advantage of the potential of a more authentic format. Furthermore, allowing students to learn from both text- and video-based material in the course of a curriculum would be in line with the described benefits of mixed presentation methods [35]. In the context of computer-based learning, implementing such a choice should not pose a major challenge, and it could help each student use an appropriate format. One important question is how students who did not benefit from the intervention in this study may be helped to capitalize on the merits of repeated testing. One earlier trial suggested that the effectiveness of the method can be enhanced by informing students about the effects of test-enhanced learning [8]. Apart from this, the role of assessments has to be reconsidered, especially in the light of recent studies regarding test-enhanced learning and the important role of assessments in shaping students’ learning strategies [36,37]. However, students may need to be briefed about the pros and cons of each format [8]. Ideally, future studies will identify short test instruments providing students with individual feedback regarding the presentation format that is likely to be most beneficial to them. In addition, further studies should explore why the effect of different learning modalities might only become apparent after some time and not directly following exposure to the teaching material.

Strengths and Limitations

To the best of our knowledge, this is the first prospective study using case-based key feature questions for teaching clinical reasoning while allowing students to select their individual learning material. The formative exit and retention exams contained both text- and video-based items to minimize potential effects of training to any one format. The items themselves referred to relevant problems of general medicine, and the response rate was favorable.

However, this is a monocentric study with a selected group of students, as only fourth-year medical students were allowed to participate. Thus, the findings of our study are not generalizable to other student groups and subject areas other than general medicine. For ethical reasons, it was not possible to establish a study design without a free choice of format, as this study was conducted within the official curriculum and there was no way of knowing whether students randomized to either group would be disadvantaged. Hence, self-selection has to be taken into account as a potential bias when interpreting the findings of this study. Furthermore, we did not collect any quantitative or qualitative data on student experience during ECSs. However, as differences between the 2 presentation formats in terms of time, engagement, context, and the presence of virtual patients may impact on learning outcome, these aspects should be addressed in future studies. Finally, it was technically not feasible to measure the time individual students spent on every single item. However, it can be assumed that reading was less time-consuming than watching the respective video.

Conclusions

Although about two-thirds of medical students preferred text-based case presentations, those students who self-selected to work on video-based presentations achieved better long-term retention of procedural knowledge as assessed with key feature questions. As clinical reasoning is one of the most complex yet important objectives in medical education, more research is needed to identify the most effective approach to teaching and learning the related skills.

Acknowledgments

The authors would like to thank all medical students who participated in this study and dedicated their time. The datasets used and/or analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

NS provided comments on all video scripts, participated in filming sessions, analyzed the data, and wrote the manuscript. SL coordinated and facilitated video productions, facilitated electronic case seminars, helped to collect data, and commented on various versions of the manuscript. SA helped to design the study, provided advice on data presentation, and commented on various versions of the manuscript. TR conceived of the study, developed its design, drafted key feature cases, and contributed to the manuscript. All authors read and approved the final manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Presentation format. Screenshot of a text- (A) and a video-based (B) format. For reasons of privacy, faces are blurred out.

PNG File, 487KB

  1. Croskerry P. A universal model of diagnostic reasoning. Acad Med 2009 Aug;84(8):1022-1028. [CrossRef] [Medline]
  2. Kassirer JP. Teaching clinical reasoning: case-based and coached. Acad Med 2010 Jul;85(7):1118-1124. [CrossRef] [Medline]
  3. Ibrahim NK, Banjar S, Al-Ghamdi A, Al-Darmasi M, Khoja A, Turkistani J, et al. Medical students preference of problem-based learning or traditional lectures in King Abdulaziz university, Jeddah, Saudi Arabia. Ann Saudi Med 2014;34(2):128-133 [FREE Full text] [CrossRef] [Medline]
  4. Hrynchak P, Takahashi SG, Nayer M. Key-feature questions for assessment of clinical reasoning: a literature review. Med Educ 2014 Sep;48(9):870-883. [CrossRef] [Medline]
  5. Page G, Bordage G. The medical council of Canada's key features project: a more valid written examination of clinical decision-making skills. Acad Med 1995 Feb;70(2):104-110. [Medline]
  6. Page G, Bordage G, Allen T. Developing key-feature problems and examinations to assess clinical decision-making skills. Acad Med 1995 Mar;70(3):194-201. [CrossRef] [Medline]
  7. Roediger HL, Karpicke JD. The power of testing memory: basic research and implications for educational practice. Perspect Psychol Sci 2006 Sep;1(3):181-210. [CrossRef] [Medline]
  8. Dobson JL, Linderholm T. Self-testing promotes superior retention of anatomy and physiology information. Adv Health Sci Educ Theory Pract 2015 Mar;20(1):149-161. [CrossRef] [Medline]
  9. Kromann CB, Bohnstedt C, Jensen ML, Ringsted C. The testing effect on skills learning might last 6 months. Adv Health Sci Educ Theory Pract 2010 Aug;15(3):395-401. [CrossRef] [Medline]
  10. Larsen DP, Butler AC, Lawson AL, Roediger 3rd HL. The importance of seeing the patient: test-enhanced learning with standardized patients and written tests improves clinical application of knowledge. Adv Health Sci Educ Theory Pract 2013 Aug;18(3):409-425. [CrossRef] [Medline]
  11. Raupach T, Andresen JC, Meyer K, Strobel L, Koziolek M, Jung W, et al. Test-enhanced learning of clinical reasoning: a crossover randomised trial. Med Educ 2016 Jul;50(7):711-720. [CrossRef] [Medline]
  12. Ludwig S, Schuelper N, Brown J, Anders S, Raupach T. How can we teach medical students to choose wisely? A randomised controlled cross-over study of video- versus text-based case scenarios. BMC Med 2018 Dec 6;16(1):107 [FREE Full text] [CrossRef] [Medline]
  13. La Rochelle JS, Durning SJ, Pangaro LN, Artino AR, van der Vleuten CP, Schuwirth L. Authenticity of instruction and student performance: a prospective randomised trial. Med Educ 2011 Aug;45(8):807-817. [CrossRef] [Medline]
  14. Chan LK, Patil NG, Chen JY, Lam JC, Lau CS, Ip MS. Advantages of video trigger in problem-based learning. Med Teach 2010;32(9):760-765. [CrossRef] [Medline]
  15. Basu RR, McMahon GT. Video-based cases disrupt deep critical thinking in problem-based learning. Med Educ 2012 Apr;46(4):426-435. [CrossRef] [Medline]
  16. Kamin C, O'Sullivan P, Deterding R, Younger M. A comparison of critical thinking in groups of third-year medical students in text, video, and virtual PBL case modalities. Acad Med 2003 Feb;78(2):204-211. [CrossRef] [Medline]
  17. Rassool GH, Rawaf S. The influence of learning styles preference of undergraduate nursing students on educational outcomes in substance use education. Nurse Educ Pract 2008 Sep;8(5):306-314. [CrossRef] [Medline]
  18. Newton PM, Miah M. Evidence-based higher education - is the learning styles 'myth' important? Front Psychol 2017;8:444 [FREE Full text] [CrossRef] [Medline]
  19. Fleming N, Mills C. Not another inventory, rather a catalyst for reflection. To Improve the Academy 1992;11(1):137-155. [CrossRef]
  20. Kim RH, Gilbert T. Learning style preferences of surgical residency applicants. J Surg Res 2015 Sep;198(1):61-65. [CrossRef] [Medline]
  21. LaRochelle JS, Durning SJ, Pangaro LN, Artino AR, van der Vleuten C, Schuwirth L. Impact of increased authenticity in instructional format on preclerkship students' performance: a two-year, prospective, randomized study. Acad Med 2012 Oct;87(10):1341-1347. [CrossRef] [Medline]
  22. Buch SV, Treschow FP, Svendsen JB, Worm BS. Video- or text-based e-learning when teaching clinical procedures? A randomized controlled trial. Adv Med Educ Pract 2014;5:257-262 [FREE Full text] [CrossRef] [Medline]
  23. Kim KJ, Kee C. Evaluation of an e-PBL model to promote individual reasoning. Med Teach 2013;35(3):e978-e983. [CrossRef] [Medline]
  24. de Leng B, Dolmans D, van de Wiel MW, Muijtjens A, van der Vleuten C. How video cases should be used as authentic stimuli in problem-based medical education. Med Educ 2007 Feb;41(2):181-188. [CrossRef] [Medline]
  25. Woodham LA, Ellaway RH, Round J, Vaughan S, Poulton T, Zary N. Medical student and tutor perceptions of video versus text in an interactive online virtual patient for problem-based learning: a pilot study. J Med Internet Res 2015 Jun 18;17(6):e151 [FREE Full text] [CrossRef] [Medline]
  26. Sweller J, Ayres P, Kalyuga S. Cognitive Load Theory. New York: Springer; 2011.
  27. Mayer R, Sims V. For whom is a picture worth a thousand words? Extensions of a dual-coding theory of multimedia learning. J Educ Psychol 1994;86(3):389-401. [CrossRef]
  28. Bukoski A, Uhlich R, Tucker J, Cooper C, Barnes S. Recognition and treatment of nerve agent casualties: evidence of reduced learner engagement during video-based training. Mil Med 2016;181(5 Suppl):169-176. [CrossRef] [Medline]
  29. Miller GA. The magical number seven plus or minus two: some limits on our capacity for processing information. Psychol Rev 1956 Mar;63(2):81-97. [CrossRef] [Medline]
  30. Ericsson KA, Charness N. Expert performance: its structure and acquisition. Am Psychol 1994;49(8):725-747. [CrossRef]
  31. Fraser K, Ma I, Teteris E, Baxter H, Wright B, McLaughlin K. Emotion, cognitive load and learning outcomes during simulation training. Med Educ 2012 Nov;46(11):1055-1062. [CrossRef] [Medline]
  32. Fraser KL, Ayres P, Sweller J. Cognitive load theory for the design of medical simulations. Simul Healthc 2015 Oct;10(5):295-307. [CrossRef] [Medline]
  33. Paivio A. Dual coding theory: retrospect and current status. Can J Psychol 1991;45(3):255-287. [CrossRef]
  34. Godden DR, Baddeley AD. Context-dependent memory in two natural environments: on land and underwater. Br J Psychol 1975;66(3):325-331. [CrossRef]
  35. Yeung AS, Jin P, Sweller J. Cognitive load and learner expertise: split-attention and redundancy effects in reading with explanatory notes. Contemp Educ Psychol 1998;23(1):1-21. [CrossRef] [Medline]
  36. Sennhenn-Kirchner S, Goerlich Y, Kirchner B, Notbohm M, Schiekirka S, Simmenroth A, et al. The effect of repeated testing vs repeated practice on skills learning in undergraduate dental education. Eur J Dent Educ 2018 Feb;22(1):e42-e47. [CrossRef] [Medline]
  37. Raupach T, Schuelper N. Reconsidering the role of assessments in undergraduate medical education. Med Educ 2018;52(5):464-466. [CrossRef] [Medline]


Abbreviations

ECS: electronic case seminar


Edited by G Eysenbach; submitted 13.01.19; peer-reviewed by M Davis, A Crisafio, PRG Cunningham; comments to author 28.02.19; revised version received 02.04.19; accepted 16.05.19; published 22.07.19

Copyright

©Nikolai Schuelper, Sascha Ludwig, Sven Anders, Tobias Raupach. Originally published in JMIR Medical Education (http://mededu.jmir.org), 22.07.2019.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on http://mededu.jmir.org/, as well as this copyright and license information must be included.