Background: The effectiveness of peer learning in clinical skill development is well recognized and researched, given its many benefits, such as enhanced learning, alleviation of the burden on faculty, and early development of teaching skills in future doctors. However, little is known about its effectiveness as an assessment tool and the extent to which peer assessment can be relied upon in the absence of faculty support.
Objective: This study was conducted to assess medical students’ perception of peer learning, grounded in self-regulated learning, as an assessment tool, and to compare peer evaluation with faculty evaluation of clinical skill performance.
Methods: A cohort of 36 third-year medical students was exposed to same-level peer learning in clinical skills education for 3 months. A convergent mixed methods approach was adopted to collect data from 3 sources, namely, students’ perception of peer learning, performance scores, and reflective observational analysis. A 5-point Likert-type scale was used to assess students’ (n=28) perception of the value of peer learning. The students were asked to assess their peers using a preset checklist on clinical skill performance, and the scores were compared with faculty assessment scores. Reflective observational data were collected by observing video recordings of some of the peer learning sessions. The findings from all 3 sources were integrated using joint display analysis.
Results: Of the 28 students, 25 completed the survey, and 20 perceived peer learning as valuable in clinical skills education. The mean score of peer assessment was higher than that of faculty assessment. There was a significant difference in student performance between the supervised teaching and peer learning groups (P=.003). Most students focused on mastery of the skill, with little attention to the quality of the technique. Further, students were unable to appreciate the relevance of the potential clinical findings of the physical examination.
Conclusions: Peer learning in clinical skills education, based on self-regulated learning, empowers students to develop a more responsible approach toward their education. However, peer assessment is insufficient to evaluate clinical skill performance in the absence of faculty support. Therefore, we recommend that peer learning activities be preceded by supervised faculty-taught sessions.
Peer learning is defined as “people from similar social groupings, who are not professional teachers, helping each other learn, and by so doing, learning themselves”. Peer learning can be categorized into (1) same-level peer learning, where students at equal academic levels discuss and study the materials together, and (2) cross-level peer learning, where students’ academic levels diverge [ ]. Peer learning is rapidly gaining acceptance, and there is supporting evidence for peer learning in clinical skill development worldwide. An objective structured clinical examination is a complex competency assessment that assesses the cognitive knowledge as well as the psychomotor skills of clinicians. The clinician needs to recall all the steps in the correct and most efficient order and thereafter skillfully perform each step of the examination. Finally, the clinician needs to cognitively interpret the findings of each step, both independently and together, to reach a better understanding of the patient’s condition. Peer learning has various benefits in clinical skill settings, including enhanced learning, cost-effectiveness [ ], and alleviation of the burden on the teaching faculty [ ]; some have even proposed that it might offer a solution to the global increase in medical student numbers in the face of faculty shortages [ ]. Peer learning fosters self-regulated learning. A recent study showed that students’ ability to learn with peers has a significant positive impact on their academic achievements and significantly influences their self-regulated learning strategies [ ]. That study also highlights the importance of facilitating the development of students’ self-regulated learning and peer learning competencies in blended learning courses.
Self-regulated learning is defined as the degree to which students are metacognitively, motivationally, and behaviorally active participants in their own learning processes. There are many validated theoretical models that conceptualize self-regulated learning. One example is the “dual processing self-regulatory model” formulated by Boekaerts and colleagues [ ], which describes the various purposes of self-regulated learning, namely, (1) expanding one’s knowledge and skills, (2) protecting one’s commitment to the learning activity, and (3) preventing threat and harm to oneself. Another example is the “triadic social cognitive model” described by Zimmerman [ ], which introduces the interplay between the environment, behavior, and person. Zimmerman also conceptualized the cyclical phases of self-regulated learning, which start with forethought, followed by performance, and finally, self-reflection. By fostering self-regulated learning, peer learning provides students with a sense of ownership [ ]. This offers them an opportunity to develop the skills and professional attributes needed for teaching, assessment, and feedback. These skills and attributes are essential to nurturing a lifelong culture of learning and teaching that is vital to their future roles as clinicians, especially if they decide to work in academic contexts [ , ].
Although peer learning in formative settings has been widely explored in the literature, there has been less focus on peer assessment. Despite the benefits that peer assessment offers in terms of the development of self-regulation and self-monitoring in lifelong learning, it is still unclear to what extent it can be relied upon in the absence of faculty support. Accordingly, this study investigates, from the self-regulated learning perspective, the experience of undergraduate medical students concerning the application of peer learning in acquiring clinical skills. The research questions in this study are as follows:
- How do students’ assessments of their peers compare with those of the faculty?
- How does the performance of the students receiving supervised learning compare to that of the students receiving peer learning (as assessed by the faculty in both cases)?
- How do students perceive peer learning?
- How do faculty perceive peer learning, and what constitutes outstanding observations, from their perspective, in terms of students’ individual-level attitudes and behaviors and their interactions with one another?
- What were the highlights and limitations of the intervention under investigation, and how can other health profession educators leverage the lessons learned to effectively integrate peer learning?
Context of This Study
The Foundations of Clinical Medicine is a 2-credit course offered to undergraduate medical students at the Mohammed Bin Rashid University of Medicine and Health Sciences (MBRU). It runs horizontally across the first 3 years of the Bachelor of Medicine, Bachelor of Surgery degree (MBBS), complementary to the basic sciences courses. This course introduces students to the history taking, physical examination, and communication skills necessary to conduct a successful patient investigation, with simulation as the mainstay of learning and teaching. This study focuses on a specific intervention implemented in the third course of the respective horizontally integrated module. The third-year MBBS students attend a class for the corresponding course every Thursday for 15 weeks, covering the first semester of the respective academic year. The cohort is usually divided into 2 groups. One group would receive supervised teaching in the morning while the other would undertake a self-study session with the option to consult a selection of relevant resources available on their learning management system. The 2 groups would then switch arrangements in the afternoon, with each session lasting 2 hours.
The MBRU institutional review board approved this study (Reference# MBRU-IRB- 2019-017). Participation in this study was voluntary with written consent in accordance with the general regulation of the College of Medicine-Human Research Ethics Committee. The survey utilized to capture the perception of the participants was anonymous.
The Foundations of Clinical Medicine course delivery was modified for this study, with some of the self-study substituted by peer learning sessions. The rationale for this modification was based on feedback received from previous cohorts that the self-study sessions were not of much benefit and that most students would rather dedicate the time toward more practice of the clinical skills. Accordingly, the core of the learning and teaching in the respective course was modified with the objective of enabling and empowering students to leverage peer learning for practicing and, in turn, improving their clinical skills. These modifications were in alignment with the 3 phases of the cyclic model of self-regulated learning proposed by Zimmerman, and therefore, these peer learning sessions were designed to foster self-regulated learning. The table below compares the old and postintervention arrangements of the course delivery for a given group of students.
| Timing | Old arrangement | New arrangement |
| --- | --- | --- |
| 8 AM-10 AM | Supervised teaching | Peer learning |
| 11 AM-1 PM | Break | Break |
| 2 PM-4 PM | Self-study | Supervised teaching |
All students were initially exposed to a video demonstration of the physical examination in the form of flipped learning material. The cohort was then randomly assigned into 2 groups. The first group initially underwent peer learning as the primary modality for 1 physical examination session, which was recorded using cameras already installed in the simulation center where the intervention was conducted; this group then received traditional supervised teaching. The second group initially received traditional supervised teaching and thereafter underwent peer learning as the secondary modality.
A convergent mixed methods approach [- ] to research was adopted, with triangulation of quantitative and qualitative data from 3 sources: performance scores, students’ perception of peer learning, and reflective observational analysis ( ). The integration was conducted through joint display analysis [ ].
Data Collection and Analyses
Performance Scores (Quantitative)
A preset checklist was used where students provided quantitative scores to assess the performance of their peers (and ). The checklist is composed of 2 sections: (1) patient centeredness, to assess soft skills including confidentiality and communication, and (2) technique performance, consisting of a list of psychomotor steps to be performed by the students in the sequence outlined, including systematic reporting of findings using appropriate medical terminology. The students were asked to assess their peers’ performance against a dichotomous variable (done or not done). Two researchers, SAZ and MN (referred to as Faculty 1 and Faculty 2, respectively), used the same checklist to evaluate the students’ performance both during supervised learning and when the students were undergoing peer learning as the primary modality (for the latter, each researcher independently observed the abovementioned recordings at their own pace over 2 weeks). One of the researchers is an Internal Medicine physician and a faculty member at MBRU, and she coordinated and led the learning and teaching of the course under investigation. The other researcher is an Emergency Department physician who specializes in the continuous learning and development of health care professionals. The scores were collated and analyzed using SPSS Statistics version 25 (IBM Corp) to address the abovementioned research questions. The Kruskal-Wallis test was used to compare faculty and peer assessment scores, and a two-tailed t test was performed to compare the traditional supervised teaching method with the peer learning method. A P value less than .05 was used as the level of significance in both tests.
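For illustration, the 2 comparisons described above can be reproduced outside SPSS. The sketch below uses Python with SciPy; the score arrays are synthetic placeholders (not study data), so the printed statistics are illustrative only.

```python
# Illustrative sketch of the 2 tests described above (the study itself used SPSS).
# All score arrays below are synthetic placeholders, NOT the study's data.
from scipy import stats

# Hypothetical checklist scores out of 20, one value per assessed student
peer_scores = [18, 19, 17, 20, 18, 16, 19]      # peer assessors
faculty1_scores = [13, 12, 14, 11, 13, 12, 14]  # Faculty 1
faculty2_scores = [12, 11, 13, 10, 14, 12, 11]  # Faculty 2

# Kruskal-Wallis test: do the raters' score distributions differ?
h_stat, p_kw = stats.kruskal(peer_scores, faculty1_scores, faculty2_scores)

# Hypothetical scores by primary teaching modality
supervised = [17, 18, 16, 19, 17, 15, 18]   # supervised teaching group
peer_taught = [14, 15, 13, 16, 14, 12, 15]  # peer learning group

# Two-tailed independent-samples t test: supervised vs peer learning
t_stat, p_t = stats.ttest_ind(supervised, peer_taught)

print(f"Kruskal-Wallis: H={h_stat:.2f}, P={p_kw:.4f}")
print(f"t test: t={t_stat:.2f}, P={p_t:.4f}")
```

With clearly separated synthetic groups such as these, both tests return P values below .05, mirroring the direction of the study’s reported findings.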
Students’ Perception of Peer Learning (Quantitative)
A survey based on a 5-point Likert-type scale (1: strongly disagree, 2: disagree, 3: not sure, 4: agree, and 5: strongly agree) and composed of 8 components () was used to anonymously assess students’ perception of the value of peer learning in clinical skills education. Quantitative analyses of the data collected using the respective questionnaires were performed using SPSS Statistics version 25. Cronbach alpha was used to test the reliability of the questionnaire, and factor analysis was used to test its validity. The agreement score was assessed by cross bonding calculation, and the interitem correlation with the percentage of agreement was calculated. The Mann-Whitney U test was used to compare the mean scores between the 2 groups. P values less than .05 were considered significant.
| Questions | Likert scale score |
| --- | --- |
| Question 1: The objectives were covered in the peer learning sessions | □ Strongly disagree □ Disagree □ Not sure □ Agree □ Strongly agree |
| Question 2: Peer learning has improved my clinical skills | □ Strongly disagree □ Disagree □ Not sure □ Agree □ Strongly agree |
| Question 3: Peer learning sessions created a safe learning environment | □ Strongly disagree □ Disagree □ Not sure □ Agree □ Strongly agree |
| Question 4: I feel peer learning is useful for my Objective Structured Clinical Examination preparation | □ Strongly disagree □ Disagree □ Not sure □ Agree □ Strongly agree |
| Question 5: Time allotted for the peer learning sessions was adequate | □ Strongly disagree □ Disagree □ Not sure □ Agree □ Strongly agree |
| Question 6: Content and quality of the handout was good | □ Strongly disagree □ Disagree □ Not sure □ Agree □ Strongly agree |
| Question 7: I recommend the continuation of the same method in the following years | □ Strongly disagree □ Disagree □ Not sure □ Agree □ Strongly agree |
| Question 8: I recommend using the same method for other courses | □ Strongly disagree □ Disagree □ Not sure □ Agree □ Strongly agree |
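As a concrete illustration of the reliability statistic applied to this questionnaire, Cronbach alpha can be computed directly from the item variances and the variance of the total scores. The sketch below uses only the Python standard library; the response matrix is synthetic (not study data), with each inner list holding one student’s answers to the 8 Likert items.

```python
# Minimal sketch of the Cronbach alpha computation; synthetic responses, NOT study data.
from statistics import pvariance

responses = [
    [4, 4, 5, 4, 3, 4, 4, 4],
    [5, 5, 5, 5, 2, 4, 5, 4],
    [4, 4, 4, 5, 3, 5, 4, 3],
    [3, 4, 4, 4, 1, 3, 4, 4],
    [5, 5, 4, 4, 3, 4, 5, 5],
]

k = len(responses[0])                       # number of items (8 here)
items = list(zip(*responses))               # transpose: one tuple per item
item_vars = [pvariance(item) for item in items]
total_scores = [sum(r) for r in responses]  # each student's total score

# Cronbach alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / pvariance(total_scores))
print(f"Cronbach alpha = {alpha:.3f}")
```

Values approaching 1 indicate high internal consistency; the .895 reported in the Results falls in the range conventionally regarded as reliable.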
Reflective Observational Data (Quantitative and Qualitative)
This component of the study relied on an ethnographic approach to research using direct and unobtrusive observations. Along with quantitatively rating each student against the checklist (referred to in the quantitative performance scores), the abovementioned researchers also evaluated the students’ performances and noted down all outstanding observations (ie, qualitative data), including the attitudes and behaviors of the students on an individual level and their interactions with each other. After the completion of data collection, quantitative data were analyzed descriptively using SPSS Statistics version 25. As for the qualitative data, the researchers adapted the 6-step framework for thematic analysis introduced by Braun and Clarke. This technique is recommended for research on health professionals’ education [ ] and is frequently put into practice in this realm [ - ]. Accordingly, the researchers independently familiarized themselves with the data and then generated the initial codes. Thereafter, the researchers convened 2 consecutive 1-hour meetings to present the noted observations to each other, reflect upon them, and develop a common ground (in relation to the surfacing codes), which enabled effective collaboration in searching for themes and reviewing them. The researchers then defined, named, and reported the themes.
Joint Display Analysis
Findings from all 3 concurrent analyses were merged using joint display analysis. The findings from those analyses were compared and contrasted, and the areas where they confirmed or built upon each other were identified. The integration also created space for contradictory findings in any one area to be highlighted and considered together when drawing meta-inferences, weaving a consistent narrative out of this study’s findings [ , ].
Performance Scores (Quantitative)
Comparison of Peer and Faculty Assessment
A comparison of peer assessments with faculty assessments of clinical examination skills showed that the mean peer evaluation score was significantly higher, at 18.05 (SD 2.15) out of 20, compared with 12.67 (SD 2.63) for Faculty 1 and 11.89 (SD 4.80) for Faculty 2. The 2 faculty members’ means were comparable.
Comparison of Supervised Teaching and Peer Learning
There was a significant difference between the assessment scores of students who received traditional supervised teaching and those who received peer learning as the primary teaching modality. Scores were significantly higher in the supervised group than in the peer learning group, with a mean of 17.33 (SD 2.57) for the former and 14.20 (SD 3.25) for the latter (P=.003).
Students’ Perception of Peer Learning (Quantitative)
Of the total student population, 89% (32/36) completed and returned the questionnaire. The participants were third-year medical students aged between 19 and 29 years, with more female participants (24/36, 67%) than male participants (8/36, 22%). Around 47% (17/36) of the cohort had a grade point average between 3 and 4. The questionnaire was reliable, with a Cronbach alpha of .895. The mean agreement score calculated by cross bonding calculation was 31.56 (SD 6.58), corresponding to a total score of 79%, which indicates “agreement.” Most item score means were high, ranging from 3.91 to 4.22 toward “Agree.” The exception was question 5, “Time allotted for the peer learning sessions was adequate” (), which had the lowest mean at 3.03 (SD 1.492); some students voluntarily elaborated with comments such as “…2 hours is too long a time for peer learning sessions…” In addition, question 8, “I recommend using the same method for other courses,” had the second lowest mean at 3.75 (SD 1.136). The average interitem correlations for questions 1, 2, 3, 4, 6, and 7 ( ) were between 0.338 and 0.853. However, question 5 displayed a consistently low correlation, with most questions having an interitem correlation with it at or below 0.300. Question 8 correlated poorly only with question 4, “I feel peer learning is useful for my Objective Structured Clinical Examination preparation,” while correlating well with the rest of the questions. Factor analysis of the questionnaire confirmed construct validity across all items.
Reflective Observational Data (Qualitative)
The qualitative analysis conducted by the 2 abovementioned researchers resulted in a conceptual framework (). This conceptual framework consists of 2 themes: favorable and unfavorable observations.
Theme 1: Favorable Observations
This theme includes the researchers’ observations capturing students’ attitudes, behaviors, and ways of relating to one another that were desirable for attaining the intervention’s objective (). This theme consists of 3 categories: noticeable comfort or ease, a high level of cohesion and teamwork among the students while undergoing the intervention, and an urge to master skills, where students appeared to be purposefully revisiting the checklist in a repetitive manner.
| Observation | Category |
| --- | --- |
| Ease in pinpointing personal and team shortcomings | Learning in a safe and relaxed environment |
| Evident cohesion and seeking support | Willingness and capacity to work in a team |
| Purposeful revisiting of the checklist | Proactiveness and urge to master skills |
Theme 2: Unfavorable Observations
This theme includes observations that were counterproductive to attaining the objective of the intervention (). This theme encapsulated 3 categories: an overall lack of enthusiasm, inability to appreciate the relevance of potential physical findings, and poor technique while performing the set of skills.
| Category | Description |
| --- | --- |
| Lack of enthusiasm | Students do not appreciate the value of these peer learning sessions |
| Inability to appreciate relevance | Cases where students struggle to interpret potential findings of physical examination |
| Poor technique | Incidences where students appeared to perform the examination, but the quality of the core technique was suboptimal |
The output of the 3 concurrent analyses generated findings that were all holistically considered in the iterative process of joint display analysis. Most of the findings complemented each other (as illustrated in the joint display). To start with, in terms of highlights of the experience, the students expressed appreciation of this particular peer learning experience. Along these lines, the instructors observed that the students appeared comfortable and seemed to appreciate the safety and comfort of the encapsulating environment. The instructors also perceived that students exhibited teamwork and proactiveness. As for the limitations of the experience, the students seemed to overrate each other. Further, the students expressed dissatisfaction with the allotted time; they perceived it as too long for the purpose of the exercise. Despite their self-reported positive perception of the exercise, the students indicated that they would not like to replicate it in other courses. Moreover, the instructors developed the impression that the students were not enthusiastic during the experience, and (in some instances) there were aspects that seemed to challenge the students, namely, the core technique and the interpretation of findings.
Overview of This Study
In this study, we described the implementation of peer learning as an assessment tool in clinical skills education for undergraduate medical students. The findings of this study showed how the peer learning experience under investigation is characterized by certain highlights and limitations in relation to self-regulated learning. The concept of social and cognitive congruence underlies the dynamics of peer learning and is explained by the similarity of thinking, reasoning, and social roles, which account for the successful outcomes of peer learning. While participating students had an overall positive perception of peer learning, they were less objective (relative to their instructors) when evaluating their colleagues’ performance. In addition, students’ clinical skills and the quality of their performance of the clinical examination were better under faculty supervision. However, there was clear evidence that peer learning in clinical skills fostered self-regulated learning through the creation of a safe learning environment, in line with the “scaffolding strategy” described by Zimmerman [ ]. Accordingly, the findings of this study support recommending that other similar programs integrate peer learning into undergraduate clinical skills education. Such an intervention should be designed to leverage this technique’s highlights while circumventing its limitations.
Comparing peer with faculty assessment of clinical skill performance is essential for establishing the concurrent validity of peer assessment. Our results showed that the mean score for peer assessment was higher than that of faculty assessment. This lack of alignment between peer and faculty evaluation could be due to the assessment of different dimensions even when using the same checklist: students tend to assess recall of steps, while the faculty consider the technique in the execution of every step of the skill set to be of equal importance. The peer assessors considered the face value of the checklist, which solely outlined the steps. As for the instructors, their expertise automatically sets them at an advantage where they “look beyond the obvious.” From this perspective, the simplistic structure (ie, design) and content of the checklist, with no emphasis on the expected quality of the technique, might partially account for this discrepancy. Another possible explanation for this misalignment could be the potential bias associated with students taking on the assessor’s role. Moreover, our results demonstrated that students subjected to supervised teaching as the primary modality attained higher scores than the peer learning group. This was evident from the scores recorded by the faculty through direct observation of the former group and observation of video recordings of the latter group. This was further supported and can be explained by 2 unfavorable observations noted from the qualitative analysis of peer learning, namely, suboptimal quality of technique and inability to interpret the potential findings of the examination. These findings highlight the need for the faculty to support and guide students on appropriate techniques when conducting physical examinations, as this is the key to eliciting physical signs in real-life patient encounters.
In addition, even though feedback from peers is anticipated to be much more efficient when a checklist is used [ ], we believe that students require proper training before they develop the necessary competency in assessing and guiding one another. This is consistent with findings from previous studies, which show that students cannot be reliable assessors unless they receive sufficient “training and familiarity with rating criteria, resulting in higher rater agreement and internal consistency” [ ]. Accordingly, such interventions should be designed so that learners first go through supervised teaching offered by experts in the subject matter and thereafter engage in peer learning. As such, supervised teaching will precede peer learning, and the benefits of peer learning, in terms of practicing and refining clinical skills, can be leveraged after the technical bases have been covered.
Perception of the Students With Regard to Peer Learning
Findings from the questionnaire demonstrated that most students had a positive perception of peer learning, which is in line with most research findings [, ]. However, it seems that the benefits of peer learning were limited, as most students were reluctant to recommend implementing peer learning in other courses. This may be because the students perceived 2 hours as too long a duration for the exercise. Another possible explanation is that all other courses are knowledge-based, and therefore, such a peer activity may not be relevant or of much benefit.
A general lack of enthusiasm was observed among students during the peer learning sessions. A possible explanation could be that there was not much at stake, as this team activity was considered part of their formative rather than summative assessment. This attitude toward formative assessments is most probably accounted for by the lack of maturity in self-regulated learning skills among our preclinical students, which is one of the many disadvantages of an exam-oriented culture. Moreover, this explanation may justify the students’ underappreciation of the time dedicated to the peer learning sessions, as highlighted in their comments in the questionnaire. Pintrich [ ] highlighted the importance of motivation in self-regulated learning. He perceives the steps of self-regulated learning to be (1) forethought, planning, and activation; (2) monitoring; (3) control; and (4) reaction and reflection. Each of those steps, in his opinion, has 4 different areas for regulation (cognition, motivation, behavior, and context). From this perspective, a possible solution to the observed lack of enthusiasm among participating students in this study could be to dedicate some time at the beginning of the course to better orient students to the short-term and long-term benefits of peer learning as a way of motivating them (ie, increasing the perceived benefits of engaging in the exercise).
A selective learning approach dominated students’ learning behavior during the peer learning sessions. The majority focused on attaining a sequential mastery of the examination steps, regardless of the quality of performance. This behavior is consistent with the predictions of the cognitive load theory. The differential use of the checklist between students and faculty was also evident, as students tended to use it as a learning tool while the faculty considered it more of an assessment tool. This reflects the cultural norms in the United Arab Emirates, where there is a high level of cooperation and uncertainty avoidance. This trait was further demonstrated by competent students supporting others through repetition of skills toward mastery, which is in line with the social learning theory that describes the cooperative nature of students’ learning from each other through “modelling, instructing, and feedback” [ ]. Moreover, Hadwin et al [ ] discuss self-regulated learning in the context of collaborative learning, where they differentiate between coregulation and socially shared regulated learning. In the former, the regulatory actions are guided by a particular group member. In the latter, regulatory actions emerge through a series of transactive exchanges among group members, which were clearly observed during our peer learning sessions. Another interesting observation was that the relevance of potential clinical findings did not seem to be a priority of the learning experience for most students. Consequently, despite the efforts some students invested in interpreting potential findings of the physical examination, they still, on many occasions, ended up providing misguided peer correction.
Given these findings, we feel that a subject matter expert in clinical skills education is essential for assisting students in executing the correct clinical techniques, for enabling them to appreciate the potential findings of a physical examination, and for attending to their questions and uncertainties. On a positive note, it seems that peer learning creates a safe environment for the students, which is in line with evidence from studies where students reported comfort in interacting without the pressure of competition and simply felt more at ease with a peer [ ]. This kind of safety in learning falls under the scaffolding strategy described by Zimmerman [ ], which he considers to be a key factor in the performance phase of self-regulation; he elaborates that “it may also help to enrich the learning experience by allowing students to dig deeper into the content and further explore.”
Strengths and Limitations of This Study
Our study’s strength lies in 3 main features: (1) performance of the study in a live educational setting, (2) integration of data through the use of a convergent mixed methods approach to research, and (3) randomization in the cross-over part of the study.
One of this study’s main limitations is that the generalizability of the findings is limited by the small sample size. Another limitation is that the peer assessment scores obtained were not a pure reflection of performance, as students mostly used the checklist as a learning tool rather than as an assessment tool. In addition, peer assessment may lack objectivity due to the abovementioned potential bias, which questions its reliability as an assessment. Moreover, although chosen for simplicity, the score divisions of 1 for “done” and 0 for “not done” on the evaluation checklist did not reflect the quality of performance, which is usually assessed on a broader spectrum. Finally, this intervention’s outcomes were limited to step 1, “reaction,” and to a lesser degree, step 2, “learning,” of Kirkpatrick’s model of evaluation.
Comparison With Prior Work
Our findings of students’ positive perception of peer learning are in line with the findings of most research studies in this area [, ]. However, our decision to have students undertake peer learning as a primary modality with no previous training might have been miscalculated, as most studies ensured that peer-assisted tutors were subjected to some amount of training [ ]. This might, in part, account for the misguided peer correction mentioned earlier and perhaps even the misalignment between peer and faculty evaluation of clinical skill performance. In addition, most peer learning studies focused on cross-level peer learning. In contrast, in our study, we investigated same-level peer learning to make use of its advantages, such as informality and practicality in terms of timetabling, compared to cross-level peer learning [ ]. With regard to the comparison of outcomes of clinical skill teaching by peers versus faculty as a primary modality, the evidence in the literature is mixed: some studies reported no significant difference [ ] while others concluded that students in the faculty-led teaching group required less time to reach the desirable outcomes [ ].
The long-term effects of peer learning in medical education are poorly understood; therefore, more robust outcome measurement tools need to be developed that go beyond the first and second levels of Kirkpatrick’s model of evaluation [ ]. Moreover, we recommend that future studies recruit larger samples of participants to enable more reliable statistical analysis and more representative findings.
Our study’s findings provide evidence of the acceptability and benefits of peer learning in the clinical skills education of undergraduate medical students, including but not limited to the promotion of interactive social learning. The intervention under investigation also constituted a safe learning environment for students to exercise self-regulated learning. However, peer learning is insufficient as a standalone strategy; for maximum benefit, it needs to be preceded by supervised teaching delivered by a subject matter expert. In summary, we recommend incorporating peer learning as a secondary modality into the design of medical curricula to empower students to exercise self-regulated learning and to enable them to acquire teaching and assessment skills early in their learning trajectory, fostering a lifelong culture of teaching.
The authors would like to extend their gratitude to the following MBRU stakeholders: Mari-Lynn Sanggalang, Safeeja Abdul Rahuman, Jolly Isaac, and Meghana Sudhir.
Conflicts of Interest
- Topping K. Peer-Assisted Learning. First Edition. New Jersey: Lawrence Erlbaum Associates Publishers; 2009.
- Engels D, Kraus E, Obirei B, Dethleffsen K. Peer teaching beyond the formal medical curriculum. Adv Physiol Educ 2018 Sep 01;42(3):439-448 [FREE Full text] [CrossRef] [Medline]
- House J, Choe C, Wourman H, Berg K, Fischer J, Santen S. Efficient and Effective Use of Peer Teaching for Medical Student Simulation. West J Emerg Med 2017 Jan;18(1):137-141 [FREE Full text] [CrossRef] [Medline]
- Ten Cate O, Durning S. Dimensions and psychology of peer teaching in medical education. Med Teach 2007 Sep;29(6):546-552. [CrossRef] [Medline]
- Yu T, Wilson NC, Singh PP, Lemanu DP, Hawken SJ, Hill AG. Medical students-as-teachers: a systematic review of peer-assisted teaching during medical school. Adv Med Educ Pract 2011;2:157-172 [FREE Full text] [CrossRef] [Medline]
- Lawton B, Macdougall C. Developing clinical skills: a simple and practical tool. Med Educ 2004 Nov;38(11):1198-1199. [CrossRef] [Medline]
- Zimmerman BJ. A social cognitive view of self-regulated academic learning. Journal of Educational Psychology 1989;81(3):329-339. [CrossRef]
- Boekaerts M, Zeidner M, Pintrich P. General theories and models of self-regulation. In: Boekaerts M, editor. Handbook of Self-Regulation. First edition. Cambridge, Massachusetts: Academic Press; 1999:1-8.
- Zimmerman BJ. From Cognitive Modeling to Self-Regulation: A Social Cognitive Career Path. Educational Psychologist 2013 Jul;48(3):135-147. [CrossRef]
- Speyer R, Pilz W, Van Der Kruis J, Brunings JW. Reliability and validity of student peer assessment in medical education: A systematic review. Medical Teacher 2011 Oct 24;33(11):e572-e585. [CrossRef]
- Burgess A, McGregor D, Mellis C. Medical students as peer tutors: a systematic review. BMC Med Educ 2014 Jun 09;14:115 [FREE Full text] [CrossRef] [Medline]
- Zimmerman B. Attaining self-regulation: A social cognitive perspective. In: Handbook of Self-Regulation. San Diego: Academic Press; 2010.
- Fetters MD, Curry LA, Creswell JW. Achieving integration in mixed methods designs-principles and practices. Health Serv Res 2013 Dec;48(6 Pt 2):2134-2156 [FREE Full text] [CrossRef] [Medline]
- Guetterman TC, Fetters MD, Creswell JW. Integrating Quantitative and Qualitative Results in Health Science Mixed Methods Research Through Joint Displays. Ann Fam Med 2015 Nov;13(6):554-561 [FREE Full text] [CrossRef] [Medline]
- Moseholm E, Fetters MD. Conceptual models to guide integration during analysis in convergent mixed methods studies. Methodological Innovations 2017 Dec 14;10(2):205979911770311-205979911770311. [CrossRef]
- Fetters MD. The Mixed Methods Research Workbook: Activities for Designing, Implementing, and Publishing Projects. California, USA: Sage Publications; 2019.
- Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Research in Psychology 2006 Jan;3(2):77-101. [CrossRef]
- Kiger ME, Varpio L. Thematic analysis of qualitative data: AMEE Guide No. 131. Medical Teacher 2020 May 01;42(8):846-854. [CrossRef]
- Amir Rad FA, Otaki F, AlGurg R, Khan E, Davis D. A qualitative study of trainer and trainee perceptions and experiences of clinical assessment in post-graduate dental training. Eur J Dent Educ 2021 May;25(2):215-224. [CrossRef] [Medline]
- Mascarenhas S, Al-Halabi M, Otaki F, Nasaif M, Davis D. Simulation-based education for selected communication skills: exploring the perception of post-graduate dental students. Korean J Med Educ 2021 Mar;33(1):11-25 [FREE Full text] [CrossRef] [Medline]
- Amir Rad F, Otaki F, Baqain Z, Zary N, Al-Halabi M. Rapid transition to distance learning due to COVID-19: Perceptions of postgraduate dental learners and instructors. PLoS One 2021 Jun 17;16(6):1-22 [FREE Full text] [CrossRef] [Medline]
- Haynes-Brown TK, Fetters MD. Using Joint Display as an Analytic Process: An Illustration Using Bar Graphs Joint Displays From a Mixed Methods Study of How Beliefs Shape Secondary School Teachers’ Use of Technology. International Journal of Qualitative Methods 2021 Feb 11;20:160940692199328. [CrossRef]
- Seaman J, Dettweiler U, Humberstone B, Martin B, Prince H, Quay J. Joint Recommendations on Reporting Empirical Research in Outdoor, Experiential, Environmental, and Adventure Education Journals. Journal of Experiential Education 2020 Oct 21;43(4):348-364. [CrossRef]
- Stonewall J, Dorneich M, Dorius C, Rongerude J. A review of bias in peer assessment. 2018 Presented at: The Collaborative Network for Engineering and Computing Diversity Conference; 2018 Apr 29; Virginia URL: https://peer.asee.org/29510
- Daud NM, Noor AL. Students as assessors. In: Language Studies in the Muslim World. Malaysia: Malaysia Press; 2011:1-18.
- Khaw C, Raw L. The outcomes and acceptability of near-peer teaching among medical students in clinical skills. Int J Med Educ 2016 Jun 12;7:188-194 [FREE Full text] [CrossRef] [Medline]
- Pintrich PR. A Motivational Science Perspective on the Role of Student Motivation in Learning and Teaching Contexts. Journal of Educational Psychology 2003;95(4):667-686. [CrossRef]
- Young JQ, Van Merrienboer J, Durning S, Ten Cate O. Cognitive Load Theory: Implications for medical education: AMEE Guide No. 86. Medical Teacher 2014 Mar 04;36(5):371-384. [CrossRef]
- Vygotsky LS, Cole M. Mind in Society: Development of Higher Psychological Processes. Cambridge, USA: Harvard University Press; 1978.
- Hadwin A, Järvelä S, Miller M. Self-regulated, co-regulated, and socially shared regulation of learning. In: Handbook of Self-Regulation of Learning and Performance. Oxfordshire: Taylor & Francis Group; 2011:65-84.
- Tai J, Molloy E, Haines T, Canny B. Same-level peer-assisted learning in medical clinical placements: a narrative systematic review. Med Educ 2016 Apr;50(4):469-484. [CrossRef] [Medline]
- Kirkpatrick R, Zang Y. The Negative Influences of Exam-Oriented Education on Chinese High School Students: Backwash from Classroom to Child. Lang Test Asia 2011;1(3):36-45. [CrossRef]
- Wadoodi A, Crosby JR. Twelve tips for peer-assisted learning: a classic concept revisited. Med Teach 2002 May;24(3):241-244. [CrossRef] [Medline]
MBRU: Mohammed Bin Rashid University of Medicine and Health Sciences
Edited by G Eysenbach; submitted 19.11.20; peer-reviewed by P Dattathreya, J Kimmerle; comments to author 01.03.21; revised version received 05.04.21; accepted 19.05.21; published 13.07.21.
Copyright
©Shaikha Alzaabi, Mohammed Nasaif, Amar Hassan Khamis, Farah Otaki, Nabil Zary, Sharon Mascarenhas. Originally published in JMIR Medical Education (https://mededu.jmir.org), 13.07.2021.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.