Published in Vol 12 (2026)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/71338.
Application of Mixed Reality for Ophthalmic Clinical Skills and Diagnosis: Prospective Study


1Department of Ophthalmology, National University Hospital, 5 Lower Kent Ridge Road, Singapore, Singapore

2Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore

3Biostatistics Unit, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore

4Division of Hepatobiliary & Pancreatic Surgery, National University Hospital, Singapore, Singapore

5Division of General Surgery (Thyroid & Endocrine Surgery), National University Hospital, Singapore, Singapore

6Engineering Design and Innovation Centre, National University of Singapore, Singapore, Singapore

*these authors contributed equally

Corresponding Author:

Chun Jin Marcus Tan, MBBS, MMed, MTech


Background: Mixed reality has the potential to transform delivery of medical education. With tools such as HoloLens 2, educators can create immersive, interactive simulations that enable students to practice and engage with real-world scenarios in a controlled environment.

Objective: We postulated that a hybrid ophthalmology curriculum incorporating EyelearnMR (a simulation application) would be noninferior to traditional teaching. We compared learning outcomes and obtained user feedback.

Methods: This was a single-blind, cluster-randomized prospective study. Fourth-year medical students were organized into batches and then assigned to 2 groups: the EyelearnMR and control arms. We used a quasi-randomized design with alternation allocation based on clinical grouping. The intervention group had an additional 2 hours of practice with the EyelearnMR devices. During the second week of their posting, a video assessment (5 scenarios with 17 multiple-choice questions) was conducted for both groups: mid-posting for the intervention group and at the end of the posting for the control group. The rationale for assessing the intervention group earlier, in addition to setting a higher bar for EyelearnMR, was to allow the study to demonstrate noninferiority between the groups. In the event of noninferiority, we could demonstrate that EyelearnMR can replace some degree of traditional clinical teaching, even with a shorter total clinical exposure time. Students in the control group were allowed to experience the EyelearnMR modules for 2 hours at the end of the posting. Both groups were asked to complete the User Experience Questionnaire.

Results: This study was funded in February 2023, and recruitment took place from July 2023 to January 2024. A total of 54 students were recruited—24 (44.4%) in the control arm and 30 (55.6%) in the EyelearnMR arm. The EyelearnMR group performed significantly better than the control group (median scores of 16, IQR 15-17, and 15, IQR 14-15, respectively; P=.03; Mann-Whitney U test). All 30 (100%) students in the EyelearnMR group scored full marks (3/3) for the technique portion, compared to 17 of 24 (70.8%) students in the control group (P=.002). There was no statistically significant difference between the groups for the examination (P=.13) and pathology (P=.33) portions. This was despite the EyelearnMR group having a reduced clinical time of 7 days compared to 10 days in the control group. The User Experience Questionnaire showed positive evaluations for attractiveness (mean 1.413, SD 0.969), efficiency (mean 0.822, SD 1.068), dependability (mean 1.087, SD 0.801), stimulation (mean 1.577, SD 0.845), and novelty (mean 1.606, SD 0.967).

Conclusions: EyelearnMR combined with traditional teaching was noninferior to traditional teaching alone. It provided a comparable experience and supported learning objectives equally. It is an effective supplementary teaching tool in ophthalmic education and may confer additional learning benefits beyond those of a traditional clinical posting.

JMIR Med Educ 2026;12:e71338

doi:10.2196/71338


Introduction

Mixed reality (MR) is an emerging technology that merges the real and virtual worlds [1,2] and has the potential to transform the delivery of medical education [3]. This technology could address challenges faced in the delivery of high-quality medical education, including accessibility, consistency, quality, and cost [4,5]. The Microsoft HoloLens 2 is a commercially available MR headset that allows for bidirectional communication with multiple remote users via video, voice, and MR composites [6]. The technology has previously been used in various clinical and educational scenarios, including anatomical teaching, perioperative planning, and surgical training [4,5,7-16]. It has also been used to augment and supplement the teaching and examination of medical students' clinical skills [17-19], allowing for the creation of immersive multisensory (audio, visual, and tactile) content that replicates real-world scenarios. MR allows students to practice techniques and refine their skills in a controlled environment, leading to fewer errors and increased confidence in real-world scenarios.

The traditional methods of teaching students how to perform an eye movement examination and detect abnormalities of ocular motility include clinical demonstrations, slide and video presentations, and case-based teaching in the wards and clinics. The last scenario, while the most realistic, is also opportunistic and inconsistent.

EyelearnMR is an MR simulation application designed to teach and assess ocular examination via a wearable headset. The application consists of a holographic 3D patient, a Lang fixation cube for ocular motility examination, and animation logic that models eye movement when the patient’s eyes are fixating on the cube. It is deployed on the Microsoft HoloLens 2 device and offers a 3D superimposition of simulated patients in a physical space (Figure 1). MR simulation enables technology-enhanced learning, which provides opportunities for trainees to acquire, develop, and maintain knowledge, skills, and behavior via experiential learning. MR simulation in medical education has several advantages, including precise tracking of quantifiable performance parameters in a clinical examination and consistent reproduction of clinical signs without fatiguing a volunteer patient. It can also be readily deployed in large class sizes outside of a traditional classroom with fewer physical constraints.

Figure 1. Screen capture of (A) the instructor’s view from EyelearnMR and (B) the user’s view from EyelearnMR.
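As a rough illustration of the animation logic described above, the following Python sketch computes the rotation that aims a simulated eye at a fixation target and caps it to mimic restricted motility. This is a hypothetical sketch, not EyelearnMR's actual implementation; the function names, the coordinate convention, and the duction-limit mechanism are all assumptions made for illustration.

```python
import math

def gaze_angles(eye_pos, target_pos):
    """Return (yaw, pitch) in degrees that aim an eye at a target point.

    Coordinates are (x, y, z): x to the patient's right, y up, z forward.
    """
    dx = target_pos[0] - eye_pos[0]
    dy = target_pos[1] - eye_pos[1]
    dz = target_pos[2] - eye_pos[2]
    yaw = math.degrees(math.atan2(dx, dz))                    # left/right
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # up/down
    return yaw, pitch

def clamp_duction(angle_deg, limit_deg):
    """Cap a rotation to simulate restricted motility (eg, a nerve palsy)."""
    return max(-limit_deg, min(limit_deg, angle_deg))

# A fixation cube 0.5 m to the right of the eye and 1 m in front of it:
yaw, pitch = gaze_angles((0.0, 0.0, 0.0), (0.5, 0.0, 1.0))
print(round(yaw, 1), round(pitch, 1))       # 26.6 0.0
# An eye with abduction limited to 15 degrees undershoots the target:
print(round(clamp_duction(yaw, 15.0), 1))   # 15.0
```

Clamping the computed duction is one simple way a simulator could reproduce, say, an abduction deficit consistently on every run, which is exactly the reproducibility advantage over volunteer patients noted above.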

A review by Cook et al [20] showed that simulation-based learning can achieve outcomes comparable to those of traditional methods in other fields of medical education. We therefore designed this study to evaluate the feasibility and reception of an MR-based ophthalmology curriculum for honing ophthalmic skills, providing evidence to support the wider adoption of MR in medical curricula. Moreover, the introduction of an MR-based ophthalmology curriculum could help address challenges in ophthalmology training posed by an increasing number of students, limited faculty tutors, and reduced patient exposure.

In this study, we aimed to compare students’ performance on a bespoke video-based assessment between a curriculum incorporating EyelearnMR and a traditional clinical teaching–only curriculum. The assessment evaluated 3 key dimensions of ocular motility learning: examination technique, recognition of clinical signs, and diagnostic reasoning. We postulated that a hybrid ophthalmology curriculum incorporating EyelearnMR would be noninferior to traditional teaching. In addition, we evaluated student perceptions of the MR platform through the validated User Experience Questionnaire (UEQ).


Methods

Overview

This was a single-blind, cluster-randomized prospective study. We included undergraduate medical students (fourth year; 10 clinical groups in total; age range 23 to 26 years) posted to the Department of Ophthalmology, National University Hospital, Singapore, for their 2-week ophthalmology rotation. The clinical groups (each comprising 6-7 students) were allocated to 2 arms (5 clinical groups to the EyelearnMR arm and the other 5 to the control arm). We used a quasi-randomized design with alternation allocation. For feasibility, allocation was performed at the level of the clinical group, and all students in the same clinical group who agreed to participate were placed in the same arm. The EyelearnMR learning module consists of 6 different scenarios of ocular motility simulated in a model patient, which the students were free to explore during the 2-hour practice session. The module was presented to the students in a structured and guided fashion in accordance with pedagogical best practices.

Both groups underwent the usual 2-week ophthalmology rotation with 6 preset tutorials on various topics and 9 days of clinical exposure. The intervention group had an additional 2 hours of practice with the EyelearnMR devices. Six HoloLens units were available for the study. During the second week of their posting, a 30-minute assessment (5 videos were played, and students answered 17 multiple-choice questions on ocular motility in the domains of examination technique, pathology, and examination signs) was conducted for both the control and intervention groups. To our knowledge, no standardized assessments with psychometric data and benchmarks are available, so a bespoke assessment was created. We used the “remember,” “understand,” and “apply” levels of Bloom’s taxonomy to develop the questions for the scenarios. The clinical assessment was intentionally designed to focus on core competencies relevant to ocular motility, specifically examination technique, recognition of clinical signs, and diagnostic reasoning. A total of 17 structured multiple-choice questions were administered across 5 video-based clinical scenarios, each designed to assess different facets of performance. The questions were vetted by 2 senior neuro-ophthalmologists in clinical practice, ensuring clinical validity and educational relevance. The following is a breakdown of the scenarios:

  • Scenario 1: normal patient to test ocular motility assessment technique
  • Scenario 2: patient with left partial cranial nerve 3 palsy
  • Scenario 3: patient with right cranial nerve 6 palsy
  • Scenario 4: patient with thyroid eye disease (lid retraction and ophthalmoplegia)
  • Scenario 5: patient with left internuclear ophthalmoplegia

The assessment was conducted mid-posting for the intervention group and at the end of the posting for the control group. The assessment questions were not shared with the groups beforehand. The rationale for assessing the intervention group earlier, in addition to setting a higher bar for EyelearnMR, was to allow for the provision of outcomes showing noninferiority between both groups. In the event of noninferiority, we could demonstrate that EyelearnMR is able to replace some degree of traditional clinical teaching even with a shorter total clinical exposure time. For the EyelearnMR group, the 2-hour EyelearnMR session and the 30-minute video-based assessment were scheduled on days 8 and 9 of the 2-week posting (Figure 2). For the control group, the same assessment was conducted on days 11 and 12, coinciding with the usual end-of-posting evaluation. This schedule reflected how EyelearnMR was intended to be implemented in practice: as a mid-rotation consolidation session that could be integrated into the existing timetable without displacing core teaching activities or overloading the final days of the posting.

Figure 2. User study timeline.

Students in both groups were asked to fill out the UEQ [21]. Students in the control group were given the opportunity to experience the EyelearnMR modules for 2 hours at the end of the posting, following which they were also asked to fill out the UEQ.

A sample of the assessment questions and the UEQ can be found in Multimedia Appendix 1.

Statistical Analysis

Data were analyzed descriptively first. Means and SDs or medians and ranges were reported for the numerical variables, whereas frequencies and percentages were reported for the categorical or ordinal variables. Comparison of scores between groups was analyzed using the Mann-Whitney U test. The Fisher exact test was used to compare the test results on technique, examination, and pathology-related knowledge. P values of <.05 were considered statistically significant. Results from the UEQ were evaluated using the data analysis tools provided by the UEQ team [22]. The range of the scales is between −3 (“horribly bad”) and +3 (“extremely good”). Values between −0.8 and 0.8 represent a neutral evaluation of the corresponding scale, values of >0.8 represent a positive evaluation, and values of <−0.8 represent a negative evaluation [21]. Statistical analysis was carried out using SPSS (version 29.0.0; IBM Corp).
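As a concrete illustration of the tests described above, the sketch below implements a two-sided Fisher exact test for a 2×2 table from first principles (standard library only) alongside the UEQ interpretation thresholds; applied to the technique subscores reported in Table 1, it reproduces the published P value. This is an illustrative sketch, not the study's analysis code (which used SPSS and the UEQ team's tools); the function names are our own.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test for the table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):  # probability of the table with cell (1,1) == x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

def ueq_evaluation(mean_score):
    """Classify a UEQ scale mean using the -0.8 / +0.8 convention."""
    if mean_score > 0.8:
        return "positive"
    if mean_score < -0.8:
        return "negative"
    return "neutral"

# Technique subscores from Table 1:
# rows = full marks (3 points) vs 2 points; columns = control vs MR group
p = fisher_exact_2x2(17, 30, 7, 0)
print(round(p, 3))             # 0.002, matching the reported P value
print(ueq_evaluation(1.413))   # attractiveness -> "positive"
print(ueq_evaluation(0.736))   # perspicuity -> "neutral"
```

The Mann-Whitney U comparison of total scores follows the same pattern and is available, for example, as `scipy.stats.mannwhitneyu` for readers who prefer a library implementation.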

Ethical Considerations

This study was approved by the National University of Singapore Learning and Analytics Committee on Ethics (L2022-09-02). Informed consent was obtained and participants’ confidentiality was maintained by not having any personal data recorded.


Results

Recruitment was conducted from July 2023 to January 2024 and had been completed by the time this manuscript was submitted.

A total of 54 fourth-year medical students (age range 23-26 years) were recruited for this study: 24 (44.4%) in the control arm and 30 (55.6%) in the EyelearnMR arm. The median assessment score was 15 (IQR 14-15) for the control group and 16 (IQR 15-17) for the EyelearnMR group (P=.03). The assessment questionnaire consisted of items about testing techniques, examination, and pathology-related knowledge (as shown in Multimedia Appendix 1). As shown in Table 1, all 30 (100%) students in the EyelearnMR group scored full marks (3/3) for the technique portion, compared to 17 of 24 (70.8%) students in the control group; this difference was statistically significant (P=.002). There was no statistically significant difference between the groups for the examination and pathology portions.

Table 1. Fisher exact test results according to subgroup analysis.

Question | Control group (n=24), n (%) | MRa group (n=30), n (%) | Fisher exact test P value
Technique (out of 3 points) | | | .002
  2 points | 7 (29.2) | 0 (0.0) |
  3 points | 17 (70.8) | 30 (100.0) |
Examination (out of 8 points) | | | .13
  4 points | 1 (4.2) | 1 (3.3) |
  5 points | 3 (12.5) | 3 (10.0) |
  6 points | 9 (37.5) | 9 (30.0) |
  7 points | 9 (37.5) | 11 (36.7) |
  8 points | 2 (8.3) | 11 (36.7) |
Pathology (out of 6 points) | | | .33
  4 points | 1 (4.2) | 1 (3.3) |
  5 points | 7 (29.2) | 4 (13.3) |
  6 points | 16 (66.7) | 25 (83.3) |

aMR: mixed reality.

The results from the UEQ are shown in Tables 2 and 3, as well as in Figure 3. Regarding the combined evaluation, attractiveness, efficiency, dependability, stimulation, and novelty received a positive evaluation. Perspicuity received a neutral evaluation. Statistical analysis using a simple 2-tailed t test showed no statistically significant difference between the HoloLens and control groups for attractiveness, perspicuity, efficiency, dependability, stimulation, and novelty (P=.37, P=.26, P=.57, P=.38, P=.79, and P=.12, respectively).

Table 2. Mean scores and variances of the User Experience Questionnaire items.

Item | Score (–3 to +3), mean (SD) | Variance | Lower bound | Upper bound | Scale
1a | 1.0 (1.6) | 2.6 | “Annoying” | “Enjoyable” | Attractiveness
2a | 1.6 (1.0) | 0.9 | “Not understandable” | “Understandable” | Perspicuity
3a | 1.6 (1.6) | 2.6 | “Creative” | “Dull” | Novelty
4b | 0.5 (1.6) | 2.4 | “Easy to learn” | “Difficult to learn” | Perspicuity
5a | 1.5 (1.2) | 1.4 | “Valuable” | “Inferior” | Stimulation
6a | 1.7 (1.0) | 1.0 | “Boring” | “Exciting” | Stimulation
7a | 1.9 (1.0) | 1.0 | “Not interesting” | “Interesting” | Stimulation
8b | 0.2 (1.4) | 2.0 | “Unpredictable” | “Predictable” | Dependability
9b | 0.0 (1.5) | 2.3 | “Fast” | “Slow” | Efficiency
10a | 2.0 (1.0) | 1.1 | “Inventive” | “Conventional” | Novelty
11a | 1.6 (1.1) | 1.1 | “Obstructive” | “Supportive” | Dependability
12a | 1.8 (1.0) | 1.0 | “Good” | “Bad” | Attractiveness
13b | 0.0 (1.4) | 1.9 | “Complicated” | “Easy” | Perspicuity
14a | 1.3 (1.1) | 1.3 | “Unlikable” | “Pleasing” | Attractiveness
15a | 1.3 (1.2) | 1.5 | “Usual” | “Leading edge” | Novelty
16a | 1.2 (1.2) | 1.4 | “Unpleasant” | “Pleasant” | Attractiveness
17a | 1.0 (1.2) | 1.5 | “Secure” | “Not secure” | Dependability
18a | 1.2 (1.2) | 1.4 | “Motivating” | “Demotivating” | Stimulation
19a | 1.5 (1.1) | 1.2 | “Meets expectations” | “Does not meet expectations” | Dependability
20b | 0.7 (1.5) | 2.2 | “Inefficient” | “Efficient” | Efficiency
21a | 0.8 (1.6) | 2.6 | “Clear” | “Confusing” | Perspicuity
22a | 1.1 (1.3) | 1.7 | “Impractical” | “Practical” | Efficiency
23a | 1.5 (1.1) | 1.2 | “Organized” | “Cluttered” | Efficiency
24a | 1.6 (1.0) | 1.1 | “Attractive” | “Unattractive” | Attractiveness
25a | 1.6 (1.0) | 0.9 | “Friendly” | “Unfriendly” | Attractiveness
26a | 1.5 (1.4) | 1.9 | “Conservative” | “Innovative” | Novelty

aPositive evaluation.

bNeutral evaluation.

Table 3. Mean scores and variances of the User Experience Questionnaire (UEQ) scales overall, for the HoloLens group, and for the control group.

UEQ scale | Score (–3 to +3), mean (SD) | Variance | Cronbach α coefficient
Overall
  Attractivenessa | 1.413 (0.969) | 0.94 | 0.92
  Perspicuityb | 0.736 (1.094) | 1.2 | 0.8
  Efficiencya | 0.822 (1.068) | 1.14 | 0.8
  Dependabilitya | 1.087 (0.801) | 0.64 | 0.61
  Stimulationa | 1.577 (0.845) | 0.71 | 0.78
  Noveltya | 1.606 (0.967) | 0.93 | 0.72
EyelearnMR group
  Attractivenessa | 1.311 (1.01) | 1.02 | 0.92
  Perspicuityb | 0.592 (1.15) | 1.31 | 0.82
  Efficiencyb | 0.75 (1.13) | 1.28 | 0.86
  Dependabilitya | 1 (0.77) | 0.6 | 0.6
  Stimulationa | 1.55 (0.86) | 0.74 | 0.76
  Noveltya | 1.433 (1.04) | 1.09 | 0.72
Control group
  Attractivenessa | 1.553 (0.915) | 0.84 | 0.93
  Perspicuitya | 0.932 (1.012) | 1.02 | 0.76
  Efficiencya | 0.92 (0.992) | 0.98 | 0.71
  Dependabilitya | 1.205 (0.84) | 0.71 | 0.61
  Stimulationa | 1.614 (0.841) | 0.71 | 0.83
  Noveltya | 1.841 (0.818) | 0.67 | 0.71

aPositive evaluation.

bNeutral evaluation.

Figure 3. Graphs of mean User Experience Questionnaire scales: (A) combined control and EyelearnMR groups; (B) EyelearnMR group; and (C) control group.

Discussion

Principal Findings

In this noninferiority study, a curriculum incorporating EyelearnMR performed at least as well as the traditional clinical teaching–only curriculum and overall supported comparable learning objectives. Our study demonstrates that EyelearnMR on the HoloLens 2 is an effective teaching tool for ocular motility examination. Students who used EyelearnMR achieved significantly higher scores on the video-based assessment compared with those who underwent traditional clinical teaching alone (median score 16 vs 15 out of 17; P=.03). Although the difference was statistically significant, the absolute gap of 1 point on a 17-point scale is small, and its practical or clinical relevance should be interpreted in context. There was a particularly marked difference in examination technique (P=.002). This was despite the EyelearnMR group having reduced clinical exposure time at the time of testing. The improved performance in the EyelearnMR group likely reflects the combined impact of structured deliberate practice, standardized exposure to key motility disorders, and consistent reproduction of clinical signs that are not guaranteed in opportunistic encounters with real patients.

Positive evaluations were received for attractiveness, efficiency, dependability, stimulation, and novelty. Efficiency, dependability, stimulation, and perspicuity are the parameters most relevant to learning, and positive evaluations were received for the first 3. Perspicuity (the quality of being clear and easy to understand) received a neutral evaluation. We postulate that many students had difficulty with the pinching gesture required to select options when operating the HoloLens; despite the technique being demonstrated to them several times, a few were unable to master it. We considered this a technical limitation of the device that is expected to improve in subsequent updates. Very positive evaluations were received for novelty, which is unsurprising given that students are not exposed to MR in their usual clinical rotations. When comparing the UEQ results of the 2 groups, there was no statistically significant difference in the evaluations. As such, we conclude that there was no meaningful interaction between the study design and perceptions of the software.

Implications of the Findings

Deliberate Practice and Experiential Learning

EyelearnMR, being a type of simulation learning, facilitates deliberate practice in a controlled environment and allows for learning through problem-solving [23]. A systematic review by Cook et al [20] found that simulation training in health profession education was consistently associated with large positive effects on outcomes of knowledge, skills, and behaviors and moderate positive effects for patient-related outcomes, cementing its role in health care education.

In the EyelearnMR group, the learning module was presented to the students in a structured and guided fashion in accordance with pedagogical best practices, and they were allowed to practice their examinations at their own pace, repeating specific cases if necessary. Conversely, the control group only had access to traditional clinical skill tutorials and clinical sessions, which typically use either simulated or real patients. These patients may not allow for enough deliberate practice time due to fatigue or time constraints. In a clinical tutorial group setting, the teacher-to-student ratio is typically 1:7, which limits the number of practice attempts each student can make [24].

Reproducibility of Clinical Signs With a Comprehensive Selection of Conditions

Making a diagnosis from physical examination involves applying a set of clinical examination skills consistently, then evaluating the findings and reaching a conclusion. Repeated deliberate practice of the examination steps improves consistency, and exposure to a variety of conditions builds a knowledge base for appropriate diagnosis.

In the control group (and traditional clinical teaching environments), the range of conditions that a student may encounter is unequal across the student population. Simulated patients (often volunteers) are not able to fully replicate critical, life-threatening clinical signs for effective skill transfer, especially for a topic such as ocular examination. The alternative is to have students examine real patients with the actual condition. While this is done opportunistically in the clinic for all students, it is impossible for them to be exposed to the same conditions with the same degree of significant findings. However, in the EyelearnMR group, every student was given the opportunity to examine a defined and comprehensive set of conditions. They were able to fully experience and learn from “patients” displaying consistent and reproducible clinical signs [25]. This facilitated a broad knowledge base on which to build their diagnostic competencies [26]. This is in contrast with the control group, which had to rely on chance encounters with such patients in the clinical setting, likely with varying physical findings.

The ability of the EyelearnMR module to reproduce the same clinical signs allows for truly experiential learning and deliberate practice for the students, increasing the effectiveness of the teaching and knowledge retention. This forms a strong foundation for the understanding of the underlying concepts, which translated to better scores on the final assessment.

Comparison to the Literature

There are existing virtual and augmented reality applications in ophthalmology, but these mainly address surgical simulation, such as cataract surgery training, or anatomical visualization [27]. In contrast, EyelearnMR targets the ocular motility clinical examination, a skill set often neglected in current platforms [28]. It enables users to interact with life-sized, holographic patients with varied and reproducible motility disorders. This provides students with the opportunity to practice their examination techniques repeatedly in a realistic and reproducible environment and may help build their confidence and develop diagnostic skills in a more consistent and reliable way.

Strengths and Limitations

This is a novel study that investigates the use of MR to teach ocular motility testing. To our knowledge, no existing simulator can perform this task. A scoping review by Krutsinger et al [29] on virtual reality–based medical education in ophthalmology summarized its use mainly for surgical training; clinical applications in diagnosis; counseling; and physical examination, including simulated pupil examination, simulation-based slit lamp training, and ophthalmoscopy [23]. As such, this study addresses an important gap by examining the use of MR to teach ocular motility testing, an area not covered by current simulators or virtual reality platforms. By combining real and virtual environments, MR enables learners to appreciate and interact with extraocular movements in 3 dimensions, supporting a deeper understanding of spatial relationships and examination techniques that are difficult to convey through traditional or screen-based simulations.

The technical limitations of the HoloLens 2 headset include a limited field of view and short battery life. The device has a horizontal field of view of 43°, compared to the human field of view of 135°, so users must adjust their head position to keep targets within the device's field of view. On average, the battery sustains only 2 hours of active use; inclusive of setup time, a device would usually be near the end of its battery life by the end of the assessment. Student feedback highlighted issues with tracking accuracy despite the device's capabilities for both hand and eye tracking. Cost is another limitation: HoloLens 2 headsets are expensive, and adequate funding is required to support deployment at scale. Finally, this study evaluated EyelearnMR on a single examination skill (ocular motility) with tightly defined parameters. This was done purposefully to maintain scientific rigor and reduce potential confounders; therefore, the results cannot be extrapolated to other forms of clinical skill examination at this point in time. Future studies will be useful for exploring broader applications of MR in medical education.

MR can also create a potentially dangerous environment if users are not careful; for example, users may not be aware of their surroundings and could trip, fall, or walk into objects. This is more of a concern in virtual reality, where the user is unable to see the actual surroundings, whereas in MR, the user can observe their real environment.

Our cohort consisted of fourth-year medical students from a single medical school rotation, and participation was opt-in. This introduces 2 potential issues. First, the sample size was modest, and group allocation by clinical batch may mean that unmeasured differences between groups (eg, baseline motivation, prior exposure to neuro-ophthalmology, or varying tutorial quality) could influence outcomes. Second, because participation was voluntary, there may have been self-selection bias; students who agreed to participate could differ systematically from those who declined, particularly in interest or confidence in ophthalmology.

In terms of study design, because participation was on an opt-in basis for the students who rotated to the hospital, the recruited sample size was not balanced between the 2 groups: 24 in the control arm and 30 in the EyelearnMR arm. This imbalance could introduce selection bias and reduce statistical power and precision. In addition, although the completed questionnaires were collected back from the students, the study still assumed academic integrity, trusting that students would refrain from sharing the scenarios or questions with future student groups rotating through the ophthalmology posting.

The assessment was administered at different time points for the 2 groups (days 8 and 9 vs days 11 and 12). Although this reflected how EyelearnMR would realistically be integrated into the rotation, we cannot exclude a potential recency effect. However, the assessment targeted application of examination technique and diagnostic reasoning rather than simple recall, and the control group had continuous clinical exposure and tutorials up to the end of the posting.

Although video-based assessments standardize stimuli, they may not fully represent the variability observed in real patients. Subtle motility deficits may appear different when captured on video rather than during a face-to-face examination. Additionally, the bespoke nature of the assessment means that it has not yet undergone formal psychometric validation (eg, reliability testing and item difficulty calibration). This may affect reproducibility if used in different settings or with different cohorts.

It is also important to acknowledge that the observed improvement cannot be attributed solely to the MR software itself. EyelearnMR was delivered during a structured teaching session that incorporated pedagogical best practices, including guided demonstration, self-paced exploration, and opportunities for repeated practice with immediate visual feedback. Any of these components may have contributed to the improved performance. This study was not designed to isolate the individual effects of software features, instructional approach, or practice opportunities, and therefore, the greatest effect on learning cannot be determined from our data alone.

Conclusions

On the basis of this study, EyelearnMR combined with traditional teaching is noninferior to traditional teaching alone. It provided a comparable experience and equally supported learning objectives. It is an effective supplementary teaching tool in ophthalmic education and may confer additional learning benefits beyond those of a traditional clinical posting, especially in clinical examination technique.

Funding

This study received funding support from the Centre for Development of Teaching and Learning, National University of Singapore, via a Teaching Enhancement Grant. The funding source was not involved in the study design; collection, analysis, and interpretation of the data; writing of the report; or decision to submit the paper for publication.

Data Availability

The datasets generated or analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

Marcus and Dayna contributed equally as first authors. Victor and Clement contributed equally as last authors.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Video assessment questions and the user experience questionnaire.

DOCX File, 192 KB

  1. Gerup J, Soerensen CB, Dieckmann P. Augmented reality and mixed reality for healthcare education beyond surgery: an integrative review. Int J Med Educ. Jan 18, 2020;11:1-18. [CrossRef] [Medline]
  2. Bogomolova K, Sam AH, Misky AT, et al. Development of a virtual three‐dimensional assessment scenario for anatomical education. Anat Sci Educ. May 2021;14(3):385-393. [CrossRef] [Medline]
  3. Pennefather P, Krebs C. Exploring the role of xR in visualisations for use in medical education. Adv Exp Med Biol. 2019;1171:15-23. [CrossRef] [Medline]
  4. Wish-Baratz S, Crofton AR, Gutierrez J, Henninger E, Griswold MA. Assessment of mixed-reality technology use in remote online anatomy education. JAMA Netw Open. Sep 1, 2020;3(9):e2016271. [CrossRef] [Medline]
  5. Ruthberg JS, Tingle G, Tan L, et al. Mixed reality as a time-efficient alternative to cadaveric dissection. Med Teach. Aug 2020;42(8):896-901. [CrossRef] [Medline]
  6. Martin G, Koizia L, Kooner A, et al. Use of the HoloLens2 mixed reality headset for protecting health care workers during the COVID-19 pandemic: prospective, observational evaluation. J Med Internet Res. Aug 14, 2020;22(8):e21486. [CrossRef] [Medline]
  7. Moro C, Štromberga Z, Raikos A, Stirling A. The effectiveness of virtual and augmented reality in health sciences and medical anatomy. Anat Sci Educ. Nov 2017;10(6):549-559. [CrossRef] [Medline]
  8. Tepper OM, Rudy HL, Lefkowitz A, et al. Mixed reality with HoloLens: where virtual reality meets augmented reality in the operating room. Plast Reconstr Surg. Nov 2017;140(5):1066-1070. [CrossRef] [Medline]
  9. Al Janabi HF, Aydin A, Palaneer S, et al. Effectiveness of the HoloLens mixed-reality headset in minimally invasive surgery: a simulation-based feasibility study. Surg Endosc. Mar 2020;34(3):1143-1149. [CrossRef] [Medline]
  10. Zuo Y, Jiang T, Dou J, et al. A novel evaluation model for a mixed-reality surgical navigation system: where Microsoft HoloLens meets the operating room. Surg Innov. Apr 2020;27(2):193-202. [CrossRef] [Medline]
  11. Pratt P, Ives M, Lawton G, et al. Through the HoloLens™ looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels. Eur Radiol Exp. 2018;2(1):2. [CrossRef] [Medline]
  12. Ramesh PV, Joshua T, Ray P, et al. Holographic elysium of a 4D ophthalmic anatomical and pathological metaverse with extended reality/mixed reality. Indian J Ophthalmol. Aug 2022;70(8):3116-3121. [CrossRef] [Medline]
  13. Kumar N, Pandey S, Rahman E. A novel three-dimensional interactive virtual face to facilitate facial anatomy teaching using Microsoft HoloLens. Aesthetic Plast Surg. Jun 2021;45(3):1005-1011. [CrossRef] [Medline]
  14. Maniam P, Schnell P, Dan L, et al. Exploration of temporal bone anatomy using mixed reality (HoloLens): development of a mixed reality anatomy teaching resource prototype. J Vis Commun Med. Jan 2020;43(1):17-26. [CrossRef] [Medline]
  15. Robinson BL, Mitchell TR, Brenseke BM. Evaluating the use of mixed reality to teach gross and microscopic respiratory anatomy. Med Sci Educ. Aug 18, 2020;30(4):1745-1748. [CrossRef] [Medline]
  16. Silvero Isidre A, Friederichs H, Müther M, Gallus M, Stummer W, Holling M. Mixed reality as a teaching tool for medical students in neurosurgery. Medicina (Kaunas). Sep 26, 2023;59(10):1720. [CrossRef] [Medline]
  17. Muangpoon T, Haghighi Osgouei R, Escobar-Castillejos D, Kontovounisios C, Bello F. Augmented reality system for digital rectal examination training and assessment: system validation. J Med Internet Res. Aug 13, 2020;22(8):e18637. [CrossRef] [Medline]
  18. Schoeb DS, Schwarz J, Hein S, et al. Mixed reality for teaching catheter placement to medical students: a randomized single-blinded, prospective trial. BMC Med Educ. Dec 16, 2020;20(1):510. [CrossRef] [Medline]
  19. Minty I, Lawson J, Guha P, et al. The use of mixed reality technology for the objective assessment of clinical skills: a validation study. BMC Med Educ. Aug 23, 2022;22(1):639. [CrossRef] [Medline]
  20. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA. Sep 7, 2011;306(9):978-988. [CrossRef] [Medline]
  21. Laugwitz B, Held T, Schrepp M. Construction and evaluation of a user experience questionnaire. Presented at: Proceedings of the 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society; Nov 20-21, 2008. [CrossRef]
  22. Hinderks A, Schrepp M, Domínguez Mayo FJ, Escalona MJ, Thomaschewski J. Developing a UX KPI based on the user experience questionnaire. Comput Stand Interfaces. Jul 2019;65:38-44. [CrossRef]
  23. McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med. Jun 2011;86(6):706-711. [CrossRef] [Medline]
  24. Yardley S, Teunissen PW, Dornan T. Experiential learning: transforming theory into practice. Med Teach. 2012;34(2):161-164. [CrossRef] [Medline]
  25. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. Jan 2005;27(1):10-28. [CrossRef] [Medline]
  26. Botezatu M, Hult H, Tessma MK, Fors U. Virtual patient simulation: knowledge gain or knowledge loss? Med Teach. 2010;32(7):562-568. [CrossRef] [Medline]
  27. Ong CW, Tan MC, Lam M, Koh VT. Applications of extended reality in ophthalmology: systematic review. J Med Internet Res. Aug 19, 2021;23(8):e24152. [CrossRef] [Medline]
  28. Iskander M, Ogunsola T, Ramachandran R, McGowan R, Al-Aswad LA. Virtual reality and augmented reality in ophthalmology: a contemporary prospective. Asia Pac J Ophthalmol (Phila). 2021;10(3):244-252. [CrossRef] [Medline]
  29. Krutsinger BL, Moore JC. Virtual reality-based medical education in ophthalmology: a scoping review. Cureus. May 2025;17(5):e83845. [CrossRef] [Medline]


MR: mixed reality
UEQ: User Experience Questionnaire


Edited by Blake Lesselroth; submitted 15.Jan.2025; peer-reviewed by Boluwatife Afolabi, Rafael Scherer; final revised version received 13.Nov.2025; accepted 25.Nov.2025; published 03.Mar.2026.

Copyright

© Chun Jin Marcus Tan, Wei Wei Dayna Yong, Hui'En Hazel Anne Lin, Jaslyn Oh, How Sheng Rubin Yong, Fang Mei Jayme Khew, Liang Shen, Yujia Gao, Wei Chieh Alfred Kow, Yih Chung Tham, Dianbo Liu, Ching-Yu Cheng, Kee Yuan Ngiam, Yew Sen Yuen, Ray Manotosh, Eng Tat Khoo, Teck Chang Victor Koh, Woon Teck Clement Tan. Originally published in JMIR Medical Education (https://mededu.jmir.org), 3.Mar.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.