Published in Vol 8, No 1 (2022): Jan-Mar

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/34751, first published .
Real-life Evaluation of an Interactive Versus Noninteractive e-Learning Module on Chronic Obstructive Pulmonary Disease for Medical Licentiate Students in Zambia: Web-Based, Mixed Methods Randomized Controlled Trial

Original Paper

1Heidelberg Institute of Global Health (HIGH), Faculty of Medicine and University Hospital, Heidelberg University, Heidelberg, Germany

2School of Medicine and Clinical Sciences, Levy Mwanawasa Medical University, Lusaka, Zambia

3SolidarMed, Lusaka, Zambia

Corresponding Author:

Elena Schnieders

Heidelberg Institute of Global Health (HIGH)

Faculty of Medicine and University Hospital

Heidelberg University

Im Neuenheimer Feld 672

Heidelberg, 69120

Germany

Phone: 49 6221 564904

Email: E.Schnieders@stud.uni-heidelberg.de


Background: e-Learning for health professionals in many low- and middle-income countries (LMICs) is still in its infancy, but with the advent of COVID-19, a significant expansion of digital learning has occurred. Asynchronous e-learning can be grouped into interactive (user-influenceable content) and noninteractive (static material) e-learning. Studies conducted in high-income countries suggest that interactive e-learning is more effective than noninteractive e-learning in increasing learner satisfaction and knowledge; however, there is a gap in our understanding of whether this also holds true in LMICs.

Objective: This study aims to validate the hypothesis above in a resource-constrained and real-life setting to understand e-learning quality and delivery by comparing interactive and noninteractive e-learning user satisfaction, usability, and knowledge gain in a new medical university in Zambia.

Methods: We conducted a web-based, mixed methods randomized controlled trial at the Levy Mwanawasa Medical University (LMMU) in Lusaka, Zambia, between April and July 2021. We recruited medical licentiate students (second, third, and fourth study years) via email. Participants were randomized to undergo asynchronous e-learning with an interactive or noninteractive module for chronic obstructive pulmonary disease and informally blinded to their group allocation. The interactive module included interactive interfaces, quizzes, and a virtual patient, whereas the noninteractive module consisted of PowerPoint slides. Both modules covered the same content scope. The primary outcome was learner satisfaction. The secondary outcomes were usability, short- and long-term knowledge gain, and barriers to e-learning. The mixed methods study followed an explanatory sequential design in which rating conferences delivered further insights into quantitative findings, which were evaluated through web-based questionnaires.

Results: Initially, 94 participants were enrolled in the study, of whom 41 (44%; 18 intervention participants and 23 control participants) remained in the study and were analyzed. There were no significant differences in satisfaction (intervention: median 33.5, first quartile 31.3, third quartile 35; control: median 33, first quartile 30, third quartile 37.5; P=.66), usability, or knowledge gain between the intervention and control groups. Challenges in accessing both e-learning modules led to many dropouts. Qualitative data suggested that the content of the interactive module was more challenging to access because of technical difficulties and individual factors (eg, limited experience with interactive e-learning).

Conclusions: We did not observe an increase in user satisfaction with interactive e-learning. However, this finding may not be generalizable to other low-resource settings because the post hoc power was low, and the e-learning system at LMMU has not yet reached its full potential. Consequently, technical and individual barriers to accessing e-learning may have affected the results, mainly because the interactive module was considered more difficult to access and use. Nevertheless, qualitative data showed high motivation and interest in e-learning. Future studies should minimize technical barriers to e-learning to further evaluate interactive e-learning in LMICs.

JMIR Med Educ 2022;8(1):e34751

doi:10.2196/34751


Introduction

Background

Medical education in sub-Saharan Africa (SSA) has expanded significantly in the last 3 decades as countries in the region have tried to address the critical shortfall of key health workers [1]. However, several factors threaten to impede developments on this front. These include a lack of teaching infrastructure, a shortage of adequately trained medical teaching staff, and the challenges many health professionals face as they attempt to balance heavy teaching workloads with priorities in clinical practice [1]. Another factor that affects advances in training clinicians is brain drain: health professionals with critical teaching skills and experience relocate to high-income countries (HICs) in pursuit of better remuneration and employment conditions [2]. Given these systemic challenges, there is a critical need to find ways to improve the educational and teaching experiences of students and lecturers in low-income settings, and e-learning has been explored as a catalyst for doing so [3].

e-Learning is considered as potent as traditional classroom learning alone in a low-resource context [4], and it offers several benefits. For instance, materials can be accessed at any time and in any geographic location with an internet connection, content may be available for offline access after download, and materials can be studied at the student’s own pace [5,6]. Furthermore, e-learning access is scalable, which facilitates teaching large numbers of students, and updating the content is more efficient [6]. e-Learning is considered potentially cost-effective owing to reduced costs of instruction, travel, and classroom infrastructure [5-7]. However, the initial implementation and running costs of e-learning are high, which can be a challenge, especially in low- and middle-income countries (LMICs) [4,5,8]. Often, e-learning in LMICs does not progress past the pilot stage because the e-learning approach is not adapted to the individual needs of the institution and is frequently not implemented sustainably, a phenomenon coined pilot-itis [8].

As with traditional classroom learning, e-learning is a heterogeneous learning method; there are different ways of learning on the web. One key distinction is between interactive and noninteractive e-learning. Interactive e-learning is defined as content that reacts to a learner’s actions [9]. Examples of interactive e-learning include quizzes, interactive interfaces, virtual patients, and serious games. Virtual patients engage learners in interactive clinical scenarios with a virtual person to teach clinical reasoning skills [10]. Serious games are technology-based games designed to teach a certain skill or mindset or to provide information [11]. Noninteractive e-learning, on the other hand, is defined as learning through static, nonresponsive web-based resources, such as PowerPoint slides without interactive elements, PDF scripts, or videos [3,12].

In health education research, interactive e-learning is often deemed more effective than noninteractive e-learning. Several studies in HICs have shown a positive effect of interactive e-learning on user satisfaction or knowledge compared with noninteractive e-learning [13-19]. In addition, knowledge frequently increases when user satisfaction is high [13,14,18]. However, studies comparing an interactive with a noninteractive e-learning method for health care personnel in LMICs are rare, which makes it difficult for lecturers and other stakeholders to judge the effectiveness of interactive e-learning in LMICs. A study conducted in Colombia, an upper-middle–income country, compared learning on the commonly used e-learning platform Moodle with learning using an interactive intelligent tutor system. The latter fared better in the evaluation of medical students’ knowledge, learning efficiency, and usability [20].

An e-learning system for medical licentiate (ML) students was set up in 2016 at the Chainama College of Health Sciences in Lusaka, Zambia, which is now part of the Levy Mwanawasa Medical University (LMMU). In addition, third- and fourth-year students received tablets to facilitate e-learning access [21,22]. The e-learning system was then assessed using a mixed methods format and considered functional in these settings. However, the program faced some challenges, as students’ and lecturers’ use of the e-learning platform was low. Possible explanations were the low quality of the tablets used and insufficient training with the technology. Another shortcoming was the low availability of diverse and multimedia e-learning content, as mainly noninteractive materials were available [21-23].

This study aims to contribute to the multimedia e-learning content at the LMMU by providing targeted e-learning materials on chronic obstructive pulmonary disease (COPD). COPD is a noncommunicable, chronic but preventable disease that ranks seventh worldwide in years of life lost [24,25]. Of the 196 million people aged >40 years in SSA, approximately 26 million were estimated to have COPD in 2010, and the literature suggests that >80% of COPD deaths worldwide occur in LMICs [26,27]. To treat COPD, health care workers need to be aware of the disease, its diagnosis and management, and adequate guidelines, such as the international guidelines of the Global Initiative for Chronic Obstructive Lung Disease (GOLD) [25]. However, this is not sufficient, as COPD is mostly underrepresented in medical education in SSA, leading to COPD underdiagnosis [28-32]. Improved COPD education for health care workers in low-resource settings is essential, as smoking and old age, the disease’s key cause and risk factor, respectively, have been increasing in LMICs, predicting growth in COPD cases [25].

Study Objectives

The overarching objective of this web-based study was to compare learning outcomes from an interactive and a noninteractive e-learning module on COPD for ML students using a mixed methods randomized controlled trial (RCT). The aim was to improve the understanding of real-life e-learning quality and delivery at the LMMU. The primary outcome for this study was user satisfaction, and the secondary outcomes were usability, short- and long-term knowledge gain, and barriers to e-learning access for ML students. These outcomes were determined quantitatively by web-based questionnaires and qualitatively by web-based rating conferences that explored how students experienced e-learning. On the basis of findings from previous studies, we hypothesized that an interactive e-learning module would be more effective in increasing learners’ satisfaction and knowledge gain than a noninteractive module. It should be noted that most previous studies were conducted in HICs and not in a low-income setting.


Methods

Overview

This study adheres to the CHERRIES (Checklist for Reporting Results of Internet E-Surveys) checklist and the CONSORT-EHEALTH (Consolidated Standards of Reporting Trials of Electronic and Mobile Health Applications and Online Telehealth) guidelines for reporting eHealth and mobile health RCTs (Multimedia Appendix 1) [33,34]. Qualitative data results are presented according to the COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist [35]. This mixed methods study used an explanatory sequential design in which qualitative findings were used to clarify quantitative results.

Study Setting and Design

The RCT, with a 1:1 allocation ratio, took place in Zambia, a lower-middle–income country. The trial was conducted on the web at the LMMU in Lusaka, Zambia, for 11 weeks between April and July 2021. The LMMU was established in 2018 and has become the largest health training institution in the country and its fourth public university [36]. e-Learning at the Chainama College of Health Sciences, now part of the LMMU, was successfully implemented in 2016/2017. The study design aimed to evaluate interactive and noninteractive e-learning in a real-life setting, meaning that no study-specific e-learning training was provided [21-23].

Ethics

The study protocol was approved by the ethics committee of Heidelberg University and the local ethics committee of the LMMU (Heidelberg S-691/2020; LMMU 00007/20). In accordance with the criteria of the International Committee of Medical Journal Editors, the trial was not registered [37].

Study Sample

ML students in their second, third, and fourth year of the Bachelor of Clinical Sciences program at the LMMU were invited to participate. It was assumed that existing knowledge on COPD was low and that all students had computer literacy, as the technology experience of ML students was assessed to be moderate in 2017 [23]. As there were only approximately 200 ML students in the second, third, and fourth year in the Bachelor of Clinical Sciences program at the LMMU, instead of a sample size calculation, a convenience sample of all eligible students was chosen. A sample size of approximately 50 participants was deemed feasible, considering consent and attrition rates.

Study Materials

Development and Testing

With the aid of FN, who received training at the center for key competencies in didactics at the Heidelberg University, ES developed both e-learning modules. The modules were then uploaded for asynchronous use on the e-learning platform Moodle. Given the e-learning implementation in 2016/2017, it was assumed that all students had access to the e-learning platform and electronic devices [21-23]. Three study team members (FN, PA, and ES) tested the web-based e-learning material before the trial on different digital devices, such as desktop computers and smartphones. Changes were incorporated before the start of the study, and no further changes were made.

Content

Both modules contained key information from the GOLD report 2021, specialist literature, and pulmonological experts [25,38,39]. The GOLD report is an annually published document that summarizes global information on COPD based on the latest scientific literature. Essential knowledge on COPD definition, epidemiology, etiology, symptoms, diagnosis, severity assessment, differential diagnosis, therapy, and prognosis was included in the e-learning modules at the appropriate level according to the curriculum of the ML program. Both modules were ensured to comprise the same content scope by continuously comparing slides on subtopics and copying information from one module to the other.

Standard Material—Noninteractive

The noninteractive e-learning module on COPD for the control group included an average of 5 bullet points per slide with several figures and tables (see Multimedia Appendix 2 for screenshots of the noninteractive module).

Interactive Material

The intervention group was provided access to an interactive e-learning module with voice-over, designed with iSpring Suite (see Multimedia Appendix 3 for screenshots of the interactive module) [40]. The interactive module comprised a simple interactive environment that allowed the user to control the representation of information and receive predetermined feedback on activities [9]. In more detail, the interactive module included the following items, sorted from representation control to obtaining feedback: interactive interfaces, including drag and drop options; interactive X-ray images to be explored with the cursor; a puzzle; a 10-step virtual patient with different question paths representing a typical COPD exacerbation case; and 3 short multiple-choice quizzes for which participants received feedback. Furthermore, the principles of adult learning by Taylor and Hamdy [41] were incorporated into the module. For example, the learner had to complete certain tasks several times, which challenged existing knowledge on COPD and might have put the learner in a dissonance phase, as existing knowledge might have been incomplete. This dissonance phase was followed by a refinement phase in which the learner received information on the problem's solution.

Outcome Measures

Overview

The primary outcome was learner satisfaction based on a comparison of the interactive and noninteractive e-learning modules. The secondary outcomes were system usability and short- and long-term knowledge gain. After study initiation, an additional outcome (identified barriers to asynchronous e-learning) was added, as feedback from participants revealed usability issues. These end points were determined quantitatively with questionnaires using a web-based survey tool and qualitatively by 2 rating conferences conducted via Zoom [42-44].

Quantitative

The usability of the web-based survey tool, including internet use and the comprehensibility of the questionnaires, was tested by a local study team member before study onset, and changes were made accordingly [43]. The user satisfaction questionnaire contained 8 questions on a 5-point Likert scale displayed on 1 page; therefore, a maximum of 40 points was achievable for overall user satisfaction. The usability of the modules was tested with the System Usability Scale (SUS), a validated usability score ranging from 0 to 100, in which 68 can be interpreted as average according to a curved grading scale [44]. The two tests assessing short-term (knowledge gain test 1 [KT1]) and long-term (knowledge gain test 2 [KT2]) knowledge gain were each composed of 15 multiple-choice questions, with each question counting as 1 point. The knowledge questionnaires displayed 1 question per page, resulting in 15 pages per test. All knowledge questions were answerable from the module content; they were partly derived from German medical examinations because questions on COPD from previous Zambian medical examinations were not available. Answers could be reviewed and changed with the back button. All questions in the web-based questionnaires had to be completed to submit the results. The questionnaires can be found in Multimedia Appendix 4.
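For reference, the standard SUS scoring scheme can be reproduced in a few lines of R (the language used for this study's statistical analysis). This is a minimal sketch of the published 10-item SUS calculation, not the study's own code, and the example response vector is hypothetical:

```r
# Standard SUS scoring (Brooke's 10-item scale): odd-numbered items are
# positively worded (score - 1), even-numbered items are negatively worded
# (5 - score); the 0-40 sum is rescaled to 0-100 by multiplying by 2.5.
sus_score <- function(responses) {
  stopifnot(length(responses) == 10, all(responses %in% 1:5))
  odd  <- responses[c(1, 3, 5, 7, 9)] - 1
  even <- 5 - responses[c(2, 4, 6, 8, 10)]
  (sum(odd) + sum(even)) * 2.5
}

sus_score(c(4, 2, 4, 1, 5, 2, 4, 2, 4, 3))  # hypothetical participant: 77.5
```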

Qualitative

When assessing a teaching intervention, qualitative data from rating conferences can shed light on the quantitative findings. This method is based on school quality assessments. The results of quantitative evaluation data are displayed to a representative group of up to 12 students, and the following discussion provides in-depth insight into individual motivations and opinions of the participants [42].

Quantitative Evaluation

With the support of local study team members (AS and PA) and the local study coordinator (MM), the principal investigator (ES) conducted recruitment, randomization, and the actual implementation of the trial from Germany. This was possible because, owing to the COVID-19 pandemic, the entire trial was conducted on the web.

Recruitment and Randomization

All eligible students were invited to participate via email on April 20, 2021, and recruitment continued until April 30, 2021 (see Multimedia Appendix 5 for the study information sheet). Email addresses were obtained from the ML course coordinator (AS). Compensation for study participation and study-related internet use was an airtime voucher worth 200 Zambian Kwacha (US $10.90), to be received at the end of the entire study period.

Students willing to participate provided informed consent via email. Afterward, all participating students were randomized equally into the intervention and control groups using the random number function in Excel (Microsoft Corporation) and a blocked randomization list with a block size of 2 participants [45].
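As an illustration, the following R sketch reproduces the logic of blocked randomization with a block size of 2 as described above; the Excel procedure itself is not available, so this is an assumed equivalent, and the seed is a placeholder:

```r
# Blocked randomization, block size 2: each block contains one intervention
# and one control slot in random order, keeping group sizes balanced.
set.seed(2021)  # placeholder seed for reproducibility
blocked_allocation <- function(n_blocks) {
  unlist(lapply(seq_len(n_blocks), function(i) {
    sample(c("intervention", "control"))  # shuffle within each block of 2
  }))
}

allocation <- blocked_allocation(47)  # 47 blocks of 2 cover 94 participants
table(allocation)                     # 47 intervention, 47 control
```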

Participants were informally blinded to their group allocation for the first part of the study, as it was not stated in the information sheet which e-learning methods were being compared.

Phase 1: Evaluation of Satisfaction, Usability, and Short-term Knowledge Gain

Following randomization, the participants were invited on May 1, 2021, to complete their respective e-learning module, which was accessible through their e-learning platform account. The module could be studied asynchronously via the e-learning platform and its application. Study participants only had access to their respective e-learning modules. Participants were informed that completing the e-learning module and filling in the questionnaires would take approximately 45 minutes, but no time limit was set. Participation reminders were sent on May 7, 11, and 20, 2021, and were also relayed by class representatives through local study team members. The e-learning platform was down for a few hours on May 4 and 7, 2021, but participants were given until May 31, 2021, to complete these tasks. The principal investigator could be contacted via email, and a local study coordinator was available throughout the study. In the case of unresolvable technical difficulties with the e-learning platform or the internet, individual students were sent a link to their respective e-learning module uploaded onto the cloud, whereas students in the control group received a PDF file [46]. The latter was not possible for students in the intervention group, as the interactive presentation could not be saved as a PDF file. Study participants receiving the cloud link or PDF file were asked not to share them with other participants.

After finishing the module, each participant was directly invited to complete the user satisfaction, SUS, and KT1 questionnaires on the web [43]. Participants stated their study ID in the web-based questionnaires to protect personal data. They were asked not to use the presentation or any other additional help to answer the questions. As log-in information to the e-learning module was not verified, participants who filled out the web-based questionnaires were considered to have completed their respective e-learning modules.

Participants who dropped out of the study because they could not complete phase 1 were labeled initial study dropouts, whereas participants who completed it were first-part participants. First-part participants were categorized as early responders if they completed the module directly or after 1 reminder and as late responders if they completed the module and survey after 2 or more reminders.

Phase 2: Evaluation of Long-term Knowledge Gain

Four weeks after phase 1, on June 28, 2021, the first-part participants were invited to complete the KT2 [43]. They were asked not to use the e-learning module or any other resource for help.

Data Extraction

Pseudonymized data from the web-based questionnaires were automatically transferred to an Excel spreadsheet, thereby maintaining data integrity and security, and then prepared for statistical computing.

Analysis
Overview

Statistical analysis of the quantitative data was performed using the programming language R (version 4.0.3; R Foundation for Statistical Computing) and the packages psych and likert [47,48]. A P value <.05 was considered statistically significant. Cohen d was assessed using a web-based tool, and a post hoc power analysis was performed with the program G*Power (version 3.1; Erdfelder, Faul, and Buchner) [49].
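For illustration, both of these steps can also be reproduced within R itself. The sketch below substitutes the pwr package for the web-based tool and G*Power named above, and all summary statistics passed in are placeholders rather than study data:

```r
# Cohen d from two group summaries (pooled SD), followed by post hoc power
# for a two-sample comparison with unequal group sizes.
library(pwr)

cohens_d <- function(m1, m2, sd1, sd2, n1, n2) {
  sd_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  (m1 - m2) / sd_pooled
}

# Placeholder group means and SDs, with the study's group sizes (18 vs 23)
d <- cohens_d(m1 = 33.2, m2 = 33.6, sd1 = 3.9, sd2 = 4.1, n1 = 18, n2 = 23)
pwr.t2n.test(n1 = 18, n2 = 23, d = abs(d), sig.level = .05)$power
```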

Characteristics of Study Participants

Only participants who completed the web-based questionnaires, and were therefore considered to have completed their respective e-learning module, were analyzed for primary and secondary outcomes, resulting in a modified intention-to-treat analysis. Characteristics of first-part participants and initial study dropouts, as well as characteristics of rating conference participants, were compared using 2-tailed t tests and corresponding chi-square tests.
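A minimal R sketch of these baseline comparisons follows; the data frame participants and its columns are hypothetical stand-ins for the study data:

```r
# Hypothetical baseline data; `completed` marks first-part participants
# (n=41) versus initial study dropouts (n=53).
set.seed(1)
participants <- data.frame(
  age       = round(rnorm(94, mean = 24, sd = 5)),
  sex       = sample(c("female", "male"), 94, replace = TRUE),
  completed = rep(c(TRUE, FALSE), times = c(41, 53))
)

t.test(age ~ completed, data = participants)                 # 2-tailed t test
chisq.test(table(participants$sex, participants$completed))  # chi-square test
```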

Quantitative Comparison of the Two Modules

Differences in questionnaire results between the intervention and control groups were evaluated using the Mann-Whitney U test. The difference between each group's two knowledge gain test scores was calculated using a paired Wilcoxon test.
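Both tests are available through base R's wilcox.test, as sketched below with hypothetical score vectors (an unpaired call gives the Mann-Whitney U test; paired = TRUE gives the paired Wilcoxon signed-rank test):

```r
# Between-group comparison (Mann-Whitney U test) and within-group comparison
# of the two knowledge tests (paired Wilcoxon signed-rank test). Integer
# scores may produce tie warnings, which is expected here.
set.seed(1)
kt1_intervention <- sample(0:15, 18, replace = TRUE)  # hypothetical scores
kt1_control      <- sample(0:15, 23, replace = TRUE)
wilcox.test(kt1_intervention, kt1_control)

kt1 <- sample(0:15, 39, replace = TRUE)  # hypothetical paired scores
kt2 <- sample(0:15, 39, replace = TRUE)
wilcox.test(kt1, kt2, paired = TRUE)
```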

Factors Influencing Satisfaction, Usability, and Knowledge

We used linear regression, the Mann-Whitney U test, and the Kruskal-Wallis test to analyze whether several factors influenced overall user satisfaction, system usability, and knowledge gain test scores. If a factor with >2 subgroups, such as study year (second, third, and fourth), had a statistically significant influence on a questionnaire result, multiple pairwise comparisons were calculated using the R function pairwise.wilcox.test.
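A sketch of this two-step subgroup analysis is shown below, assuming a hypothetical data frame df; the Holm correction shown is R's default for pairwise.wilcox.test and is an assumption, as the article does not name the adjustment method used:

```r
# Omnibus Kruskal-Wallis test across >2 subgroups, followed by pairwise
# Wilcoxon comparisons if the omnibus result is significant.
set.seed(1)
df <- data.frame(
  sus        = round(runif(41, min = 30, max = 90)),    # hypothetical SUS scores
  study_year = factor(sample(2:4, 41, replace = TRUE))  # study years 2-4
)

kruskal.test(sus ~ study_year, data = df)
pairwise.wilcox.test(df$sus, df$study_year, p.adjust.method = "holm")
```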

Qualitative Evaluation

Overview

ES recruited rating conference participants by email and acted as the moderator. Before the study commenced, she had no relationship with the rating conference participants. Four additional study team members, including AB and FN, were present for transcription purposes; they took field notes and made audio recordings, which ES later used for transcription.

Recruitment

Approximately 2 weeks after the first study period, on June 16 and 17, 2021, a total of 2 rating conferences took place via Zoom. More than half of the first-part participants (24/41, 59%) were invited to achieve sufficient data saturation. Participants in the rating conferences were purposively sampled to be representative of the overall study population that completed the first part of the study. The purposive sampling was stratified for each allocation group according to sex, time of participation (early vs late responders), and age (<25 years and >25 years), as the mean age of first-part participants (24.3, SD 4.8 years) was approximately 25 years. Rating conference participants received an additional airtime voucher (200 Zambian Kwacha) as compensation.

Phase 3: Conducting the Rating Conferences

Quantitative results from the web-based questionnaires were presented in the rating conferences, which lasted 60 minutes each. The ensuing discussion was semistructured into four parts: satisfaction, usability, knowledge, and e-learning. Each part commenced with open questions from the moderator, followed by probes where appropriate. The semistructured interview guide was not pilot-tested; it was, however, internally reviewed, and a final version was agreed upon by the research team. An active discussion among all participants of the rating conferences was encouraged.

Analysis

The principles for coding and analyzing the data were determined in advance. Codes and themes were not dependent on their prevalence in the entire data set but were established through their salience in the data. The analysis focused on a detailed description of the data using inductive, data-driven analysis. Semantic rather than latent themes were identified, and the analysis was approached in a realist manner, implying that what was said was directly linked to its meaning [50]. The data were analyzed using thematic analysis according to Braun and Clarke [50]. FR and ES examined the data set and identified codes and themes, which were structured into a preliminary coding tree using the NVivo program (version 12; QSR International). The coding tree was then finalized through continuous review of the data set, codes, and themes and an ongoing discussion between the two researchers responsible for data analysis. Afterward, both researchers independently used the final coding tree to code the data set again, and any discrepancies were discussed collaboratively. The final coding tree comprised the following structure: comparison of the e-learning modules regarding satisfaction, usability, and knowledge; access to the e-learning material; opinions on e-learning and improvement suggestions; and study limitations.


Results

Quantitative Evaluation of Phases 1 and 2

Characteristics of Study Participants

In total, 202 ML students, predominantly in their second year of study, were identified as eligible for participation. Of these, 47% (94/202) of the students signed up for the study. Ultimately, 44% (41/94) of these students participated in the first part of the study and were analyzed. The participant flow and reasons why enrolled participants did not complete the e-learning module and questionnaires (initial study dropouts) are shown in Figure 1. If a student had filled out the web-based questionnaire, it was assumed that they had also received their allocated intervention or control. In all, 2 students in the intervention group later reported in the rating conference that they had switched groups; however, a post hoc sensitivity analysis that excluded these 2 students revealed no differences in the outcomes. The KT2 was completed by 39 first-part participants, as 2 students were lost to follow-up. All participants who started filling out web-based questionnaires also completed them.

Table 1 shows the characteristics of first-part participants, initial study dropouts, and first-part participants in the intervention and control groups. Female students were significantly overrepresented among those who enrolled but did not complete the first part of the study. Apart from that, characteristics did not differ significantly.

Figure 1. CONSORT (Consolidated Standards of Reporting Trials) 2010 flow diagram. KT1: knowledge gain test 1; KT2: knowledge gain test 2; SUS: System Usability Scale; US: user satisfaction.
Table 1. Characteristics of first-part participants, initial study dropouts, and first-part participants in intervention and control groups.

| Characteristics | First-part participants (n=41) | Initial study dropouts (n=53) | P value | Intervention (n=18) | Control (n=23) |
| --- | --- | --- | --- | --- | --- |
| Age (years), mean (SD) | 24.3 (4.8) | 23.4 (5.4), n=40 | .44 | 23.6 (3.5) | 24.9 (5.7) |
| Sex (female), n (%) | 14 (34) | 34 (64) | .007 | 6 (33) | 8 (35) |
| Group (intervention), n (%) | 18 (44) | 29 (55) | .41 | N/A^a | N/A |
| Study year, n (%) | | | .53 | | |
| Year 2 | 29 (71) | 40 (75) | | 14 (78) | 15 (65) |
| Year 3 | 5 (12) | 8 (15) | | 2 (11) | 3 (13) |
| Year 4 | 7 (17) | 5 (9) | | 2 (11) | 5 (22) |

^a Not applicable.

Quantitative Comparison of the Two Modules
Primary Outcome: User Satisfaction

Results for user satisfaction were not statistically different between the intervention and control groups (Table 2). Bar plots of each user satisfaction question result for both groups are shown in Figure 2. Figure 3 depicts, among other things, the overall user satisfaction scores of the intervention and control groups in a box plot.

Table 2. Questionnaire results.

| Parameters | Intervention (n=18) | Control (n=23) | P value |
| --- | --- | --- | --- |
| User satisfaction (n=41), median (Q1^a, Q3^b) | 33.5 (31.3, 35) | 33 (30, 37.5) | .66 |
| System Usability Scale (n=41), median (Q1, Q3) | 65 (50.6, 76.9) | 70 (57.5, 76.3) | .36 |
| KT1^c (n=41), median (Q1, Q3) | 5.5 (4, 9.3) | 7 (5, 9) | .26 |
| Self-reported time for e-learning module (minutes; n=41), median (Q1, Q3) | 51.5 (45, 60) | 55 (40, 63) | .92 |
| KT2^d (n=39), median (Q1, Q3) | 6 (3, 7) | 6 (3.3, 7.8) | .88 |
| KT^e difference (test 1−2; n=39), median (Q1, Q3) | 0.5 (−2, 3) | 0 (−1, 5) | .58 |

^a Q1: first quartile.

^b Q3: third quartile.

^c KT1: knowledge gain test 1.

^d KT2: knowledge gain test 2.

^e KT: knowledge gain test.

Figure 2. Results of user satisfaction questions of the intervention and control groups in percentages. Q1: I enjoyed the module. Q2: I am satisfied with the module. Q3: My COPD knowledge increased significantly. Q4: My interest in COPD increased. Q5: Module’s key messages were clear. Q6: Module is relevant for medical practice. Q7: It was easy to learn with the module. Q8: I would recommend the module to a friend. C: control; COPD: chronic obstructive pulmonary disease; I: intervention; Q: question.
Figure 3. Box plots of different questionnaire results of the intervention and control groups. Box plots show median, first quartile, third quartile, minimum, maximum, and outliers. KT1: knowledge gain test 1; KT2: knowledge gain test 2; SUS: System Usability Scale; US: user satisfaction.
Secondary Outcomes: Usability and Knowledge Gain

The SUS and KT1 scores and the self-reported time spent learning with the e-learning module did not differ statistically significantly between the intervention and control groups (Table 2). However, the data indicated that the intervention group reported slightly lower system usability and received a slightly lower KT1 score. In addition, there were no statistically significant differences in the KT2 scores and knowledge test score differences between the two groups. The sample size for these 2 analyses was 39, as 2 participants were lost to follow-up. Furthermore, each group had a knowledge test score difference close to 0, and the analysis confirmed that the KT1 and KT2 scores of each group were not significantly different. Figure 3 shows the different questionnaire scores of the intervention and control groups in box plots.

Factors Influencing Satisfaction, Usability, and Knowledge

The influence of the following factors on user satisfaction, SUS, KT1, and KT2 scores was evaluated: additional study resources, age, device, participant environment, response time for study participation, sex, study year, and time spent learning (Table 3). The influence of study year on the SUS score was statistically significant. Further analysis revealed a significant difference in SUS scores between second- and fourth-year students, with fourth-year students giving higher SUS scores. In addition, a significant correlation was found between study year and KT1 score. However, when testing multiple pairwise comparisons to determine which study years differed significantly in their KT1 scores, no statistically significant differences were found. Most likely, the difference in KT1 scores between the second and third study years drove the overall significant correlation, as third-year students had a higher median KT1 score than second-year students, and the P value of that combination was the lowest at .06.

Table 3. P values of correlations between different factors and questionnaire results.

| Factors | US^a (n=41) | SUS^b (n=41) | KT1^c (n=41) | KT2^d (n=39) |
| --- | --- | --- | --- | --- |
| Sex | .48 | .37 | .52 | .48 |
| Age | .45 | .24 | .85 | .39 |
| Study year | .17 | .04 | .03 | .11 |
| Response time | .08 | .93 | .88 | .57 |
| Environment | .20 | .35 | .76 | .71 |
| Device | .27 | .44 | .19 | .11 |
| Other resource | .25 | .08 | .07 | .41 |
| Time | .29 | .15 | .06 | .49 |

^a US: user satisfaction.

^b SUS: System Usability Scale.

^c KT1: knowledge gain test 1.

^d KT2: knowledge gain test 2.

Qualitative Evaluation of Phase 3

Characteristics of Study Participants

We invited 24 first-part participants to participate in the rating conferences (see the Methods section for sample size and recruitment procedures), of whom 54% (13/24) replied and participated in the 2 rating conferences. Table 4 shows the characteristics of rating conference participants versus the remaining first-part participants, as well as the characteristics of the two rating conference groups. No statistically significant differences were found among the groups.

Table 4. Characteristics of rating conference participants versus other first-part participants and of the two rating conferences.

| Characteristics | Rating conference participants (n=13) | Other first-part participants (n=28) | P value | Rating conference 1 (n=7) | Rating conference 2 (n=6) | P value |
| --- | --- | --- | --- | --- | --- | --- |
| Age (years), mean (SD) | 26 (6.9) | 23.5 (3.4) | .24 | 26 (6.1) | 26 (8.3) | .99 |
| Group (intervention), n (%) | 8 (62) | 10 (36) | .23 | 4 (57) | 4 (67) | .99 |
| Sex (female), n (%) | 4 (31) | 10 (36) | .99 | 3 (43) | 1 (17) | .68 |
| Study year, n (%) | | | .67 | | | .88 |
| Year 2 | 8 (62) | 21 (75) | | 4 (57) | 4 (67) | |
| Year 3 | 2 (15) | 3 (11) | | 1 (14) | 1 (17) | |
| Year 4 | 3 (23) | 4 (14) | | 2 (29) | 1 (17) | |
Qualitative Results

In addition to their own views, participants also relayed the views of other study participants who were absent from the conferences, with whom they had communicated. As these 2 perspectives did not differ, they are presented together.

Primary Outcome User Satisfaction and Secondary Outcome Usability

Students often reported that it was their first time learning about COPD and expressed gratitude for the opportunity. In the comparison of the two e-learning methods, satisfaction and usability were linked and were therefore assigned to one category. Participants reported challenges in accessing both e-learning modules, resulting in lower satisfaction:

I think it really affected my happiness because I don’t like ending things halfway or something taking that long.
[Participant 7, control, female]

The noninteractive module took a long time to load and sometimes simply crashed during viewing; however, access to the interactive module seemed to be impeded even more, as a few students from that group (3/8, 38%) reported that they were not able to finish learning with it or could not access it at all:

I was not able to get into anything.
[Participant 5, intervention—changed to control, male]

In addition, the interactive software was described as challenging to use once having gained access to it. This was mainly because of technical difficulties, as it was reported (4/8, 50%) that going back and reviewing the interactive e-learning module was difficult, and the graphics were poorly presented on students’ phones.

These challenges in accessing the interactive module led intervention participants to use alternative methods to learn about COPD. Further research on the web (2/8, 25%) and gaining access to the noninteractive group's e-learning module (2/8, 25%) were reported:

I failed to use the interactive instead I managed to access the non-interactive.
[Participant 5, intervention—changed to control, male]

The participants in question did not explain how they gained access to the noninteractive e-learning module. An additional challenge for students in the intervention group was their limited experience with interactive e-learning:

Some people [said]: ‘ah I gave up’ after trying to use it.... Because some of them it was the first time having to use that interactive session. So, some of them didn’t even know they had to actually click some of those things.
[Participant 9, intervention, male]

A few students (2/8, 25%) reported that they would have preferred the noninteractive e-learning module because it was simpler, and the interactive e-learning module was deemed complicated:

I thought like it was a little bit clustered [cluttered]. Like I actually had to search around and see where exactly I have to go back to. So yeah otherwise, other than that I would have actually even preferred to have the PowerPoint one.
[Participant 9, intervention, male]

However, other students (3/8, 38%) declared being satisfied with the interactivity of the intervention module, as “it’s like you are having your lecturer right there” (participant 12, intervention, male). They enjoyed “the imagery parts where you could actually click on things” (participant 10, intervention, male).

Furthermore, rating conference statements showed that there was no gender dimension to accessing interactive e-learning; female and male participants alike struggled to access the interactive module.

Secondary Outcome Knowledge Gain

Both groups regarded the knowledge gain test’s difficulty as adequate. Nevertheless, there was a discrepancy between the participants’ views of its feasibility and the overall outcome. When confronted with the results, some participants (5/13, 39%) viewed the impeded access to the e-learning modules as the reason for the average marks of both groups:

I think the reason why the performance was average is probably because maybe the majority were not able to finish their modules, so I guess.
[Participant 6, control, female]

Nonetheless, more members of the interactive group (4/8, 50%) linked the greater barriers in accessing and using their e-learning module to their lower knowledge gain test results:

I feel that the ones that had the control maybe they had a slightly easier way of going back to certain things that they had to read over.... I think if people had more experience to actually go back to the interactive sessions, I think there would have been better marks than that.
[Participant 9, intervention, male]
Secondary Outcome Barriers to e-Learning

A few barriers to accessing the e-learning material are mentioned above; however, the following results provide a more comprehensive overview. Figure 4 depicts the identified barriers, which can be divided into technical and individual barriers. The identified technical barriers were limited access to digital devices compatible with the e-learning platform; technical challenges with the e-learning platform, including log-in, and with the e-learning software itself; and limited internet access. The identified individual barriers stemmed from participants' limited e-learning experience. These included limited knowledge of logging in to Moodle, of using the e-learning platform and the software of the e-learning modules, and of problem solving if a technical issue occurred. The difference between technical and individual barriers was that individual barriers were user generated.

A student reported that the tablets that were initially distributed when the e-learning program was implemented were not being used by him or by some of his fellow students. The reason was that the device “just lags and then it will fail to load” (participant 9, intervention, male). In addition, students (4/13, 31%) reported that access to other suitable electronic devices was difficult for some participants:

Yes, they did have smartphones, but not the ones that would load the e-learning module.
[Participant 12, intervention, male]
Figure 4. Barriers to e-learning.

Some students encountered technical challenges when trying to log in to the e-learning platform (2/13, 15%), as they could not generate new passwords themselves:

It took me quite a, I think a few days. I had to actually get in touch with the HIGH IT personal from the university to actually help me with my username and my password.
[Participant 9, intervention, male]

When fellow students of this particular participant heard that he was able to log in, they “were actually shocked to say how did you manage?” (participant 9, intervention, male).

Furthermore, the software of the e-learning modules was considered a barrier in various ways. Participants often reported that the modules were “taking long to load” (participant 12, intervention, male) or that the system “just froze...it didn’t have anything to do with the network” (participant 7, control, female). Once they had gained access to the e-learning materials, some participants (4/13, 31%) reported difficulties in going back in the presentation or viewing the graphics on their phones. This was mainly the case for the interactive e-learning module. Often, study participants (5/13, 39%) reported that these difficulties vanished when using a larger electronic device, such as a laptop or desktop computer:

So I had the same experience when I used my phone, but when I switched to the PC it was like working.
[Participant 12, intervention, male]
I needed to use a laptop I think for me to have access.
[Participant 1, intervention, male]

Participants (5/13, 39%) stated that the internet connection posed another barrier to accessing the e-learning modules. The connection had to be fast, loading the modules took a long time, and some students were located in areas with very limited internet access. When asked why there were so many study dropouts, a participant replied the following:

For the people that I got to ask, one of them was in an area that had really horrible network. So, she only got the email like time after the whole participation thing had passed. So maybe the main reason was that everything had network issues and maybe things were not syncing or loading as fast as some people, because they were in a different area.
[Participant 7, control, female]

The use of the e-learning platform Moodle, and consequently students’ e-learning experience with it, was reported to be low. Other methods of web-based learning, although not asynchronous, were used during the COVID-19 pandemic:

We once tried to use Moodle at the school, but it never worked out, so we switched to Zoom or Google Meet.
[Participant 12, intervention, male]

This limited experience frequently impeded participants’ access, as they forgot their e-learning platform log-in details and had restricted knowledge of the e-learning platform and software and of technical problem solving when an issue emerged:

So, others had forgotten how to use it. So, I find instead of putting their username, they were putting in the email address with the correct password. So, they were failing to login.
[Participant 1, intervention, male]
Most of the people that we have in our class haven’t used the e-learning modules or used Moodle. So, they had challenges with navigating through.
[Participant 9, intervention, male]
Opinions of e-Learning and Suggestions for Future Improvement

It was evident that, despite the access challenges, students’ motivation and their opinions regarding e-learning were positive, especially in a pandemic context:

I think it’s actually a good development. And I think it would help, especially in this time where we are actually battling with Corona. It would actually help. And then it gives you also a chance to actually do it at your own time and you don’t feel rushed. So, you actually prepare for it.
[Participant 9, intervention, male]

However, it was also mentioned that asynchronous e-learning was fairly difficult and lacked interaction with teachers (5/13, 39%):

It would have been better if there was someone explaining it.
[Participant 7, control, female]

A student’s opinion was that e-learning “can work as a backup where physical learning is not possible due to limited space or as a way of revising with students” (participant 11, control, male).

Finally, participants gave several suggestions for improvement, such as developing e-learning software compatible with their phones or otherwise providing access to suitable devices, improving the log-in to the e-learning platform, and using e-learning more often and with a more standardized presentation:

And I think with a little bit more usage I think I would get experience in terms of how to really navigate it well and yeah. I think that’s the issue. I think using it more and not having issues with the logging in, I think would really, really help.
[Participant 9, intervention, male]
I just feel like if it could be more consistent just for people to get a grip of it that would be nice.
[Participant 8, control, male]

Discussion

Comparison of Interactive Versus Noninteractive e-Learning

Primary Outcome User Satisfaction and Secondary Outcome Usability
Principal Findings and Explanations

In contrast to the initial hypothesis derived from studies in HICs, there were no significant differences between the groups in the primary outcome of user satisfaction in this low-resource setting [13,14,18,19]. This suggests that both modules were received similarly. The overall user satisfaction in both groups was acceptable. The median SUS score of both modules was assessed as average. Furthermore, there was no significant difference in SUS scores between the two modules, implying that both were equally challenging to use. However, contrary to the quantitative data, qualitative data showed that the interactive e-learning module had lower usability than the noninteractive module. The interactive module was harder to access, as multiple students could not finish it, it was not correctly displayed on phones, and revising it was difficult. The interactive module was also harder for some students, as they were not familiar with interactive e-learning. Qualitative data also indicated that usability challenges negatively influenced students’ satisfaction with the modules, thereby linking these 2 distinct outcomes. There are several possible explanations for the lack of differences in user satisfaction between the two groups.

A reason could be an insufficient number of study participants to show an effect. Owing to many dropouts, the size of the analyzed population was limited. Furthermore, the difference in user satisfaction between both groups was small, and a post hoc power analysis revealed a low power of 7%, leading to the conclusion that quantitative data might be insufficient to prove or disprove the assumed hypothesis.

Another reason could be that the greater usability issues of the interactive module had a negative effect on the user satisfaction rating, as indicated by the qualitative data. Gunesekera et al [51] conducted a literature review that supports this assumption about the relationship between usability and satisfaction. Better usability results in a higher motivation to learn [52]. Nevertheless, the correlation is not as simple as it seems. Davids et al [53] conducted a study in South Africa using a similar approach. However, they compared their original interactive e-learning module with a revised version in which all usability issues were addressed. Yet, as in this study, there were no significant differences in satisfaction, usability, or knowledge gain between the two groups. However, when objective usability was analyzed through video recordings of the study, the intervention group experienced significantly fewer problems, indicating objective usability differences between the groups. If one assumes that there were indeed usability differences but no user satisfaction differences between the two groups, the results of Davids et al [53] contradict the conclusion of the literature review by Gunesekera et al [51].

Another explanation for the lack of a quantitative difference in satisfaction between the two groups could be that the participating students were more familiar with traditional teaching methods and less familiar with interactive e-learning than students in HICs [7]. Consequently, this could impede the rating of satisfaction and usability of interactive materials. The qualitative data of this study further support this interpretation, as some participants in the intervention group were overwhelmed by the interactive technology or preferred the noninteractive presentation because it was simpler, possibly because of a lack of experience with interactive e-learning. Additional evidence for this was that fourth-year students rated the usability of their e-learning modules significantly higher than second-year students; they might have been exposed to e-learning technology for longer and therefore found it easier to use.

Finally, as the 8 user satisfaction questions selected were not validated, they may not have accurately portrayed user satisfaction.

Comparison With Previous Work

When these results are considered in the context of the existing literature, studies with similar findings are rare. Moreover, most studies use different tools to assess user satisfaction, which limits comparisons. For the most part, studies that compared the user satisfaction of interactive and noninteractive e-learning for health care personnel demonstrated results in favor of the interactive e-learning method [13,14,18,19,54]. However, they were mostly conducted in HICs. Koka et al [14] provided an example of this. Their study was conducted in Switzerland and showed that paramedics undergoing an interactive e-learning module had increased knowledge of the National Institutes of Health Stroke Scale and higher satisfaction with the learning method than paramedics watching a video of the same learning content [14]. Another example is the RCT implemented by Lee et al [19] in Taiwan. In this study, undergraduate medical students were randomized to receive an interactive multimedia module or PowerPoint presentation slides. Although no significant difference in knowledge gain was observed between the groups, the intervention group reported significantly higher user satisfaction scores [19]. Nevertheless, there are studies that, like this one, show no difference in user satisfaction when comparing interactive with noninteractive e-learning [55,56].

Overall, the results of this study raise the question of whether ease of use is a more important factor for user satisfaction than content presentation. Given that this study’s findings differed from the conclusions of similar studies in HICs, they further raise the question of equity in access to knowledge and education via e-learning in LMICs.

Secondary Outcome Knowledge Gain
Principal Findings and Explanations

Both groups received low to average KT1 and KT2 scores. This could indicate that both e-learning modules were unable to convey as much information as expected. However, another possibility is that the knowledge tests did not measure true knowledge, as they were not validated.

An additional finding of this study was that there was no significant difference between the KT1 and KT2 scores of each group. Assuming that both knowledge tests were equally challenging, this indicates that there was no significant knowledge loss after 6 weeks for both groups. This result could be interpreted as an advantage for both e-learning courses. However, it was not compared with a group that only received traditional classroom teaching, for example, and therefore cannot be contextualized.

Contrary to other studies, the analysis in this work revealed no significant difference in short- or long-term knowledge gain between the two groups [13-18]. This was potentially related to the qualitative findings, which indicated that impeded access to the interactive e-learning module made it harder for students in the intervention group to learn the material or even look up information during the knowledge test. Participants were told not to use any material to help answer the knowledge questions; however, this was not verifiable.

Comparison With Previous Work

Several RCTs, including the one by Koka et al [14], postulate that interactive e-learning increases knowledge more than noninteractive e-learning. However, they were all conducted in HICs. Velan et al [17] showed in a randomized crossover trial that interactive e-learning modules were significantly more effective in improving medical students’ knowledge about the adequate use of imaging than PDF-based modules. DeBate et al [15] compared an interactive e-learning module for the secondary prevention of eating disorders with a flat-text e-learning module in an RCT. They concluded that the interactive module was better at improving students’ skill-based knowledge and self-efficacy but not overall knowledge [15]. Morgulis et al [16] demonstrated in an RCT that an interactive e-learning module increased senior medical students’ knowledge about leukemia significantly more than existing web-based resources.

However, it seems that the hypothesis does not always hold true. Apart from the RCT by Lee et al [19], other studies provide additional examples. Suppan et al [55,56] conducted 2 web-based RCTs with student paramedics and emergency medicine personnel in Switzerland. The intervention group received a gamified e-learning module about personal protective equipment for COVID-19, whereas the control group received flat-text COVID-19 guidelines for prehospital emergency medicine use. The primary end point was the difference in postintervention knowledge between the two groups, and, as in this study, it was not statistically significant. Another study conducted with Canadian medical students compared an interactive e-learning module on global health with PDF articles on the same topic. Although participants’ satisfaction with the interactive module was higher, no difference in postintervention knowledge was detected [54].

Barriers to e-Learning

Principal Findings and Explanations

In total, 56% (53/94) of enrolled participants dropped out, possibly because of problems accessing the e-learning modules. The identified barriers to e-learning were of a technical and individual nature. Technical barriers included limited access to suitable electronic devices and difficulties with the e-learning platform, including log-in and software issues (eg, long loading times, crashing, and poor graphics presentation). An additional technical barrier was insufficient internet access. The e-learning platform can also be used via an application, which would probably have increased technical usability; however, not all study participants may have known this. Because of the COVID-19 pandemic, the small information technology (IT) support team at the LMMU was overwhelmed with many tasks when participants needed access to the e-learning platform. This may explain the insufficient capacity to instruct all students before the study. Individual barriers may be summarized as a limited e-learning culture owing to low e-learning use; they encompassed restricted e-learning experience in logging in to the e-learning platform, using the e-learning platform and software, and technical problem solving when technical issues occurred. In addition, the lack of communication with teachers was often viewed as having a negative impact. Female students were significantly overrepresented among the study dropouts, which may indicate that this student group was more affected by these barriers. A possible reason could be inadequate technology experience, as a questionnaire in 2017 indicated that female ML students had low technology experience, whereas male ML students had moderate experience [23].

It is assumed that, had this study been conducted on campus, some of these hindrances, especially those regarding the e-learning infrastructure (suitable devices and internet), could have been avoided. However, because of the COVID-19 pandemic, participants had limited access to facilities on the LMMU campus.

Comparison With Previous Work

Most of the identified barriers, such as poor e-learning infrastructure (including device and internet availability) and insufficient interaction with a teacher, are well known in the literature on e-learning in LMICs, and some are known from previous studies at the LMMU [1,4,22]. One example is a survey of medical students in the Philippines that assessed barriers to web-based learning shortly after the onset of the COVID-19 pandemic. The barriers identified there also included limited access to electronic devices and the internet; in addition, students struggled to adapt to the new learning method [57]. This may suggest that, as in this study, some barriers to e-learning in LMICs extend beyond the technical infrastructure, as they may also depend on the individual characteristics of e-learning students. These individual barriers may be inherent to nascent e-learning systems in a low-resource context.

e-Learning Use

Barteit et al [22] found e-learning platform use to be low in 2017. Unfortunately, this appears unresolved, as some participants reported that they did not use the e-learning platform to study. Furthermore, most students had e-learning platform accounts but had not used them regularly, so some had forgotten their log-in details. Explanations for this low use are difficult to discern because of the various stakeholders involved in an e-learning system. In 2017, reasons included the low quality of the tablets, insufficient e-learning training for students and lecturers, and the average quality of the e-learning material, combined with low motivation of teachers to update and improve the content [22]. As the aims of this study did not include evaluating the use of the e-learning platform, only assumptions can be made about the low use. Several factors should be considered to promote e-learning use in a low-resource context, some of which may be applied insufficiently at the LMMU: the e-learning material should convey up-to-date information; the practicality of e-learning should be advocated while e-learning services are expanded; e-learning should be user friendly; sufficient technology training should be provided to students and lecturers; and individual motivation toward e-learning should be increased [7]. IT resources were strained during the implementation of this study and may be insufficient to promote these factors and thereby increase e-learning use at the LMMU.

Strengths and Limitations

This study is the first to compare interactive and noninteractive e-learning for students in clinical sciences or comparable programs in Zambia and one of the first known in a lower-middle-income country. As the value of e-learning in low-resource countries is increasingly recognized, especially during the COVID-19 pandemic, it is important to assess different e-learning methods in these settings, and the mixed methods design of this study allowed a comprehensive overview of the subject. However, this study had several limitations, which can be grouped into general study limitations, limitations associated with the web-based study format, and shortcomings of the e-learning module comparison.

The analyzed study sample might be biased because, owing to the many dropouts, only users were evaluated rather than the original sample. This also resulted in a small analyzed sample and low post hoc power. However, the sample still appears to represent the overall group of students in the ML course quite well. In addition, the principal investigator developed the e-learning content, which could have affected the results. Social desirability could also have influenced participants’ statements in the rating conferences, as the principal investigator was also a rating conference moderator. We attempted to mitigate this bias in the qualitative data acquisition by repeatedly asking participants for their honest opinions, building rapport, and probing for details.

Because of the COVID-19 pandemic, this study was conducted on the web, which poses further limitations. Recruitment was completed by email, which could have limited the number of students enrolled. Furthermore, insufficient internet access and connectivity may have affected the students’ completion of the web-based questionnaires and, through lost dialogue, their communication in the rating conferences.

Qualitative data suggested that the interactive module was more difficult to access and use; therefore, the comparison of the two e-learning modules was likely limited by the technical problems experienced. In addition, 2 students gained access to the other e-learning module but were analyzed with their originally assigned module; however, a post hoc sensitivity analysis excluding these 2 students showed no differences in the assessed outcomes. Finally, participants may have looked up answers on the web, worked in teams, or unblinded themselves through conversations with other participants. Although such behavior affects outcome variables, it most likely reflects learning under real-world circumstances.

Suggestions for Further Research

Secondary results suggested that the most relevant question at the LMMU may not be interactive versus noninteractive e-learning but rather the ease of access to e-learning. Although students’ motivation for e-learning was high, the e-learning program at the LMMU still faces several challenges. These can and should be addressed through further e-learning training for all students and lecturers and through the continued implementation of e-learning as an integral part of the curriculum. Increased use, in turn, would likely improve the user experience of the e-learning platform. Additional resources should be allocated to IT personnel and infrastructure, where possible and needed. Future studies comparing interactive and noninteractive e-learning for health care personnel in low-resource settings such as Zambia should ensure that factors potentially limiting technical access to the e-learning materials are mitigated, for example, by uploading the study content to a set number of tablets for offline use. However, this would likely decrease external validity.

Conclusions

In contrast to previous studies conducted in HICs, interactive and noninteractive e-learning did not differ significantly in terms of user satisfaction and knowledge gain. However, these results may not be generalizable to other low-resource settings because the post hoc power was low and the e-learning system at the LMMU has not yet reached its full potential. Consequently, barriers to accessing e-learning, which were of a technical and individual nature, may have affected the results, particularly as the interactive module was deemed harder to access and use. The extent to which some limitations were inherent to the nascent e-learning system, as opposed to being the result of impaired e-learning access, is difficult to assess. Future studies should minimize technical e-learning barriers to further evaluate interactive e-learning in LMICs.

Acknowledgments

Statistical consultation was obtained from the Institute for Medical Biometrics and Informatics at the Heidelberg University, Germany. The authors thank Dr Gardner Syakantu, dean of the School of Medicine and Clinical Sciences at the Levy Mwanawasa Medical University, for supporting this study.

Authors' Contributions

FN initiated the study and conceived the research questions and study design. ES created the e-learning material; obtained consent from the ethics committees; and conducted the recruitment, randomization, implementation, and quantitative analysis of the trial with the help and supervision of FN, MM, AS, PA, SB, VRL, and GS. ABR guided the qualitative research, and FR and ES performed the qualitative analysis. All the authors contributed to the writing of the manuscript.

Conflicts of Interest

FN and SB participated in implementing the e-learning system at the Levy Mwanawasa Medical University in 2016 and 2017. In addition, the principal investigator developed both e-learning modules. This publication was supported by the Heidelberg Graduate School of Global Health, which is funded by the Else-Kröner-Fresenius-Stiftung. The study was conducted within the framework of the BLiZ (Blended Learning in Zambia) Project with a 2-fold focus on sustainability and long-term implementation (also funded by the Else-Kröner-Fresenius-Stiftung [project 2019_HA25]). The funder had no influence on study design, implementation, or publication.

Editorial Notice

This randomized study was not registered. The authors explained that they "worked solely with health care providers and not patients". The editor granted an exception from ICMJE rules mandating prospective registration of randomized trials because the risk of bias appears low and the study was considered formative, guiding the development of the application. However, readers are advised to carefully assess the validity of any potential explicit or implicit claims related to primary outcomes or effectiveness.

Multimedia Appendix 1

CONSORT-EHEALTH checklist (V 1.6.1).

PDF File (Adobe PDF File), 796 KB

Multimedia Appendix 2

Screenshots of the noninteractive module [25,38,39].

PDF File (Adobe PDF File), 1305 KB

Multimedia Appendix 3

Screenshots of the interactive module [25,38,39].

PDF File (Adobe PDF File), 9736 KB

Multimedia Appendix 4

Questionnaires.

PDF File (Adobe PDF File), 243 KB

Multimedia Appendix 5

Study information sheet.

PDF File (Adobe PDF File), 183 KB

  1. Mullan F, Frehywot S, Omaswa F, Buch E, Chen C, Greysen S. Medical schools in sub-Saharan Africa. Lancet 2011:1113-1121 [FREE Full text] [CrossRef] [Medline]
  2. Aluttis C, Bishaw T, Frank MW. The workforce for health in a globalized context--global shortages and international migration. Glob Health Action 2014;7:23611 [FREE Full text] [CrossRef] [Medline]
  3. Barteit S, Guzek D, Jahn A, Bärnighausen T, Jorge M, Neuhann F. Evaluation of e-learning for medical education in low- and middle-income countries: a systematic review. Comput Educ 2020 Feb:103726 [FREE Full text] [CrossRef] [Medline]
  4. Frehywot S, Vovides Y, Talib Z, Mikhail N, Ross H, Wohltjen H, et al. E-learning in medical education in resource constrained low- and middle-income countries. Hum Resour Health 2013;11:4 [FREE Full text] [CrossRef] [Medline]
  5. Welsh E, Wanberg C, Brown K, Simmering M. E-learning: emerging uses, empirical results and future directions. Int J Train Dev 2003;7(4):245-258 [FREE Full text] [CrossRef]
  6. Ruiz JG, Mintzer MJ, Leipzig RM. The impact of e-learning in medical education. Acad Med 2006 Mar;81(3):207-212. [Medline]
  7. Bhuasiri W, Xaymoungkhoun O, Zo H, Rho J, Ciganek A. Critical success factors for e-learning in developing countries: a comparative analysis between ICT experts and faculty. Comput Educ 2012;58(2):843-855 [FREE Full text] [CrossRef]
  8. Barteit S, Jahn A, Banda SS, Bärnighausen T, Bowa A, Chileshe G, et al. E-learning for medical education in Sub-Saharan Africa and low-resource settings: viewpoint. J Med Internet Res 2019 Jan 09;21(1):e12449 [FREE Full text] [CrossRef] [Medline]
  9. Kalyuga S. Enhancing instructional efficiency of interactive e-learning environments: a cognitive load perspective. Educ Psychol Rev 2007;19(3):387-399 [FREE Full text] [CrossRef]
  10. Kononowicz AA, Zary N, Edelbring S, Corral J, Hege I. Virtual patients - what are we talking about? A framework to classify the meanings of the term in healthcare education. BMC Med Educ 2015;15:11 [FREE Full text] [CrossRef] [Medline]
  11. Gorbanev I, Agudelo-Londoño S, González RA, Cortes A, Pomares A, Delgadillo V, et al. A systematic review of serious games in medical education: quality of evidence and pedagogical strategy. Med Educ Online 2018 Dec;23(1):1438718 [FREE Full text] [CrossRef] [Medline]
  12. Zhang D, Zhou L. Enhancing e-learning with interactive multimedia. Inform Resour Manag J (IRMJ) 2003;16(4):1-14. [CrossRef]
  13. Boeker M, Andel P, Vach W, Frankenschmidt A. Game-based e-learning is more effective than a conventional instructional method: a randomized controlled trial with third-year medical students. PLoS One 2013;8(12):e82328 [FREE Full text] [CrossRef] [Medline]
  14. Koka A, Suppan L, Cottet P, Carrera E, Stuby L, Suppan M. Teaching the National Institutes of Health Stroke Scale to paramedics (e-learning vs video): randomized controlled trial. J Med Internet Res 2020 Jun 09;22(6):e18358 [FREE Full text] [CrossRef] [Medline]
  15. DeBate RD, Severson HH, Cragun D, Bleck J, Gau J, Merrell L, et al. Randomized trial of two e-learning programs for oral health students on secondary prevention of eating disorders. J Dent Educ 2014 Jan;78(1):5-15. [Medline]
  16. Morgulis Y, Kumar RK, Lindeman R, Velan GM. Impact on learning of an e-learning module on leukaemia: a randomised controlled trial. BMC Med Educ 2012 May 28;12:36 [FREE Full text] [CrossRef] [Medline]
  17. Velan GM, Goergen SK, Grimm J, Shulruf B. Impact of interactive e-learning modules on appropriateness of imaging referrals: a multicenter, randomized, crossover study. J Am Coll Radiol 2015 Nov;12(11):1207-1214. [CrossRef] [Medline]
  18. Suppan M, Stuby L, Carrera E, Cottet P, Koka A, Assal F, et al. Asynchronous distance learning of the national institutes of health stroke scale during the COVID-19 pandemic (E-learning vs Video): randomized controlled trial. J Med Internet Res 2021 Jan 15;23(1):e23594 [FREE Full text] [CrossRef] [Medline]
  19. Lee L, Chao Y, Huang C, Fang J, Wang S, Chuang C, et al. Cognitive style and mobile e-learning in emergent otorhinolaryngology-head and neck surgery disorders for millennial undergraduate medical students: randomized controlled trial. J Med Internet Res 2018 Feb 13;20(2):e56 [FREE Full text] [CrossRef] [Medline]
  20. Muñoz DC, Ortiz A, González C, López DM, Blobel B. Effective e-learning for health professional and medical students: the experience with SIAS-Intelligent Tutoring System. Stud Health Technol Inform 2010;156:89-102. [Medline]
  21. Barteit S, Jahn A, Bowa A, Lüders S, Malunga G, Marimo C, et al. How self-directed e-learning contributes to training for medical licentiate practitioners in Zambia: evaluation of the pilot phase of a mixed-methods study. JMIR Med Educ 2018 Nov 27;4(2):e10222. [CrossRef]
  22. Barteit S, Neuhann F, Bärnighausen T, Lüders S, Malunga G, Chileshe G, et al. Perspectives of nonphysician clinical students and medical lecturers on tablet-based health care practice support for medical education in Zambia, Africa: qualitative study. JMIR Mhealth Uhealth 2019 Jan 15;7(1):e12637. [CrossRef]
  23. Barteit S, Neuhann F, Bärnighausen T, Bowa A, Wolter S, Siabwanta H, et al. Technology acceptance and information system success of a mobile electronic platform for nonphysician clinical students in Zambia: prospective, nonrandomized intervention study. J Med Internet Res 2019 Oct 09;21(10):e14748 [FREE Full text] [CrossRef] [Medline]
  24. GBD 2017 Causes of Death Collaborators. Global, regional, and national age-sex-specific mortality for 282 causes of death in 195 countries and territories, 1980-2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet 2018 Nov 10;392(10159):1736-1788 [FREE Full text] [CrossRef] [Medline]
  25. Global Initiative for Chronic Obstructive Lung Disease. GOLD Report - Global strategy for the diagnosis, management and prevention of COPD. 2021.   URL: https://goldcopd.org/2021-gold-reports/ [accessed 2021-04-01]
  26. Adeloye D, Basquill C, Papana A, Chan KY, Rudan I, Campbell H. An estimate of the prevalence of COPD in Africa: a systematic analysis. COPD 2015 Feb;12(1):71-81. [CrossRef] [Medline]
  27. World Health Organization. Chronic Obstructive Pulmonary Disease. 2021.   URL: https://www.who.int/news-room/fact-sheets/detail/chronic-obstructive-pulmonary-disease-(copd) [accessed 2021-09-01]
  28. Ozoh OB, Awokola BI, Buist SA. A survey of the knowledge of general practitioners, family physicians and pulmonologists in Nigeria regarding the diagnosis and treatment of chronic obstructive pulmonary disease. West Afr J Med 2014;33(2):100-106. [Medline]
  29. Lamprecht B, Soriano JB, Studnicka M, Kaiser B, Vanfleteren LE, Gnatiuc L, BOLD Collaborative Research Group‚ the EPI-SCAN Team‚ the PLATINO Team‚the PREPOCOL Study Group. Determinants of underdiagnosis of COPD in national and international surveys. Chest 2015 Oct;148(4):971-985. [CrossRef] [Medline]
  30. Desalu OO, Onyedum CC, Adeoti AO, Gundiri LB, Fadare JO, Adekeye KA, et al. Guideline-based COPD management in a resource-limited setting - physicians' understanding, adherence and barriers: a cross-sectional survey of internal and family medicine hospital-based physicians in Nigeria. Prim Care Respir J 2013 Mar;22(1):79-85 [FREE Full text] [CrossRef] [Medline]
  31. Robertson NM, Nagourney EM, Pollard SL, Siddharthan T, Kalyesubula R, Surkan PJ, et al. Urban-rural disparities in chronic obstructive pulmonary disease management and access in Uganda. Chronic Obstr Pulm Dis 2019 Jan 04;6(1):17-28 [FREE Full text] [CrossRef] [Medline]
  32. Ozoh OB, Awokola T, Buist SA. Medical students' knowledge about the management of chronic obstructive pulmonary disease in Nigeria. Int J Tuberc Lung Dis 2014 Jan;18(1):117-121. [CrossRef] [Medline]
  33. Eysenbach G, CONSORT-EHEALTH Group. CONSORT-EHEALTH: improving and standardizing evaluation reports of Web-based and mobile health interventions. J Med Internet Res 2011;13(4):e126 [FREE Full text] [CrossRef] [Medline]
  34. Eysenbach G. Improving the quality of Web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res 2004 Sep 29;6(3):e34 [FREE Full text] [CrossRef] [Medline]
  35. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007 Dec;19(6):349-357 [FREE Full text] [CrossRef] [Medline]
  36. Heidelberg Institute of Global Health. Levy Mwanawasa Medical University (LMMU). 2021.   URL: https:/​/www.​klinikum.uni-heidelberg.de/​heidelberger-institut-fuer-global-health/​partner/​partner/​levy-mwanawasa-medical-university-lmmu [accessed 2021-09-17]
  37. International Committee of Medical Journal Editors. Clinical Trials Registration. 2021.   URL: http://www.icmje.org/about-icmje/faqs/clinical-trials-registration/ [accessed 2021-09-15]
  38. Herold G. Innere Medizin: eine vorlesungsorientierte Darstellung ; unter Berücksichtigung des Gegenstandskataloges für die Ärztliche Prüfung ; mit ICD 10-Schlüssel im Text und Stichwortverzeichnis. Köln: Herold; 2014:348-355.
  39. AMBOSS GmbH. Chronisch Obstruktive Lungenerkrankung. 2021.   URL: https://next.amboss.com/de/ [accessed 2021-02-15]
  40. iSpring. iSpring Suite 10. 2020.   URL: https://www.ispringsolutions.com/ [accessed 2021-03-01]
  41. Taylor DCM, Hamdy H. Adult learning theories: Implications for learning and teaching in medical education: AMEE Guide No. 83. Medical Teacher 2013 Sep 04;35(11):e1561-e1572. [CrossRef]
  42. Keller H, Heinemann E, Kruse M. Praxisbericht: Die Ratingkonferenz. Zeitschrift für Evaluation. 2012.   URL: https://elibrary.utb.de/doi/pdf/10.31244/zfe.2012.02.07?download=true [accessed 2021-03-15]
  43. Leiner D. SoSci Survey (Version 3.1.06). 2019.   URL: https://www.soscisurvey.de [accessed 2021-05-01]
  44. Lewis JR, Sauro J. Item benchmarks for the system usability scale. Journal of Usability Studies. 2018.   URL: https://uxpajournal.org/de/item-benchmarks-system-usability-scale-sus/ [accessed 2021-03-15]
  45. Create a blocked randomisation list. Sealed Envelope Ltd. 2021.   URL: https://www.sealedenvelope.com/simple-randomiser/v1/lists [accessed 2021-04-01]
  46. iSpring Space. iSpring Solutions Inc. 2021.   URL: https://www.ispringsolutions.com/ispring-cloud [accessed 2021-04-01]
  47. Revelle W. psych: Procedures for Psychological, Psychometric, and Personality Research. Evanston, Illinois: Northwestern University; 2021.   URL: https://CRAN.R-project.org/package=psych [accessed 2021-05-01]
  48. Bryer J, Speerschneider K. likert: Analysis and Visualization Likert Items. 2016.   URL: https://CRAN.R-project.org/package=likert [accessed 2021-05-01]
  49. Lenhard W, Lenhard A. Berechnung von Effektstärken. Psychometrica. 2016.   URL: https://www.psychometrica.de/effektstaerke.html [accessed 2021-12-24]
  50. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006 Jan;3(2):77-101. [CrossRef]
  51. Gunesekera AI, Bao Y, Kibelloh M. The role of usability on e-learning user interactions and satisfaction: a literature review. J Syst Inform Technol 2019 Aug 12;21(3):368-394. [CrossRef]
  52. Zaharias P, Poylymenakou A. Developing a usability evaluation method for e-learning applications: beyond functional usability. Int J Hum-Comput Interact 2009 Jan 14;25(1):75-98. [CrossRef]
  53. Davids MR, Chikte UME, Halperin ML. Effect of improving the usability of an e-learning resource: a randomized trial. Adv Physiol Educ 2014 Jun;38(2):155-160 [FREE Full text] [CrossRef] [Medline]
  54. Gruner D, Pottie K, Archibald D, Allison J, Sabourin V, Belcaid I, et al. Introducing global health into the undergraduate medical school curriculum using an e-learning program: a mixed method pilot study. BMC Med Educ 2015 Sep 2;15(1):142. [CrossRef]
  55. Suppan L, Abbas M, Stuby L, Cottet P, Larribau R, Golay E, et al. Effect of an e-learning module on personal protective equipment proficiency among prehospital personnel: web-based randomized controlled trial. J Med Internet Res 2020 Aug 21;22(8):e21265 [FREE Full text] [CrossRef] [Medline]
  56. Suppan L, Stuby L, Gartner B, Larribau R, Iten A, Abbas M, et al. Impact of an e-learning module on personal protective equipment knowledge in student paramedics: a randomized controlled trial. Antimicrob Resist Infect Control 2020 Nov 10;9(1):185. [CrossRef]
  57. Baticulon RE, Sy JJ, Alberto NR, Baron MB, Mabulay RE, Rizada LG, et al. Barriers to online learning in the time of COVID-19: a national survey of medical students in the Philippines. Med Sci Educ 2021 Feb 24;31(2):1-12. [CrossRef]


BLiZ: Blended Learning in Zambia
CHERRIES: Checklist for Reporting Results of Internet E-Surveys
CONSORT-EHEALTH: Consolidated Standards of Reporting Trials of Electronic and Mobile Health Applications and Online Telehealth
COPD: chronic obstructive pulmonary disease
COREQ: Consolidated Criteria for Reporting Qualitative Research
GOLD: Global Initiative for Chronic Obstructive Lung Disease
HIC: high-income country
IT: information technology
KT1: knowledge gain test 1
KT2: knowledge gain test 2
LMIC: low- and middle-income country
LMMU: Levy Mwanawasa Medical University
ML: medical licentiate
RCT: randomized controlled trial
SSA: sub-Saharan Africa
SUS: System Usability Scale


Edited by N Zary; submitted 06.11.21; peer-reviewed by L Stuby, B Battulga; comments to author 15.12.21; revised version received 29.12.21; accepted 30.12.21; published 24.02.22

Copyright

©Elena Schnieders, Freda Röhr, Misho Mbewe, Aubrey Shanzi, Astrid Berner-Rodoreda, Sandra Barteit, Valérie R Louis, Petros Andreadis, Gardner Syakantu, Florian Neuhann. Originally published in JMIR Medical Education (https://mededu.jmir.org), 24.02.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.