
Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/70843.
Digital Literacy Training for Digitalization Officers (“Digi-Managers”) in Outpatient Medical and Psychotherapeutic Care: Conceptualization and Longitudinal Evaluation of a Certificate Course


1Health Informatics, Faculty of Health, Witten/Herdecke University, Pferdebachstr. 11, Witten, Germany

2IT & Digital Health Division, Kassenärztliche Vereinigung Westfalen-Lippe, Dortmund, Germany

3Department of Education, Ärztekammer Westfalen-Lippe, Münster, Germany

4Department of Healthcare, Fraunhofer Institute for Software and Systems Engineering, Dortmund, Germany

Corresponding Author:

Anne Mainz, MSc


Background: Digital tools, services, and information in patient care demand new competencies in outpatient care, confronting the entire workforce with digitalization.

Objective: In a targeted certificate course (Certification of Digitalization Officers in Medical Practices and Psychotherapeutic Practices, Digi-Manager), medical assistants are trained to serve as digitalization officers, enabling them to implement the requirements of digitalized health care within their practices.

Methods: As part of an accompanying study, the course is evaluated by the participants, and the change in their digital literacy is recorded. We measured different knowledge, skills, and attitude dimensions at 3 different times—before, during, and after the course—and used ANOVA to examine significant changes.

Results: Digi-Managers started the course with an already high self-assessment of their digital literacy. Skills and knowledge increased significantly in all categories (cognitive, technical, ethical, and health information) from the initial to the final measurement, as did self-confidence in the use of general software and hardware. Positive attitude remained stable over the training period, and the course was rated very positively by participants across all areas.

Conclusions: Training programs on digital topics for health care professionals are necessary, and this certification course is a role model for successful further education through a mixture of theoretical knowledge transfer and practical application. In particular, the use of a digital maturity model and a digital laboratory was a unique and useful feature. Further research should investigate alternative assessment methods for digital literacy, as the results suggest that self-assessment measures self-efficacy and confidence rather than pure competence. Nevertheless, the increase in self-assessed competence suggests that the training was successful.

JMIR Med Educ 2025;11:e70843

doi:10.2196/70843




Background

The use of digital innovations with different tools, services, and information in patient care is associated with new requirements for day-to-day work in outpatient medical care, and their spectrum is expanding steadily. Not only physicians but also other medical professionals are faced with the need to deal with digitalization [1]. Telemedicine, eHealth, and mobile health (mHealth) enable access to health information, improve communication, allow personalized care, make remote monitoring possible, and support patients' self-management [2]. Administrative processes are also increasingly supported digitally [1].

The digital transformation of Germany's health care system has introduced several important digital tools in recent years: the electronic patient record (ePA), electronic sick leave certificates (eAU), the electronic prescription (eRezept), the electronic medication plan (eMP), electronic communication systems (KIM and TIM), telehealth applications, and digital health applications (DiGA). Additionally, various legal initiatives to establish more technologies in health care are currently coming into force in Germany [3]. On the side of outpatient medical practices in Germany, the level of digitization or digital maturity depends heavily on the individual personality, motivation, and competence of the people involved [4-6]. IT systems within the practices are mostly outdated [3], and many processes, such as the storage of data and communication with other health care providers or patients, still happen in paper form [4].

Beyond the availability of digital tools, services, and information, individuals also need the motivation to use these technologies. Following self-determination theory [7], the feeling of competence is one of the three pillars for developing motivation. Conversely, feelings of incompetence regarding the use of health informatics technology tools lead to reluctance and are one of the main reasons for avoiding them. Digital competence or digital literacy in health care can be defined as the ability to integrate and apply context-appropriate knowledge, skills, and psychosocial factors—such as attitudes, beliefs, values, and motivation—to perform within the health care domain [8]. Especially when health care workers were not given enough time to learn on the job or did not have enough support from peers, they were not willing to use new technology [9]. To enable individuals to gain digital literacy in health care, education is needed.

In their focus group study, Mannevaara et al [10] identified knowledge and skill issues regarding IT-related management and IT background knowledge as the main challenges faced in health care. Competencies related to direct patient care, communication, ethics in health IT, project and change management, digital literacy, information and knowledge management, and teaching and education were essential in today's health care practice. In particular, competencies in decision-making, information and knowledge management, teaching, training, and data security were highlighted as important by the German participants in this study [10]. Hübner et al [11] identified the top 3 core competency areas to be addressed for health care employees who work in direct patient care: (1) communication, (2) documentation, and (3) information and knowledge management in patient care, which coincides with the focus of the Digi-Manager training course.

Goal of This Study

With this in mind, the project “Certification of Digitalization Officers in Medical Practices and Psychotherapeutic Practices (Digi-Manager),” funded by the Federal Ministry of Health in Germany, launched an educational program for medical assistants to train them as digitalization officers. In Germany, medical assistants make appointments for patients, document treatment procedures for patient files, take care of billing for services rendered, and organize practice procedures. They apply bandages, prepare syringes, or draw blood for laboratory tests. They also inform patients about pre- and posttreatment options, maintain medical instruments, and carry out laboratory work [12].

Digitalization officers, called "Digi-Managers" in this certification program, create digitalization strategies for their own practices, get digitalization projects off the ground, and act as a point of contact for the digitalization of patient care.

To evaluate whether the digital literacy of the participants could be increased through participation in the Digi-Manager course, their digital literacy before (t0), during (t1), and after (t2) the course was measured and compared. To capture the whole concept of digital literacy, participants were asked about their knowledge, skills, and attitudes regarding digital health technologies.

Hypotheses

The transfer of skills and abilities should be verifiable in a successful training program, which is why hypothesis H1 is formulated. A scoping review [13] suggests that discussions, group workshops, self-directed learning materials, and practice opportunities, among other things, help to grow digital confidence; all of these are provided within the Digi-Manager training, which leads to hypothesis H2. Other studies show that increased usage causes a rise in computer confidence, which in turn increases positive attitudes toward computers [14], supporting the assumption of hypothesis H3.

  • H1: Attending the Digi-Manager training course significantly improves the specific knowledge and skills imparted to participants.
  • H2: Attending the Digi-Manager training course significantly improves participants’ general confidence level for technology usage.
  • H3: Attending the Digi-Manager training course significantly improves participants’ attitude toward digital (health) applications.

Methods

Ethical Considerations

The ethics application was reviewed and approved by the ethics committee of Witten/Herdecke University on April 20, 2023, and no ethical or legal concerns were raised (application/approval no. S-93/2023). The participants received no further compensation for participating. Before each survey, the participants received information about the duration, procedure, and content of the study and had to provide consent. For each question, there was the option to refuse to answer. The responses of the participants were pseudonymized using an identification code, and no identifying information was queried.

Course Concept

The aim of the Digi-Manager certificate course was to enable medical assistants in outpatient medical practices to derive and implement digitalization strategies and projects for their own practices and become contact persons for the digitalization of patient care. Therefore, they had to acquire skills in the areas of technology, data use, IT security, and data protection.

The participants were released from work for a total of 205 hours during the certificate course and received no further incentives for participation. They spent 60 hours on the knowledge modules and 17 hours on the practical modules in web-based and on-site classroom courses. Approximately 128 hours were additionally planned for orientation, organization, self-study, exams, and creating a digitalization strategy (Figure 1). Different time slots (2-4 slots per module) were offered for each course, and participants could choose freely among them. Participation in each module was mandatory to pass the certificate course, and attendance was recorded via an attendance list. The content of the course was developed based on an existing certified course of the ÄKWL (Ärztekammer Westfalen-Lippe, a medical association representing 40,000 doctors of the Westphalia-Lippe region) that has been running for many years; it was adapted and supplemented by a panel of experts with regard to the requirements of digitalization officers (Multimedia Appendix 1).

Figure 1. Schedule of the certificate course.

The course consisted of so-called knowledge and practical modules. The knowledge modules lasted from May until September 2023 and were held as blended learning courses with e-learning materials and web-based and on-site courses. The e-learning materials and web-based classes were distributed by the learning management platform ILIAS.

The practical modules lasted from October 2023 until May 2024 and consisted of a kick-off event, a fundamental knowledge course, and consultations in smaller groups. The courses took place at the dipraxis, a digital laboratory for testing digital tools and analyzing processes in outpatient care. A major component of the practical modules was the digital maturity model of the KVWL (Kassenärztliche Vereinigung Westfalen-Lippe, an association of statutory health insurance physicians of the Westphalia-Lippe region). The model was developed for this project based on the findings of Neunaber and Meister [15]. It measures the digital maturity of practices across 5 assessment categories: corporate management; infrastructure (IT security, data protection, interoperability, data processing, telematics infrastructure, and data collection); treatment and therapy; patient management; and administration. Course participants classified their own practice by answering 25 items. A digital tool visualized the digitalization status of the practice based on the answers using a radar chart. Digi-Managers could use the model to assess the current level of digitalization in the practice and identify potential for improvement. With the results of the digital maturity model and the consulting in the dipraxis, the Digi-Managers developed practice-specific digitalization strategies.
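As an illustration of how such a 5-category maturity profile can be rendered as a radar chart, the following Python sketch plots assumed example scores. The category names follow the model described above, while the 0-5 scale and the score values are purely illustrative assumptions, not output of the KVWL tool.

```python
# Hypothetical sketch: visualizing a practice's digital maturity profile
# as a radar chart, similar in spirit to the KVWL tool. Category names
# follow the paper; scores and the 0-5 scale are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

categories = ["Corporate management", "Infrastructure",
              "Treatment and therapy", "Patient management", "Administration"]
scores = [3.0, 2.5, 4.0, 3.5, 2.0]  # assumed example values

angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]          # close the polygon by repeating the first angle
values = scores + scores[:1]  # ... and the first score

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
ax.set_ylim(0, 5)
ax.set_title("Digital maturity profile (illustrative)")
plt.show()
```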

Recruitment

The respondents were the participants of the Digi-Manager training course; all 100 participants were asked to take part in the surveys.

Guideline

The GREET (Guideline for Reporting Evidence-Based Practice Educational Interventions and Teaching) checklist was used to describe the educational intervention in detail [16]. It comprises 17 items to describe why, what, who, how, where, when, how much, and how well an educational intervention took place and what planned and unplanned changes occurred.

Questionnaires

Within the initial survey (Multimedia Appendix 2), participants were asked for basic demographic information to describe the study population and identify possible biases: age, gender identity, duration of employment, and the medical field of the doctor's office where they work. Various studies indicate that age and gender are associated with different usage behavior and perception of digital tools [17,18], but other studies show that the influence of these variables is often overestimated [19,20]. To control for these possible effects, demographic information was collected and analyzed.

Digital literacy of the participants was measured using 2 existing questionnaires, which recorded both the skills and knowledge relating to digital (health) systems and the attitudes toward digital (health) systems. Since digital literacy is understood in this study as the competence to deal with digital tools in the everyday professional life of medical assistants, an instrument was needed that operationalizes different dimensions of competence and covers the wide range of digital tools in medical practices. Suitable instruments in terms of content and quality criteria were identified using the results of a preceding scoping review [21]. The first questionnaire used was the Public Health Informatics Competencies for Primary Health Care (PHIC4PHC) questionnaire [22], with a total of 42 items on a 5-point Likert scale. The items are divided into the following dimensions: cognitive proficiency (digital health system knowledge and digital health system skills), technical proficiency (general computer skills, office application skills, and network skills), ethical proficiency (privacy, security, and legal knowledge), and health information literacy (health information access, management, integration, and evaluation).

In addition to knowledge and skills, the questionnaire by Kuek and Hakkennes [23] was used to record the attitude toward the use of digital health care systems via items on (1) confidence in usage behavior, (2) technology acceptance, and (3) acceptance and use of technology.

The first part of the questionnaire required participants to indicate their confidence level for different commonly used hardware and software devices on a 5-point scale from not at all confident to completely confident. The devices and applications in question were computers, office applications, smartphones, tablets, email, and social media. The second and third parts of the questionnaire were based on the TAM (technology acceptance model) and UTAUT (unified theory of acceptance and use of technology) questionnaires, which have been used and validated in various studies. Technology acceptance of health information systems was measured using the dimensions perceived usefulness and perceived ease of use of the TAM, with 12 items, and supplemented with the dimensions attitude toward technology, social influence, facilitating conditions, and anxiety of the UTAUT, with 15 items, all on a 7-point Likert scale.

The system usability scale (SUS) was used as an established, standardized, and quick questionnaire for the assessment of perceived usability of the digital maturity level tool (Brooke, 1996, quoted from [24]). The 10 items were measured on a 5-point Likert scale.

Procedure

Digital literacy was assessed in the form of a longitudinal study parallel to the certificate course. The measurement points were based on the participants’ progress in the course: before completing the first knowledge module (t0), after completing all 4 knowledge modules (t1), and after completing the practical modules (t2). The links to the questionnaires were distributed via the learning management platform ILIAS, which was used to provide communication and content for the Digi-Manager training. This made it easier for participants to access the surveys, and the integrated reminder and activation functions ensured that participation in the surveys was not forgotten. In addition, the completion of the evaluation surveys could be used as a condition for further progress within the learning material in ILIAS. As no further learning content was planned after the final survey, reminder emails were sent to the participants for this purpose.

Before starting every survey, the participants were informed about the content of the study, data processing, and data protection and were asked for their written consent to participate. To statistically evaluate the change in digital literacy without violating anonymity, participants were asked to provide each survey with their individual identification code. This code was a 6-digit character string made up of time-stable personal characteristics.
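As an illustration only, a pseudonymization code of this kind could be built as follows. The actual time-stable characteristics used in the study are not specified, so the components and the helper name in this Python sketch are hypothetical.

```python
# Hypothetical illustration of a 6-character, time-stable pseudonymization
# code. The study does not specify which personal characteristics were used;
# the components here (letters of the mother's first name, the birth city,
# and the participant's birth day) are assumptions for demonstration.
def make_code(mother_first_name: str, birth_city: str, birth_day: int) -> str:
    return (mother_first_name[:2] + birth_city[:2] + f"{birth_day:02d}").upper()

print(make_code("Maria", "Dortmund", 7))  # "MADO07"
```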

The first questionnaire for t0 collected basic demographic information and measured the participants’ existing digital skills. The second questionnaire for t1 was intended to assess the digital literacy of the participants again and additionally asked about the evaluation of the first part of the training. At this second measurement point, the Digi-Managers also had the opportunity to evaluate the certificate course in terms of support and design (participant support, technical moderation, quality of scripts, and atmosphere), content (topicality of content, content structure, selection of speakers and authors, discussion/interaction, practical relevance, and personal goal achievement), planning and organization (program announcement, selection of dates, and time frame), and the web conferencing system (technical functionality, user friendliness of the screen, sound quality, and image quality). Participants were able to rate the various aspects of these dimensions on a scale of 1 to 6, with 1 as the best and 6 as the worst rating option. In addition, there were free-text questions in which the participants were asked what they perceived as particularly positive or particularly negative about the training. In the final questionnaire for t2, digital literacy was assessed one last time, the second part of the training was evaluated, and the applicability and comprehensibility of the digital maturity level tool were recorded.

Data Analysis

The datasets of the different surveys were matched through the 6-digit individual identification code. Since some data records did not have fully matching codes, records whose codes differed in at most 2 characters or were in reversed order were also matched, as sketched below.
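A minimal Python sketch of this matching rule, with assumed function and variable names (not from the study's actual matching procedure), could look as follows:

```python
# Minimal sketch of the described record-linkage rule: two 6-character
# identification codes are treated as the same participant if they match
# exactly, differ in at most 2 positions, or one is the reverse of the
# other. Names and thresholds mirror the paper's description.
def codes_match(a: str, b: str, max_mismatches: int = 2) -> bool:
    if len(a) != len(b):
        return False
    if a == b or a == b[::-1]:
        return True
    mismatches = sum(c1 != c2 for c1, c2 in zip(a, b))
    return mismatches <= max_mismatches

print(codes_match("AB12CD", "AB12CE"))  # True: 1 differing character
print(codes_match("AB12CD", "DC21BA"))  # True: reversed order
print(codes_match("AB12CD", "XY99ZZ"))  # False
```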

The collected data were analyzed with IBM SPSS Statistics 28. For descriptive analysis, frequencies, mean values, and standard deviations were reported. Normal distribution as a prerequisite was tested with the Shapiro-Wilk test because it is more powerful than the Kolmogorov-Smirnov test [25], and sphericity was tested with the Mauchly test. The Levene test was used to test the homogeneity of variances.

Correlations were tested with Pearson r and rated using the classification by Cohen [26], with |r|=0.1 as a weak, |r|=0.3 as a moderate, and |r|=0.5 as a strong correlation. ANOVA with repeated measures was used to test whether the mean values of digital literacy differed significantly between measurement points. Post hoc tests were used to determine between which measurement points significant differences existed. For the evaluation of effect sizes, the classification according to Cohen [26] was chosen, with 0.01 as a weak, 0.06 as a moderate, and 0.14 as a large/strong effect. As η² systematically overestimates the effect size, ω² and ε² were also calculated, as these have lower bias [27]. Between-subject effects such as age, gender, and education level were also tested to gain insights into their additional effect.
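For illustration, the three effect size indices compared by Okada [27] can be computed from an ANOVA table. The sketch below uses their classic one-way (between-subjects) formulas and illustrative sums of squares; the repeated measures variants used in the study adjust the error term analogously.

```python
# Hedged sketch: the three effect size indices compared by Okada [27],
# written in their classic one-way ANOVA form. The repeated measures
# variants substitute the within-subject error term analogously.
def effect_sizes(ss_effect: float, ss_error: float,
                 df_effect: int, df_error: int) -> dict:
    ms_error = ss_error / df_error
    ss_total = ss_effect + ss_error  # assumes no other variance components
    eta2 = ss_effect / ss_total
    epsilon2 = (ss_effect - df_effect * ms_error) / ss_total
    omega2 = (ss_effect - df_effect * ms_error) / (ss_total + ms_error)
    return {"eta2": eta2, "epsilon2": epsilon2, "omega2": omega2}

# Illustrative values only, not figures from the study:
print(effect_sizes(ss_effect=12.0, ss_error=48.0, df_effect=2, df_error=80))
# eta2 = 0.20, epsilon2 = 0.18, omega2 ≈ 0.178 (eta2 is the largest, as
# expected from its upward bias)
```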

The SUS score was calculated as the sum of the item scores, with the negatively worded items inverted, multiplied by 2.5, resulting in a value between 0 and 100 [24].
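This scoring rule can be written compactly. The sketch below assumes the standard SUS item ordering, in which odd-numbered items are positively worded and even-numbered items are negatively worded.

```python
# Standard SUS scoring as described in the text: odd-numbered (positively
# worded) items contribute response - 1, even-numbered (negatively worded)
# items are inverted as 5 - response; the sum is scaled by 2.5 to 0-100.
def sus_score(responses: list[int]) -> float:
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i=0 is item 1 (odd-numbered)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0
```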

The free-text answers were grouped into inductive categories, and only the top 3 particularly positive or particularly negative aspects are reported. All other named categories can be seen in Multimedia Appendix 3.


Results

Digi-Manager Characteristics

A total of 100 participants started the Digi-Manager training course. The participants were aged between 20 and 61 years, with an average age of 37.4 (SD 11.3) years, and the vast majority identified as female (95/100, 95%). Only 4 (4%) identified as male, and 1 (1%) did not provide any information on their gender identity. The participants worked at doctors’ offices with different primary medical fields, as shown in Table 1.

Table 1. Participants’ demographic data (N=100).
Age (years), mean (SD): 37.4 (11.3)

Gender, n (%)
  Woman: 95 (95)
  Man: 4 (4)
  Prefer not to say: 1 (1)

Primary medical field of doctor's office, n (%)
  General practice: 50 (50)
  Gynecology: 9 (9)
  Dermatology: 5 (5)
  Orthopedics: 5 (5)
  Psychiatry or psychotherapeutic practice: 4 (4)
  Pediatrics: 3 (3)
  Internal medicine: 3 (3)
  Neurology: 3 (3)
  Urology: 2 (2)
  Ophthalmology: 2 (2)
  Gastroenterology: 2 (2)
  Child and adolescent psychiatry: 2 (2)
  Oral and maxillofacial surgery: 2 (2)
  Otorhinolaryngology: 1 (1)
  Pneumology: 1 (1)
  Nuclear medicine: 1 (1)
  Radiology: 1 (1)
  Reproductive medicine: 1 (1)
  Not specified: 3 (3)

For the first survey, the participation rate was 100%. In the second survey, 97 out of the 100 Digi-Managers participated. For the third and last survey, only 64 of the 100 answered the entire questionnaire, despite repeated reminders via email and ILIAS.

Gender as a between-subject effect was not analyzed further because the vast majority (95/100, 95%) of participants identified as female. The only variables that showed a significant correlation with age were confidence in usage behavior at all 3 measurement points, with a medium negative effect (r(t0)=–0.422, P<.001; r(t1)=–0.423, P<.001; r(t2)=–0.381, P=.005). The other correlation values can be seen in Multimedia Appendix 4.

Evaluation of the Certificate Course

The course was evaluated separately for the knowledge and practical modules, and for both, the review was very positive. Most ratings were at or below a mean value of 2 (with 1 as the best rating); only the selection of dates for the knowledge modules and the time frame for both the knowledge and practical modules were just above this value. All other categories for support and design, content, and the web conferencing system ranged from 1.4 to 2 (Table 2).

Table 2. Evaluation of the certificate course aspects by the Digi-Managers (scale from 1=very good to 6=very bad; n/a: not applicable).

Evaluation categories: knowledge modules, mean (SD) / practical modules, mean (SD)

Support and design
  Participant support: 1.7 (0.9) / 1.4 (0.7)
  Technical moderation: 1.6 (0.7) / n/a
  Quality of scripts: 1.9 (0.9) / n/a
  Atmosphere: 1.7 (0.8) / 1.4 (0.7)

Content
  Topicality of content: 1.6 (0.6) / 1.4 (0.7)
  Content structure: 1.8 (0.8) / 1.7 (0.8)
  Selection of speakers/authors: 1.9 (0.8) / 1.6 (0.8)
  Discussion/interaction: 2 (1) / 1.8 (0.9)
  Practical relevance: 2 (1) / 1.7 (1)
  Personal goal achievement: 1.9 (0.9) / 1.7 (0.8)

Planning and organization
  Program announcement: 1.9 (1.1) / 1.5 (0.7)
  Selection of dates: 2.2 (1.1) / 1.7 (0.8)
  Time frame: 2.1 (1) / 2.1 (1.1)

Web conferencing system
  Technical functionality: 1.8 (0.7) / n/a
  User friendliness of the screen: 1.9 (0.9) / n/a
  Sound quality: 1.8 (0.9) / n/a
  Image quality: 1.8 (0.8) / n/a

The most frequently named positive aspect of the knowledge modules was the content (n=24), with comprehensive scripts, refreshed knowledge, and new impulses conveyed in a comprehensible manner, as well as the possibility to access the materials before and afterward. The second most frequently named aspect was the good support (n=24), which was fast, friendly, and competent, provided individual help, and was very helpful with further questions. The practical relevance and application orientation were the third most frequently named positive aspect (n=14). In turn, most participants stated that they could not name any negative aspects (n=25). Some mentioned that there was too much input for the short period of time (n=7), and others said that the modules had too much frontal teaching (n=6), which made the courses dry, difficult to follow, and boring, with too little interaction.

For the practical modules, the exchange with other participants in small groups was highlighted particularly positively by the Digi-Managers (n=26) for offering new perspectives and solutions in conversations. Also positively perceived were the very informative and instructive nature of the practical modules (n=10) and the direct transfer to everyday practice (n=9). When asked about negative aspects, the most common response was that participants could not name any (n=12). Some participants wished for better day and time selection options (n=4), because the options were confusing and could not always be arranged, and the constantly changing slots were problematic. Some participants wished for more feedback on the tasks they had completed (n=4).

Evaluation of the Digital Maturity Tool

The usability of the digital maturity tool was rated via the SUS with a mean score of 85 (SD 12.9), which can be classified as an A+ grade on the Sauro-Lewis grading scale [28]. The content of the tool was evaluated very positively across the different aspects, with values between 1.39 and 1.79 (Table 3).

Table 3. Evaluation of the digital maturity tool content aspects by the Digi-Managers (scale from 1=very good to 6=very bad); content rating, mean (SD).

Topicality of content: 1.4 (0.6)
Content structure: 1.6 (0.7)
Practical relevance: 1.6 (0.7)
Personal goal achievement: 1.8 (1)

Regarding the digital maturity tool, the participants especially liked the visual representation as a radar chart (n=23) because it was appealing, clear, and colorful; showed all relevant information at a glance; and made digitalization tangible. Many participants positively highlighted the possibility to determine the "status quo" of their practice (n=15) and the potential to uncover gaps and deficits (n=11). Most named no negative aspects (n=16), but some mentioned that the answer options were not always clearly distinguishable or did not fit their practice perfectly, although it was possible to clarify answers through free-text fields (n=8). Some participants had problems with the amount of technical jargon (n=5).

Progression of Digital Literacy

An ANOVA with repeated measures showed that cognitive proficiency changed significantly (F(1.67,63.55)=5.5, P<.009; partial η²=0.13); ω²=0.1 and ε²=0.1 could be classified as a medium effect. Because of violations of sphericity (P=.018), the Greenhouse-Geisser adjustment was used. Bonferroni-adjusted post hoc analysis revealed a significant increase (P=.027) in cognitive proficiency from the second to the last measurement point (meanDiff –0.25, 95% CI –0.48 to –0.22) and a significant increase (P=.002) between the first and last surveys (meanDiff –0.28, 95% CI –0.46 to –0.09) (t0: mean 4.3, SD 0.4; t1: mean 4.4, SD 0.6; t2: mean 4.6, SD 0.3) (Figure 2).

Technical proficiency initially decreased before rising again at the last measurement point (t0: mean 4.5, SD 0.4; t1: mean 4.2, SD 0.3; t2: mean 4.6, SD 0.4). None of the variables were normally distributed. Because of violations of sphericity, the Greenhouse-Geisser adjustment was used to correct the repeated measures ANOVA, which showed significant differences (F(1.83,75.05)=31.7, P<.001; partial η²=0.44); ω²=0.42 and ε²=0.42 showed a large effect. Bonferroni-adjusted post hoc analysis revealed significant differences between all measurement points: a significant decrease (P<.001) between the first and second surveys (meanDiff 0.25, 95% CI 0.14 to 0.36), a significant increase (P<.001) between the second and third surveys (meanDiff –0.41, 95% CI –0.54 to –0.28), and an overall significant increase (P=.022) between the first and third surveys (meanDiff –0.17, 95% CI –0.31 to –0.02).

Ethical proficiency increased over the entire training period (t0: mean 4.6, SD 0.5; t1: mean 4.8, SD 0.6; t2: mean 4.9, SD 0.2), with a statistically significant difference between measurements, as assessed by repeated measures ANOVA with the Greenhouse-Geisser correction (Mauchly test P=.003; F(1.68,94.3)=6.54, P=.004; partial η²=0.11); ω²=0.09 and ε²=0.09 could be classified as a medium effect. Bonferroni-adjusted post hoc analysis showed a significant increase (P<.001) from the first to the last measurement point (meanDiff –0.30, 95% CI –0.46 to –0.14).

Figure 2. Means, SDs, and post hoc results for the 4 PHIC4PHC (Public Health Informatics Competencies for Primary Health Care) domains. *P<.05, **P<.01.

The change in health information literacy was significant, as assessed by repeated measures ANOVA (F(2,102)=5.71, P=.004; partial η²=0.1); ω²=0.08 and ε²=0.08 show a medium effect. Sphericity could be assumed (P=.361). Post hoc analysis with the Bonferroni correction revealed a significant increase (P=.002) from the first (t0: mean 4.3, SD 0.5) to the last (t2: mean 4.6, SD 0.5) measurement point (meanDiff –0.29, 95% CI –0.49 to –0.10).

The confidence level for technology usage increased from one measurement point to the next (t0: mean 3.6, SD 0.8; t1: mean 3.7, SD 0.6; t2: mean 4.1, SD 0.6). None of the variables were normally distributed, as assessed by the Shapiro-Wilk test, but since the sample size was sufficiently large (n>30) and the repeated measures ANOVA is robust, no further action was needed [29]. The Greenhouse-Geisser adjustment was used to correct for violations of sphericity following a significant Mauchly test (P=.002) [30]. The repeated measures ANOVA with the Greenhouse-Geisser correction determined that the confidence level differed statistically significantly between measurements (F(1.6,71.72)=22.59, P<.001; partial η²=0.33); ω²=0.32 and ε²=0.32 could be classified as a large effect.

Bonferroni-adjusted post hoc analysis revealed significantly (P<.001) higher confidence scores between the first and last measurement points (meanDiff –0.48, 95% CI –0.32 to 0.03) and between the second and last measurement points (meanDiff –0.34, 95% CI –0.48 to –0.20).

The attitude toward (health) technology was assessed using TAM and UTAUT items. There was no statistically significant difference between the measurement points of the TAM (F(1.65,70.91)=1.89, P=.166), as assessed by repeated measures ANOVA with the Greenhouse-Geisser correction (Mauchly test P=.007). The mean TAM score was already high in the first survey (t0: mean 6, SD 0.7) and remained at a similar level for the second (t1: mean 6.2, SD 0.7) and third (t2: mean 6.3, SD 0.8) surveys.

For the UTAUT scores, normal distribution (P(t0)=.174; P(t1)=.200; P(t2)=.200) and sphericity (P=.080) could be assumed. The repeated measures ANOVA showed no significant differences (F(1.65,70.91)=1.89, P=.166). Similar to the TAM score, the UTAUT score remained high over the entire data collection period (t0: mean 6, SD 0.3; t1: mean 6.1, SD 0.7; t2: mean 6.2, SD 0.4).


Discussion

Principal Results

The study results showed that the training was both successful and satisfactory for the participants of the certificate course "Certification of Digitalization Officers in Medical Practices and Psychotherapeutic Practices (Digi-Manager)." Although the Digi-Managers already started with a high level of self-evaluated digital literacy, it increased significantly from the beginning to the end of the training program. Hypothesis H1 was confirmed: after the training course, the Digi-Managers had significantly higher values in cognitive proficiency, ethical proficiency, and health information literacy, with a medium effect; the significant increase in technical proficiency even had a large effect. Hypothesis H2 was also confirmed: attending the Digi-Manager training course significantly improved participants' general confidence level for technology usage, with a large effect.

The attitude toward digital (health) applications remained stable at a high positive level over the entire course duration. Hypothesis H3 was therefore rejected, because the attitude toward digital (health) applications did not improve during or after the Digi-Manager training course. We assume that this is because the practices had to actively apply for the course. Although the participants were ultimately selected by lot, it is likely that practices that were already interested in digital topics applied more often and sent their most digitally savvy colleague. The very positive perception of the training, both for the knowledge and practical modules, also remained stable throughout the course.

Age had no effect on most of the variables. The only variable negatively influenced by older age was self-confidence in use, and this effect was stable across all 3 measurement points. Similar results of lower self-confidence when dealing with digital topics in older adults were found in other studies [20]. Our survey therefore reflects the state of research that there are hardly any age effects in the use of digital tools. Gender effects could not be examined because of the vast majority of female-identifying participants. However, the low participation rate of men reflects the real gender proportion in this occupational field, which was only 2% in Germany in 2023, as reported by the federal employment agency (Bundesagentur für Arbeit).

To our knowledge, this is the first scientifically evaluated training program teaching general digital literacy to medical assistants. Multiple review papers show that training courses for health care staff mostly teach specific technologies such as electronic medical records or telehealth [31,32], take place for the most part in an academic context [33], or are intended for physicians [34]. Many authors criticize the lack of sufficient training [34-36].

A competence measurement study in Finland [37], a country that is more digitally advanced, showed that further education for health professionals is needed not only in Germany but all over the world. The level of digital competence among health care professionals also varies in other countries. In particular, human-centered remote counseling competence was identified as the category with the weakest score. Health care professionals' knowledge of ethical, legal, and regulatory requirements, as well as privacy and security issues regarding digital tools, was named as a mandatory subject matter in training. In the study by Jarva et al [37], higher age was associated with a lower evaluation of digital solutions as part of work and a decrease in self-evaluated competence. This was not confirmed in our study.

Limitations

One limitation of the results is that, despite repeated reminders both digitally and in person, there was a high level of nonparticipation in the last survey: only 64% (64/100) completed it. It cannot be ruled out that this slightly distorts the results, as committed participants who got a lot out of the training were more likely to participate until the end than those who perceived the training as less enriching. In future training courses, an attempt will be made to combine the evaluation with the last content-related work in the course in order to increase participation in the final evaluation.

As a further limitation on the part of the participants, it should be considered that, as briefly mentioned in the discussion of H3, a self-selection bias could exist through the application process and registration by the practice owner. Participants were either self-motivated to take part or were perceived by their practice owner as the "most suitable" and therefore probably the person in the practice most interested in digital topics.

Furthermore, it must be questioned whether the self-assessment questionnaires were really suitable for measuring competence. According to the results, competence in the domain of technical proficiency decreased during the training course, which appears illogical. Since the questionnaire measures how capable the participants consider themselves, this value can decrease as they learn what they do not yet know. Self-perception does not always match actual performance [38]. This is further backed up by another study questioning the suitability of self-assessment scales for measuring competence: "Perceived skills [...] do not predict actual performance," as van der Vaart et al [39] stated when comparing self-assessment scales that aim to measure participants' eHealth literacy with their actual performance in different skill tests. In that study, correlations between the eHEALS (eHealth Literacy Scale) measure and successfully completed tasks were nonsignificant and weak, and no differences in eHEALS scores were found between participants who scored below and above the median in the performance tests.

Jarva et al [37] supported this thesis by deliberately using the term "self-evaluated competence" in their study. The reliability and validity of estimating one's own competence have already been questioned by many authors [40-42]. Despite this, self-assessment of specific skills and knowledge is, to date, the most commonly used form of measuring digital competence [21]. The question arises as to whether the questionnaires—tested for quality characteristics such as reliability—are in fact measuring a different concept than competence. Ulfert-Blank and Schmidt [43] suggested digital self-efficacy as a possible characteristic measured by these instruments, defined as "an individual's perception of efficacy in performing tasks related to the use of digital system." Bancroft et al [13] proposed that self-assessment of competence measures a mixture of competence and confidence; the two concepts are closely related and sometimes conflated but can also be out of alignment, and a lack of confidence can hold back people who are per se competent. New paths must be found to measure the actual digital competence of health care professionals.

Future Research Directions

As mentioned above, new alternative ways to measure digital literacy and competence beyond self-assessment scales must be found. One promising approach could be the use of performance measures, which have already been used to measure other concepts, such as eHealth literacy [39] or data literacy [44,45].

It is noticeable that all participants of the training course started with a very positive attitude. This is probably because there was an active application process for the training, and people who were already interested in digital topics were more likely to want to be trained as Digi-Managers. It would be interesting to see whether further training would improve the attitudes of people who are not yet so positively disposed.

Learnings for Future Courses

The positive learning outcomes and the satisfaction of the participants show the relevance of continuing the course. The training program will be carried on with slight changes following the feedback of participants, instructors, and organizers. Future courses will be shorter to enable smaller practices to participate with fewer lost working hours and will include more e-learning and web-based sessions to reduce travel; this should also improve the day and time selection options. Course sizes are to be reduced to enable more active involvement of participants and, above all, to further support practical networking and the exchange of experience. A streamlined concept with consistent quality should focus on the unique selling points of the Digi-Manager training: the digital laboratory and the maturity model. The additional focus on soft skills (project management, communication, and conflict management) is helpful for the effective transfer and realization of digital projects in practice.

Conclusions

The Digi-Manager program was a successful and long-needed training program for health care professionals in the German region of Westfalen-Lippe. More training programs and courses for health care professionals are needed not only in Germany but all over the world. The mixture of theoretical knowledge transfer and practical application related to one's own everyday work, through soft skill training, the maturity model, and the digitalization strategy, results in a unique and effective further education concept.

Acknowledgments

This research was funded by the German Federal Ministry of Health (BMG, Bundesministerium für Gesundheit), grant ZMI5-2523FEP30B.

Authors' Contributions

All authors contributed to the conceptualization, funding acquisition, manuscript writing, and revision. SM provided overall supervision, and TG oversaw project administration. AM and SM were responsible for designing the methodology and conducting data analysis.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Equivalent of the ÄKWL certificate course and content of the different modules. ÄKWL: Ärztekammer Westfalen-Lippe (medical association of the Westphalia-Lippe region).

PDF File, 148 KB

Multimedia Appendix 2

Questionnaires of the web-based surveys.

PDF File, 581 KB

Multimedia Appendix 3

Inductively formed categories from free-text answers regarding the course modules.

PDF File, 176 KB

Multimedia Appendix 4

Correlation coefficients and significance values.

PDF File, 221 KB

Checklist 1

Checklist items of GREET (Guideline for Reporting Evidence-Based Practice Educational Interventions and Teaching).

PDF File, 309 KB

  1. Nazeha N, Pavagadhi D, Kyaw BM, Car J, Jimenez G, Tudor Car L. A digitally competent health workforce: scoping review of educational frameworks. J Med Internet Res. Nov 5, 2020;22(11):e22706. [CrossRef] [Medline]
  2. Yeung AWK, Torkamani A, Butte AJ, et al. The promise of digital healthcare technologies. Front Public Health. 2023;11:1196596. [CrossRef] [Medline]
  3. Stachwitz P, Debatin JF. Digitalisierung im Gesundheitswesen: heute und in Zukunft. Bundesgesundheitsbl. Feb 2023;66(2):105-113. [CrossRef]
  4. Digitalisierung und Datennutzung für Gesundheitsforschung und Versorgung – Positionen und Empfehlungen [Article in German]. German Science and Humanities Council. Jul 2022. [CrossRef]
  5. Weik L, Fehring L, Mortsiefer A, Meister S. Big 5 personality traits and individual- and practice-related characteristics as influencing factors of digital maturity in general practices: quantitative web-based survey study. J Med Internet Res. Jan 22, 2024;26:e52085. [CrossRef] [Medline]
  6. Weik L, Fehring L, Mortsiefer A, Meister S. Understanding inherent influencing factors to digital health adoption in general practices through a mixed-methods analysis. NPJ Digit Med. Feb 27, 2024;7(1):47. [CrossRef] [Medline]
  7. Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol. Jan 2000;55(1):68-78. [CrossRef] [Medline]
  8. Vitello S, Greatorex J, Shaw S. What is competence? A shared interpretation of competence to support teaching, learning and assessment. Cambridge University Press & Assessment. Dec 20, 2021. URL: https:/​/www.​cambridgeassessment.org.uk/​Images/​645254-what-is-competence-a-shared-interpretation-of-competence-to-support-teaching-learning-and-assessment.​pdf [Accessed 2025-07-30]
  9. De Leeuw JA, Woltjer H, Kool RB. Identification of factors influencing the adoption of health information technology by nurses who are digitally lagging: in-depth interview study. J Med Internet Res. Aug 14, 2020;22(8):e15630. [CrossRef] [Medline]
  10. Mannevaara P, Kinnunen UM, Egbert N, et al. Discovering the importance of health informatics education competencies in healthcare practice: a focus group interview. Int J Med Inform. Jul 2024;187:105463. [CrossRef] [Medline]
  11. Hübner U, Thyea J, Shaw T, et al. Towards the TIGER international framework for recommendations of core competencies in health informatics 2.0: extending the scope and the roles. In: Studies in Health Technology and Informatics, Volume 264: MEDINFO 2019: Health and Wellbeing e-Networks for All. IOS Press; 2019:1218-1222. [CrossRef]
  12. Medizinische/r Fachangestellte/r – Ausbildungsberuf [Article in German]. Bundesagentur für Arbeit. 2024. URL: https://web.arbeitsagentur.de/berufenet/beruf/33212 [Accessed 2024-10-19]
  13. Bancroft R, Challen R, Pearce R. Searching for a shared understanding of digital confidence in a tertiary context: a scoping review. J Learn Dev High Educ. 2024;(30). [CrossRef]
  14. Levine T, Donitsa-Schmidt S. Computer use, confidence, attitudes, and knowledge: a causal analysis. Comput Human Behav. Jan 1998;14(1):125-146. [CrossRef]
  15. Neunaber T, Meister S. Digital maturity and its measurement of general practitioners: a scoping review. Int J Environ Res Public Health. Feb 28, 2023;20(5):4377. [CrossRef] [Medline]
  16. Phillips AC, Lewis LK, McEvoy MP, et al. Development and validation of the guideline for reporting evidence-based practice educational interventions and teaching (GREET). BMC Med Educ. Sep 6, 2016;16:237. [CrossRef] [Medline]
  17. van Volkom M, Stapley JC, Amaturo V. Revisiting the digital divide: generational differences in technology use in everyday life. N Am J Psychol. 2014;16(3):557-574. URL: https://psycnet.apa.org/record/2014-54069-011 [Accessed 2025-07-30]
  18. Acilar A, Sæbø Ø. Towards understanding the gender digital divide: a systematic literature review. Glob Knowl Mem Commun. 2023;72(3):233-249. [CrossRef]
  19. Siddiq F, Scherer R. Is there a gender gap? A meta-analysis of the gender differences in students’ ICT literacy. Educ Res Rev. Jun 2019;27:205-217. [CrossRef]
  20. Helsper EJ, Eynon R. Digital natives: where is the evidence? Br Educ Res J. Jun 2010;36(3):503-520. [CrossRef]
  21. Mainz A, Nitsche J, Weirauch V, Meister S. Measuring the digital competence of health professionals: scoping review. JMIR Med Educ. Mar 29, 2024;10:e55737. [CrossRef] [Medline]
  22. Rachmani E, Hsu CY, Chang PW, et al. Development and validation of an instrument for measuring competencies on public health informatics of primary health care worker (PHIC4PHC) in Indonesia. Prim Health Care Res Dev. Jul 6, 2020;21:e22. [CrossRef] [Medline]
  23. Kuek A, Hakkennes S. Healthcare staff digital literacy levels and their attitudes towards information systems. Health Inform J. Mar 2020;26(1):592-612. [CrossRef] [Medline]
  24. Lewis JR. The system usability scale: past, present, and future. Int J Hum Comput Interact. Jul 3, 2018;34(7):577-590. [CrossRef]
  25. Razali NM, Wah YB. Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. J Stat Model Analyt. 2011;2(1):21-33. URL: https://www.nrc.gov/docs/ml1714/ml17143a100.pdf [Accessed 2025-08-19]
  26. Cohen J. Statistical Power Analysis for the Behavioral Sciences. Routledge; 1988. [CrossRef] ISBN: 9780203771587
  27. Okada K. Is omega squared less biased? A comparison of three major effect size indices in one-way ANOVA. Behaviormetrika. Jul 2013;40(2):129-147. [CrossRef]
  28. Sauro J, Lewis JR. Quantifying the User Experience: Practical Statistics for User Research. Morgan Kaufmann; 2016. ISBN: 978-0-12-802308-2
  29. Blanca MJ, Alarcón R, Arnau J, Bono R, Bendayan R. Non-normal data: is ANOVA still a valid option? Psicothema. Nov 2017;29(4):552-557. [CrossRef] [Medline]
  30. Girden ER. ANOVA: Repeated Measures. SAGE Publications; 1992. ISBN: 9780803942578
  31. Samadbeik M, Fatehi F, Braunstein M, et al. Education and training on electronic medical records (EMRs) for health care professionals and students: a scoping review. Int J Med Inform. Oct 2020;142:104238. [CrossRef] [Medline]
  32. Edirippulige S, Armfield NR. Education and training to support the use of clinical telehealth: a review of the literature. J Telemed Telecare. Feb 2017;23(2):273-282. [CrossRef] [Medline]
  33. Fernández-Luque AM, Ramírez-Montoya MS, Cordón-García JA. Training in digital competencies for health professionals: systematic mapping (2015-2019). Inf Prof. 2021;30(2). [CrossRef]
  34. Jimenez G, Spinazze P, Matchar D, et al. Digital health competencies for primary healthcare professionals: a scoping review. Int J Med Inform. Nov 2020;143:104260. [CrossRef] [Medline]
  35. Konttila J, Siira H, Kyngäs H, et al. Healthcare professionals’ competence in digitalisation: a systematic review. J Clin Nurs. Mar 2019;28(5-6):745-761. [CrossRef] [Medline]
  36. Kinnunen UM, Heponiemi T, Rajalahti E, Ahonen O, Korhonen T, Hyppönen H. Factors related to health informatics competencies for nurses: results of a national electronic health record survey. Comput Inform Nurs. Aug 2019;37(8):420-429. [CrossRef] [Medline]
  37. Jarva E, Oikarinen A, Andersson J, Pramila-Savukoski S, Hammarén M, Mikkonen K. Healthcare professionals’ digital health competence profiles and associated factors: a cross-sectional study. J Adv Nurs. Aug 2024;80(8):3236-3252. [CrossRef] [Medline]
  38. Zell E, Krizan Z. Do people have insight into their abilities? A meta-synthesis. Perspect Psychol Sci. Mar 2014;9(2):111-125. [CrossRef] [Medline]
  39. van der Vaart R, van Deursen AJ, Drossaert CH, Taal E, van Dijk JA, van de Laar MA. Does the eHealth Literacy Scale (eHEALS) measure what it intends to measure? Validation of a Dutch version of the eHEALS in two adult populations. J Med Internet Res. Nov 9, 2011;13(4):e86. [CrossRef] [Medline]
  40. van Deursen AJAM, van Dijk JAGM. Measuring internet skills. Int J Hum Comput Interact. Sep 17, 2010;26(10):891-916. [CrossRef]
  41. Merritt K, Smith KD, Di Renzo JC. An investigation of self-reported computer literacy: is it reliable? Issues Inf Syst. 2005;6(1):289-295. [CrossRef]
  42. Hargittai E. Survey measures of web-oriented digital literacy. Soc Sci Comput Rev. Aug 2005;23(3):371-379. [CrossRef]
  43. Ulfert-Blank AS, Schmidt I. Assessing digital self-efficacy: review and scale development. Comput Educ. Dec 2022;191:104626. [CrossRef]
  44. Larasati PE, Yunanta DRA. Validity and reliability estimation of assessment ability instrument for data literacy on high school physics material. J Phys Conf Ser. Jan 1, 2020;1440:012020. [CrossRef]
  45. Pratama MA, Lestari DP, Sari WK, Putri TSY, Adiatmah VAK. Data literacy assessment instrument for preparing 21 Cs literacy: preliminary study. J Phys Conf Ser. Jan 1, 2020;1440:012085. [CrossRef]


ÄKWL: Ärztekammer Westfalen-Lippe (medical association of the Westphalia-Lippe region)
eHEALS: eHealth Literacy Scale
GREET: Guideline for Reporting Evidence-Based Practice Educational Interventions and Teaching
KVWL: Kassenärztliche Vereinigung Westfalen-Lippe (association of statutory health insurance physicians of the Westphalia-Lippe region)
mHealth: mobile health
PHIC4PHC: Public Health Informatics Competencies for Primary Health Care
SUS: system usability scale
TAM: technology acceptance model
UTAUT: unified theory of acceptance and use of technology


Edited by David Chartash; submitted 03.01.25; peer-reviewed by Amirabbas Azizi, Julia Busch-Casler, May Chomali; final revised version received 30.06.25; accepted 02.07.25; published 29.08.25.

Copyright

© Anne Mainz, Timo Neunaber, Paula Cara D'Agnese, Alexander Eid, Tanja Galla, Christoph Ellers, Sven Meister. Originally published in JMIR Medical Education (https://mededu.jmir.org), 29.8.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included.