Developing effective learning strategies to strengthen mental health professionals’ capacities and deliver evidence-based interventions in their communities is urgent (Merzel, 2023). Mental, neurological, and substance use (MNS) disorders represent a significant burden of disability (from 19% to 34% in the Region of the Americas) for individuals and communities globally (Pan American Health Organization, 2018; World Health Organization, 2021). Despite the high prevalence of MNS disorders, it has been estimated that between 75% and 95% of people affected by these disorders do not have access to appropriate treatment (Pan American Health Organization, 2018), owing in part to the minimal availability of trained health professionals. Notably, it is estimated that there are only 13 mental health workers for every 100,000 inhabitants worldwide (World Health Organization, 2021).
To tackle this challenging scenario, global efforts have been made in recent years to train health professionals. The World Health Organization (2016) presented an updated version of the Intervention Guide for Mental, Neurological and Substance Use Disorders in Non-specialized Health Settings (mhGAP) 2.0, whose main objective is to teach the assessment, management, and follow-up of priority MNS conditions through protocols for clinical decision-making. The application of the mhGAP Intervention Guide has been flexible according to regional needs and resources. For example, some studies have reported training professionals with the complete eight-module guide (Kokota et al., 2020), while others have prioritized depression, psychoses, substance use, epilepsy, and suicide, adapting the learning strategy to the needs of their social context (Siriwardhana et al., 2016).
The mhGAP Intervention Guide has been used to train, from a multidisciplinary perspective, social workers, nursing professionals (Iheanacho et al., 2014), medical doctors (Robles et al., 2019), medical assistants, students, research assistants, and educators (Lasisi et al., 2017). In a recent review (Keynejad et al., 2018), most of the training programs described across 33 studies lasted two to three days in face-to-face formats, and the strategies used mainly included verbal instruction, video tutorials, role-playing (Blanco-Vieira et al., 2017), seminars and discussions, and pre-post evaluations of knowledge and attitudes (Kokota et al., 2020).
Notably, due to the social context brought about by the COVID-19 pandemic, several activities, such as education programs, clinical interventions, and staff training, have been forced to move to online modalities. Moreover, these circumstances have accelerated technological development to extend the regional scope of training, education, and evaluation through online and distance interactions without losing reliable measures of both knowledge and practical implementation (Walker et al., 2021). Considering the urgent need for effective training programs that help health professionals acquire competences in primary care settings and reduce the gap in mental health care (Keynejad et al., 2021; Raj et al., 2021), the present work describes the development and effects of a massive online training program for the mhGAP Intervention Guide, focused on the acquisition of knowledge and skills in assessment, management, and follow-up procedures by non-specialized health professionals.
Methods
Participants
A total of 975 participants, all residents of Mexico, were recruited as a convenience sample through institutional invitations, as part of the National Training Program in Primary Care for Mental Health and Addiction (Programa Nacional de Capacitación en Salud Mental y Adicciones para Primer Nivel de Atención). Inclusion criteria for data analysis required that participants 1) completed the pre-post evaluation, 2) finished all the modules of the online course, and 3) indicated their sex, profession, and geographic region of residence. Participants had an average age of 38.43 years (SD = 9.72); 78.15% were female and 21.84% were male; and 51.79% were psychologists, 1.12% psychiatrists, 8.82% nurses, 17.53% physicians, 12.82% social workers, 1.12% students, 0.30% teachers, and 6.35% other professionals.
Instruments
Knowledge Screening. We used a 24-item test from the mhGAP Training of Trainers and Supervisors (ToTS) Training Manual (World Health Organization, 2017) to screen knowledge about assessment, management, and follow-up procedures. The items were presented in a multiple-choice format and covered the main MNS conditions: Depression (3), Psychoses (3), Epilepsy (3), Child and Adolescent Mental and Behavioral Disorders (5), Dementia (4), Disorders due to Substance Use (2), Suicide (1), Other Significant Mental Health Complaints (1), and Essential Care and Practice (2). Items were formulated to assess effective communication skills, clusters of symptoms, recommended treatments for the main MNS conditions, emergency case management, and follow-up.
Learning Activities. We developed a set of 11 interactive learning activities to reinforce information from each module of the mhGAP Guide: two learning activities for Essential Care and Practice, and one learning activity for each of the main conditions, namely Depression, Psychoses, Epilepsy, Child and Adolescent Mental and Behavioral Disorders, Dementia, Disorders due to Substance Use, Self-Harm/Suicide, Stress, and Other Significant Mental Health Complaints. The relevance and clarity of the content were validated by four expert clinicians.
For the Essential Care and Practice module, each of the two interactive learning activities consisted of a five-item true/false questionnaire. Activity 1 required participants to watch a four-minute video of a clinical care session in which a physician performed essential care practices; after the video, participants rated each statement as true or false. Activity 2 comprised five true/false statements about the same video, aimed at assessing communication skills. For the other nine modules, the interactive learning activity was a questionnaire of eight to 13 multiple-choice items about the characteristics of each disorder and its management. For these nine modules, the items evaluated the identification of symptoms related to each disorder, of the recommended psychoeducational messages for persons with those disorders, and of the main interventions and follow-up for each disorder.
Programmed-Simulated Cases. We designed an innovative online evaluation strategy based on branching scenarios to assess participants’ decision-making during simulated interactions with persons with depression and substance use disorder. This strategy, implemented in Moodle®, simulated clinical interactions through short video clips. Each case was displayed as follows: the first screen presented a text vignette with general information about the person, including age, sex, educational level, occupation, marital status, and the main reason for consultation. A video then showed a person describing their situation, simulating a clinical care session. After this video, the participant was asked to choose among three forced-choice options presented as text on the screen, and the next video shown depended on the participant’s choice. After each video, textual feedback rated the response as “correct,” “partially correct,” or “incorrect,” with a detailed explanation of what was incorrect as well as the expected response. We designed a total of ten short clips for each Programmed-Simulated Case, following a logical sequence based on participants’ responses.
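To illustrate the branching logic, the following is a minimal sketch in R of how such a scenario can be represented as a set of nodes, each with a video, three forced-choice options, scored feedback, and a pointer to the next clip. The node contents, file names, and scoring values are hypothetical placeholders; the actual cases were built with Moodle®’s branching tools.

```r
# Minimal sketch of a branching-scenario structure (illustrative only; node
# contents, file names, and scores are hypothetical placeholders).
node <- function(video, prompt, options) list(video = video, prompt = prompt, options = options)
choice <- function(label, feedback, score, next_node)
  list(label = label, feedback = feedback, score = score, next_node = next_node)

depression_case <- list(
  n1 = node(
    video   = "clip_01.mp4",
    prompt  = "How do you begin the assessment?",
    options = list(
      choice("Explore current mood and daily functioning", "correct", 1, "n2"),
      choice("Ask only about physical symptoms", "partially correct", 0.5, "n2"),
      choice("Recommend medication immediately", "incorrect", 0, "n2")
    )
  )
  # ...remaining nodes, up to the ten clips designed per case
)

# Walk through the scenario for a given sequence of choices and sum the score
run_case <- function(case, picks, start = "n1") {
  total <- 0
  current <- start
  for (p in picks) {
    opt <- case[[current]]$options[[p]]
    cat(case[[current]]$prompt, "->", opt$feedback, "\n")
    total <- total + opt$score
    current <- opt$next_node
    if (is.null(case[[current]])) break
  }
  total
}

run_case(depression_case, picks = c(1))  # prints the feedback and returns 1
```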
The cases were designed by four expert clinicians, and the questions and possible answers were based on a review of typical cases of depression and alcohol abuse (Félix Romero et al., 2021). The cases were thus close to real primary care cases, providing ecological validity in the design of the interactions in the branching scenario. The items for both cases, including dialogue, questions, and forced choices, were validated by ten experts: three psychiatrists and seven psychologists, all with graduate degrees and an average of 9.1 (SD = 4.2) years of experience in mental health care. They assessed the clarity and relevance of the items and the adequacy and coherence of the response options, showing high interobserver agreement: 0.96 for the depression case and 0.92 for the substance use case.
We created this evaluation set to identify not only general knowledge acquisition regarding effective communication skills, clusters of symptoms, recommended treatments for the main MNS conditions, and emergency case management, but also participants’ specific skills in interacting with persons with MNS conditions and making decisions about the assessment, management, and follow-up components of the mhGAP Intervention Guide.
Procedure
Participants were invited by their institutions as part of the National Training Program in Primary Care for Mental Health and Addiction, which aims to reduce the gap in mental health care by training non-specialized staff who are the first contact with the health system and have the possibility of managing MNS conditions. Participants who accepted the invitation were offered a certificate attesting to the hours completed, the topic of the training, and the training institutions.
Participants signed up for the course using Google Forms®, which asked for general information about their professional profile, institution, and contact information. Registration also included a written informed consent that explained the training objective, its duration, procedures, methodology, and information about the confidentiality and management of personal data, emphasizing the voluntary nature of participation, ensuring that quitting would have no repercussions on their jobs, and highlighting the benefits for their professional practice and skills. Once they completed the registration forms, they were enrolled in an online platform based on Moodle® for the training in the mhGAP Intervention Guide.
Each participant received a welcome letter with a personal ID and password, along with details and instructions for accessing and using the platform. The welcome letter indicated the self-managed nature of the course, the period during which the course would be available, and a suggested schedule to organize study time optimally. This letter also specified the requirements for obtaining the course certificate: completing all the Learning Activities (including the initial and final evaluations) and obtaining a grade of eight or higher on a 0-10 scale.
At the beginning of the online course, participants completed a pre-evaluation consisting of the Knowledge Screening and the two Programmed-Simulated Case activities, which evaluated their baseline knowledge and skills in performing the mhGAP procedures.
Once participants completed the pre-evaluation, they moved on to the ten self-managed modules that included the 11 Learning Activities mentioned above. The modules were organized as follows: 1. Essential Care and Practice, 2. Depression, 3. Psychoses, 4. Epilepsy, 5. Child and Adolescent Mental and Behavioral Disorders, 6. Dementia, 7. Disorders due to Substance Use, 8. Self-Harm/Suicide, 9. Stress, and 10. Other Significant Mental Health Complaints. These ten modules shared the same design: each topic was developed through a detailed description of the main concepts and procedures in the assessment, management, and follow-up of each MNS condition. This information was presented using text, images, and supporting modeling videos. Additional specialized documents were also available on the platform to extend the information on each topic. At the end of each module, participants were required to complete one of the learning activities mentioned above, such as the true/false questionnaires, to evaluate their knowledge of the reviewed MNS condition.
The mhGAP Intervention Guide online course was flexible, allowing participants to complete all the modules or skip some of them, although they were encouraged to finish the course and received a certificate only upon completing it in its entirety. The decision to skip modules was individual and related to participants’ profiles; for example, those in the medical field might be more experienced with some disorders than nursing or social work staff. Additionally, because the course was part of their work activities, the time available to complete it could be limited, leading participants to prioritize some modules based on their perceived usefulness for professional practice.
At the end of the ten learning modules, participants completed the final evaluation, which comprised a set of three tests: a post-test of the Knowledge Screening and the two Programmed-Simulated Cases. We awarded the certificate to those who completed the Learning Activities and the pre-post evaluation; those who did not finish the course were offered registration in a future edition.
Results
The main objective of this study was to determine the effect of the Mental Health Gap Action Programme Intervention Guide online course on participants’ knowledge and skills in the assessment, management, and follow-up of major mental, neurological, and substance use disorders. We divided the data analysis into two parts: 1) an evaluation of participants’ levels of knowledge and skills before and after the training, and 2) an analysis of the effectiveness of the Programmed-Simulated Cases as a strategy to improve participants’ skills, employing cluster analysis with machine learning and a logistic regression model.
Pre-Post Performance
The statistical analysis was performed with R 3.6.3 base functions (R Team, 2016) and the specialized CRAN packages lme4 (Bates et al., 2015) and ggeffects (Lüdecke, 2018). We explored whether participants increased their knowledge and skills by comparing their scores before and after the mhGAP Guide online course on three variables corresponding to the Knowledge Screening, the Programmed-Simulated Case of Depression, and the Programmed-Simulated Case of Substance Use Disorder.
To determine the degree of improvement in the Programmed-Simulated Cases, two delta (Δ) variables were calculated for each participant, one per case, equal to the post score minus the pre score. The deltas were min-max rescaled to scores ranging from 0 to 1, where a score of 1 corresponded to the maximum performance improvement among participants and 0 to the minimum.
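A minimal sketch of this computation in R is shown below; the data frame, its values, and the column names are hypothetical placeholders rather than the study’s actual variables.

```r
# Delta (post - pre) scores and min-max rescaling to [0, 1].
# The data frame and its values are illustrative only.
rescale01 <- function(x) (x - min(x)) / (max(x) - min(x))

scores <- data.frame(
  pre_depression  = c(0.70, 0.80, 0.60),
  post_depression = c(0.85, 0.90, 0.95),
  pre_substance   = c(0.88, 0.92, 0.80),
  post_substance  = c(0.95, 0.93, 0.96)
)

scores$delta_depression <- rescale01(scores$post_depression - scores$pre_depression)
scores$delta_substance  <- rescale01(scores$post_substance  - scores$pre_substance)
```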
Because normal distributions could not be assumed for the pre and post scores due to a highly skewed pattern, we used a nonparametric Wilcoxon test to explore these differences (Figure 1). In all cases, the post scores were higher than the pre scores, indicating improvement in participants’ knowledge and skills after completing the online course. The post Knowledge Screening score (M = 0.95, SD = 0.06) was significantly higher (W = 475800, p < 0.001) than the pre score (M = 0.36, SD = 0.09). Comparably, the post score for the Programmed-Simulated Case of depression (M = 0.87, SD = 0.14) was significantly higher (W = 49000, p < 0.001) than the pre score (M = 0.76, SD = 0.16), and the post score for the case of substance use disorder (M = 0.92, SD = 0.011) was also significantly higher (W = 67846.5, p < 0.001) than the pre score (M = 0.89, SD = 0.12).
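A minimal sketch of such a comparison in R follows, with toy vectors standing in for participants’ scores; treating the test as a paired signed-rank comparison is an assumption about the original analysis.

```r
# Nonparametric pre-post comparison (toy data; paired design assumed).
pre  <- c(0.36, 0.30, 0.42, 0.38, 0.33, 0.41)
post <- c(0.95, 0.90, 0.97, 0.92, 0.96, 0.93)
wilcox.test(post, pre, paired = TRUE, alternative = "greater")
```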
Cluster Analysis of Performance on Programmed-Simulated Cases
Given that our results suggested that the Programmed-Simulated Cases were an effective tool for training professionals in the assessment, management, and follow-up of depression and substance use disorder, we further explored how this strategy helped improve participants’ skills. We divided participants into two groups: those with greater improvements in the Programmed-Simulated Cases and those with lesser improvements. To do this, we created clusters of participants determined by their compound delta (Δ) scores from both cases. These compound Δ scores quantified the magnitude of improvement.
For this analysis, we first treated the Δ values from the Programmed-Simulated Cases as the Cartesian axes of a Euclidean space. Then, we used an unsupervised machine learning algorithm (k-means) to cluster participants according to their proximity to two major Gaussian grouping centers (i.e., centroids) of improvement (high and low). K-means randomly places as many centroids as needed within a multidimensional space. The fit of the centroids is then improved iteratively so that the distance between each centroid and its surrounding data points (in this case, the compound Δ scores) is minimized. This allowed us to draw a decision boundary showing which centroid group any new data point would belong to. To best identify the locations of the Gaussian centroids, we first assumed the existence of two improvement clusters: 1) greater improvement and 2) lesser improvement. We then ran 50 simulations with randomly located centroids. In each simulation, the cluster centers adjusted their locations to fit all our data. The simulation that best fit the actual data was used to identify participants as higher or lower performers (Figure 2).
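A minimal sketch of this clustering step in R uses the kmeans() function with 50 random initializations (nstart = 50); the simulated delta values and the labeling rule below are illustrative assumptions, not the study’s data.

```r
# k-means clustering of the two rescaled delta scores into two groups.
# Delta values are simulated placeholders.
set.seed(1)
deltas <- data.frame(
  delta_depression = runif(975, 0, 1),
  delta_substance  = runif(975, 0, 1)
)

fit <- kmeans(deltas, centers = 2, nstart = 50)  # 50 random centroid starts

# Label the cluster whose centroid shows the larger mean improvement as "greater"
high_cluster <- which.max(rowMeans(fit$centers))
deltas$performance <- ifelse(fit$cluster == high_cluster, "greater", "lesser")
table(deltas$performance)
```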
The cluster identity was then used as a dependent variable in logistic regression models to determine whether participants’ sex, profession, institutional affiliation, or social vulnerability rating were predictors of their high or low performance (Figure 3).
A social vulnerability rating was calculated to explore the effect of socioeconomic inequality on the effectiveness of the mhGAP Guide online course. This variable was determined from participants’ self-reported residential postal codes. Based on this information, we assigned each participant a continuous social vulnerability rating, as defined by the Mexican National Institute of Statistics and Geography (INEGI) for each region of the country. This rating captures characteristics such as economic income, experienced discrimination, geographic segregation, and access to essential services.
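The assignment of ratings can be sketched in R as a simple join of participants’ postal codes with a region-level index table; the tables, postal codes, and rating values below are hypothetical placeholders (the study used the INEGI-defined ratings).

```r
# Assign a social vulnerability rating by postal code (toy tables).
participants  <- data.frame(id = 1:3, postal_code = c("06700", "97000", "44100"))
vulnerability <- data.frame(postal_code = c("06700", "97000", "44100"),
                            rating = c(-1.55, 0.80, -0.20))
participants <- merge(participants, vulnerability, by = "postal_code", all.x = TRUE)
```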
The first category on the alphabetical list was chosen as the intercept, or reference level: “men” for sex, “nursing” for profession, and “civil organization” for institutional affiliation. For the analysis of social vulnerability, the intercept was defined as the lowest social vulnerability rating (-1.55). In all four cases, the logistic regression was adjusted with the Firth correction. We found no evidence for differential improvements as a consequence of sex (men β0 = 0.34, t = 10.32, p < 0.001; women β1 = 0.08, t = 2.11, p = 0.03), profession (nursing β0 = 0.47, t = 8.96, p < 0.001; medicine β1 = -0.05, t = -0.86, p = 0.38; others β1 = 0.09, t = 1.14, p = 0.25; psychiatry β1 = -0.1, t = -0.68, p = 0.49; psychology β1 = -0.09, t = -1.73, p = 0.08; social work β1 = -0.02, t = -0.33, p = 0.73; student β1 = -0.01, t = -0.10, p = 0.91; and teacher β1 = -0.47, t = -1.63, p = 0.10), institutional affiliation (civil organization β0 = 0.3, t = 1.92, p = 0.05; government β1 = 0.08, t = 0.48, p = 0.62; ministry of health β1 = 0.12, t = 0.77, p = 0.43; others β1 = 0, t = 0.0, p = 1; and universities β1 = -0.02, t = -0.1, p = 0.91), or social vulnerability rating (lowest social vulnerability β0 = 0.39, t = 12.74, p < 0.01; highest social vulnerability β1 = -0.02, t = -0.72, p = 0.47). However, we did find that as social vulnerability increased, so did the variability of the estimates, suggesting less certainty about whether participants with higher social vulnerability ratings belonged to the high-performance group.
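A minimal sketch of one such Firth-corrected model in R is given below; the use of the logistf package, the simulated data, and the variable names are assumptions for illustration only.

```r
# Firth-corrected logistic regression of cluster membership on sex (toy data).
# install.packages("logistf")  # if not already installed
library(logistf)

set.seed(2)
d <- data.frame(
  high_performance = rbinom(200, 1, 0.6),
  sex = factor(sample(c("men", "women"), 200, replace = TRUE))
)

m <- logistf(high_performance ~ sex, data = d)
summary(m)  # Firth-penalized coefficients, with "men" as the reference level
```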
Discussion
Non-specialized health care providers face the challenge of bringing effective treatment closer to people with mental, neurological, and substance use disorders (World Health Organization, 2021). To achieve this goal, efforts have been made to incorporate evidence-based procedures by training staff in best practices for mental health care (Keynejad et al., 2021). Consistent with recent reports (Amsalem & Martin, 2021), the present study shows that an online course on the Mental Health Gap Action Programme Intervention Guide increases knowledge and skills regarding the assessment, management, and follow-up of priority conditions in mental health care.
Lack of specialized training and of continuing education programs are two of the barriers to the implementation of mental health interventions most frequently reported by health professionals themselves (Selin et al., 2020), and they can lead to lack of knowledge, negative attitudes and beliefs about evidence-based practices (Selin et al., 2020), and poor perception of institutional support from authorities (Keen et al., 2021). Typical training includes a review of topics delivered through printed material and oral presentation by a trainer in face-to-face interactions (Herchenröther et al., 2021). Among other limitations, such training reaches a limited number of trainees and allows limited time. To these limitations was added the need for remote learning strategies as a result of the COVID-19 pandemic. Online resources during the pandemic have increased the ability of educational institutions to provide training in physical and mental health care.
In addition to the specific knowledge imparted, one of the main opportunities provided by an online course is expanding the reach of training. Our study included participants from a variety of professional areas, such as medicine, psychology, psychiatry, nursing, and social work, both men and women, and from all over Mexico, a large country. The results showed no evidence of a differential effect on participants’ performance related to sociodemographic variables, suggesting that the proposed training strategy is effective in providing knowledge and skills to health care providers regardless of their profession, sex, institutional affiliation, or location. This is an important finding, because previous studies with face-to-face training have reported mixed results for participants of different professional profiles, frequently favoring medical staff (Kruse et al., 2020; Selin et al., 2020). The generalizability of our training course across professions contributes to the objectives of the Mental Health Gap Action Programme, facilitating the dissemination and transfer of useful resources for mental health care in non-specialized health care settings.
Despite the documented advantages (Yuhanna et al., 2020) of online training resources, there are difficulties in promoting their adoption. Previous research (Patel et al., 2021) has explored participants’ perceptions of online trainings, and has found that health professionals complain about the lack of resources for participation. One common complaint was that they had to use their own resources, such as computers, and also money and personal time to respond to training demands (Dumford & Miller, 2018). Another frequent problem, especially in low-income countries (Rosário et al., 2021), has been a lack of digital access, which sometimes makes it difficult for people to use such resources. We thus included in our analysis an exploration of the effect of socioeconomic level, based on a social vulnerability rating, on the effectiveness of this training in acquiring skills. We found that, in general, social vulnerability was not a strong barrier to participants’ performance, although we did observe that participants with greater social vulnerability showed less certainty of improving their scores. This lack of socioeconomic effect suggests that this kind of strategy is useful for reaching professionals who otherwise might not participate in such training because of difficulties in scheduling, taking time off from employment, or paying for transportation. However, it is important to further consider and analyze how such socioeconomic factors may affect knowledge and skills acquisition (Keen et al., 2021).
Our online platform focused not only on providing information, but also on strategies to evaluate the acquisition of this information and of basic skills for assessment, management, and follow-up procedures. While the Knowledge Screening and Learning Activities were used as evaluation resources, the Programmed-Simulated Cases additionally served as a strategy requiring participants to apply their knowledge to more realistic cases. Recently, simulated cases or simulated patients have been used as a tool for the assessment of clinical competences as part of training and research in clinical settings (Kühne et al., 2021). Given that they are standardized and ecologically valid (Pheister et al., 2017), simulated cases ensure equal opportunities for students to demonstrate their skill levels in a comparable manner that is appropriate for statistical analysis (Kühne et al., 2021). They allow for practice and feedback in a controlled environment, reducing the probability that trainees will make mistakes with real patients (Yap et al., 2021).
Simulated cases have traditionally been implemented in face-to-face interactions, with actors or persons trained to simulate patients with a variety of health issues, such as depression, self-harm, or suicidal acts (Ay-Bryson et al., 2020; Osborn & Cash, 2021). This has been a very effective strategy, but in most cases, it is an expensive one. Our alternative of Programmed Simulated Cases uses an online branching scenario (Mashaal et al., 2020) to reduce human error and allow the screening of skills in a short time and in a standardized situation. Moreover, the branching scenario provides participants with immediate feedback on their performance after every choice they make.
Although we presented evidence of how the use of technology can facilitate delivering an education program for mental health staff, some limitations of our approach are worth mentioning. In particular, the Programmed-Simulated Cases covered only two of the most common MNS conditions: depression and substance use. Given the origin of our sample, participants likely had more experience in the management of substance use cases, and accordingly their performance differed between the two cases, being better on the substance use case from the outset. This points to the need for a different presentation methodology, including more MNS conditions and perhaps presenting them in a random order. Future studies should also consider implementing strategies to prevent participants from dropping out of the course. Our strategy gave participants the opportunity to skip modules and choose which ones to finish. This flexibility is important considering the evidence (Dobson et al., 2008) on how such protocols can be strategically adapted to present only major components where resources are limited. This could make it possible for participants to complete a shorter but equally effective course and to prioritize topics, adapting the content to their needs.
Conclusion
Knowledge and skills for the assessment, management, and follow-up of mental, neurological, and substance use disorders are essential for the implementation and adoption of evidence-based practices. This study demonstrates the usefulness of online training for promoting the acquisition of knowledge and skills by health care professionals, as well as the potential of evaluation strategies in which participants can learn, practice, receive feedback, and verify their competence levels, giving them the opportunity to implement effective interventions, develop best practices, and reduce the care gap for people with mental health conditions.