Six mCFIR sessions were held between August 2019 and January 2020, moderated by four facilitators. A total of 51 participants attended the mCFIR sessions; 49 responses were recorded through Qualtrics, and two surveys were missing or not recorded. mCFIR surveys originally completed on paper were manually entered into Qualtrics after the session.
Participants were selected from a diverse group of stakeholders with the help of site leaders to ensure comprehensive representation of the community (Table 2). A policy maker role was assigned to participants affiliated with health authorities or holding leadership positions related to policy and/or finance. An external stakeholder role was assigned to participants working within the health or mHealth sector on issues pertinent to those the clinic addresses, but who were not members of the HCP team or clinic. The Implementation Manager is the individual in the clinic who coordinates or manages the WelTel implementation on site. Healthcare providers (HCPs) were physicians, nurses, or medical social workers who use WelTel to communicate with and follow up with patients. Patients are the end users of WelTel, receiving the messages and calls on their devices.
More participants attended the mCFIR sessions in East Africa than in Canada. The majority of participants fell under the end-user categories, HCPs and patients. In Rwanda, external stakeholders expressed particular interest in attending the mCFIR session to better understand and assess WelTel’s intended health setting for scale-up purposes. The variability in the types of participants attending each site’s mCFIR session is reflected in the results. The TB Clinic and Haida Gwaii Hospital had no patient attendance; in the Haida Gwaii session, the patient Constructs were answered by the attending participants from the perspective of the patient. The Haida Gwaii Hospital site cited repeating the mCFIR focus group with patient participants as a next step.
Each site was asked to identify health issues and implementation goals at the beginning of the mCFIR sessions. Most of the goals identified revolved around access to HCPs outside of regular visits and treatment follow-up (Appendix A-2).
Importance & Performance Scoring
During the mCFIR sessions, each participant was asked to rate the Performance and Importance of each Construct on a scale of 0 to 10 (Figures 1-5). The heat map in Figure 1 presents the Performance and Importance scores reported per site for each of the five Domains. Scores are displayed along a turquoise spectrum (from pale to dark turquoise): pale turquoise represents the Domains that scored lowest in Importance and/or Performance, and dark turquoise represents the Domains that scored highest.
Nearly all sites rated all Domains high for Importance (Figure 1). Notably, a darker gradient is observed across Domain 4A “End Users - HCP” for all sites (mean=8.9; STD=0.5). Comparatively, a lighter gradient is observed across Domain 2 “Outer Settings” for all sites (mean=7.6; STD=0.2). Oak Tree gave the highest ratings for most Domains (mean=9; STD=0.7). For overall Performance, the darkest color gradient is observed for Domain 4A “End Users - HCP” (mean=7.6; STD=0.7), with lighter gradients across Domain 2 “Outer Settings” (mean=5.6; STD=1.8) and Domain 5 “Implementation Process” (mean=6.4; STD=1.4). The Rwanda and Wamba sites rated Performance highest for most Domains. Because no patients attended the mCFIR session at the TB Clinic, the team chose not to fill out Domain 4B “End Users - Patients”, represented in the graph as a white ‘X’ box.
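The per-Domain summary statistics reported above (e.g., mean and STD of the 0-10 ratings) can be reproduced from the raw site-level scores. Below is a minimal sketch of this calculation; the Domain labels are taken from the text, but the score values are hypothetical placeholders, not the study's data:

```python
from statistics import mean, stdev

# Hypothetical 0-10 ratings per Domain, one entry per site
# (illustrative values only, not the study's actual data).
importance = {
    "D1 Intervention Characteristics": [8, 9, 9, 8, 9, 8],
    "D2 Outer Settings": [7, 8, 8, 7, 8, 7],
    "D4A End Users - HCP": [9, 9, 8, 9, 9, 9],
}

# Mean and sample standard deviation per Domain, rounded to one
# decimal as reported in the heat map annotations.
summary = {
    domain: (round(mean(scores), 1), round(stdev(scores), 1))
    for domain, scores in importance.items()
}

for domain, (m, sd) in summary.items():
    print(f"{domain}: mean={m}, STD={sd}")
```

Whether the study used the sample or population standard deviation is not stated; `stdev` (sample) is assumed here.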
Performance and Importance scores per site
Figure 2 represents the rating of the various Constructs of each site to better understand which areas act as either facilitators or barriers when implementing WelTel’s platform in each of the intended settings.
For the Maralal site in Kenya, 6 of the 23 Constructs were rated on the higher end of the scale: Domain 1 “Comparative Advantage”, Domain 3 “Acceptance”, Domain 4A “Benefit Perception” and “Privacy”, and Domain 4B “Benefit Perception” and “Language”. At the Wamba site, 19 of the 23 Constructs were rated high in Performance; those on the lower end of the Performance scale were Domain 2 “Stakeholder Engagement”, Domain 4A “Training”, and Domain 4B “Accessibility” and “Language”. For Rwanda, 22 of 23 Constructs were rated high in Performance, with Domain 2 “Stakeholder Engagement” receiving the lowest rating.
Among the Canadian sites, Oak Tree rated 11 of 23 Constructs on the lower end of the Performance scale: Domain 1 “Affordability”, Domain 2 (all three Constructs), Domain 3 “Organizational Support”, Domain 4A “Privacy”, Domain 4B “Accessibility”, and Domain 5 (all four Constructs). The TB Clinic rated 10 of 23 Constructs on the lower end of the Performance scale: Domain 1 “Affordability”, Domain 2 “Stakeholder Engagement” and “External Support”, Domain 3 “Acceptance”, Domain 4A (all three Constructs), and Domain 5 (all Constructs except “Execution”). At the Haida Gwaii site, most Constructs were rated on the lower end of Performance, the exceptions being Domain 1 “Comparative Advantage” and Domain 4A “Privacy”.
Performance and Importance scores by participant type
Figure 3 presents the reported scores of Importance and Performance per participant type for all three East African sites. The highest Importance ratings were provided by healthcare providers (HCPs) and policy makers, followed by external stakeholders.
In terms of Domains, Domain 4A “End Users - HCP”, Domain 1 “Intervention Characteristics”, and Domain 3 “Inner Settings” received the highest scores. The highest Performance scores were provided by policy makers, followed by patients and HCPs; external stakeholders scored lowest in Performance. Similarly, WelTel implementation managers scored most Domains lowest, except Domain 1 “Intervention Characteristics”. Domain 2 “Outer Settings” was ranked lowest by WelTel managers and highest by policy makers, whereas Domain 4B “End Users - Patients” was ranked lowest by external stakeholders and highest by patients.
High and low Constructs for the three East African sites
Figure 4 presents the Constructs rated high or low in Performance and Importance per participant type for all three East African sites. The heat map presents the Constructs perceived as either facilitators or barriers from the perspective of each participant type. External stakeholders rated 9 of 23 Constructs high in Performance: Domain 1 “Adaptability”, Domain 2 “External Support”, Domain 3 “Acceptance”, Domain 4A “Benefit Perception” and “Privacy”, Domain 4B “Benefit Perception” and “Language”, and Domain 5 “Execution”. HCPs rated 6 of 23 Constructs low in Performance: Domain 1 “Affordability”, Domain 2 “Stakeholder Engagement” and “Scale-up Support”, Domain 3 “Organizational Support”, Domain 4B “Accessibility”, and Domain 5 “Evaluation”. Patients rated 7 of 23 Constructs low in Performance: Domain 1 “Adaptability”, Domain 2 “Stakeholder Engagement”, Domain 3 “Organizational Support”, Domain 4A “Benefit Perception”, Domain 4B “Language”, and Domain 5 “Intervention Planning” and “Evaluation”. Policy makers rated 4 of 23 Constructs low in Performance: Domain 2 “External Support”, Domain 4A “Training”, Domain 4B “Privacy”, and Domain 5 “Evaluation”. WelTel implementation managers rated 7 of 23 Constructs low in Performance: Domain 2 “Stakeholder Engagement” and “Scale-up Support”, Domain 4A “Training”, Domain 4B “Accessibility”, and Domain 5 “Intervention Planning” and “Execution”.
Performance of Domain 1 “Affordability” was rated highest by WelTel implementation managers. Domain 4B “Training” and “Privacy” were rated highest by patients and lowest by external stakeholders.
Overall Performance rated against implementation goals
Figure 5 presents the reported Performance scores for “Goal Attainment” and “Impact Assessment” per site. At the end of each mCFIR session, the team revisited the implementation goals identified at the beginning of the session and rated their overall Performance in achieving the desired goals and outcomes. Due to time constraints, only 5 of the 6 sites completed these two Constructs, with 28 entries recorded for the goal attainment scale and 26 for the impact assessment; only half of the participants were able to complete the survey through these last two Constructs. Rwanda gave the highest Performance scores for goal attainment and impact assessment, and the TB Clinic gave the lowest.
Qualitative Analysis
Participant inputs during the mCFIR sessions, summarized in the tree map in Figure 6, were first divided into two major categories: 1) Strengths & Benefits, and 2) Barriers & Suggestions. Sub-themes were then developed for each category. Participant responses included a combination of evaluation of the intervention (WelTel) and evaluation of the implementation process itself. The larger a sub-theme’s area on the tree map, the greater its proportion of responses. Tables 3 and 4 highlight some of the statements made by participants during the mCFIR sessions.
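The tree map's area-proportionality reduces to a simple frequency calculation: each sub-theme's rectangle is sized by its share of coded responses. A minimal sketch follows, with entirely hypothetical tallies (the sub-theme counts are assumptions for illustration, not the study's figures):

```python
from collections import Counter

# Hypothetical counts of coded participant responses per sub-theme
# (illustrative only; the study's actual tallies are not reported here).
coded_responses = Counter({
    "Timely Diagnosis & Response": 14,
    "Cost-Effectiveness": 9,
    "User-Friendliness": 7,
    "Phone Accessibility": 6,
    "Literacy": 4,
})

total = sum(coded_responses.values())

# Each sub-theme's share of the total determines its relative area
# on the tree map: a larger share means a larger rectangle.
shares = {theme: count / total for theme, count in coded_responses.items()}

for theme, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{theme}: {share:.0%} of the tree map area")
```

A plotting library (e.g., one with a treemap layout) would then map these shares to rectangle areas; the calculation above is the part the figure caption describes.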
Strengths & Benefits
The main sub-themes discussed by participants under the first major category, Strengths & Benefits of WelTel's implementation, were the following:
- Timely Diagnosis & Response – Participants shared the convenience of communicating and addressing health issues in a timely manner from home. A mother from Maralal, Kenya said that the platform “is real-time”, and that she was able to communicate with her HCP whenever she faced a health issue. Policy makers mentioned how WelTel's platform has assisted them with “timely identification of opportunistic infections”.
- Cost-Effectiveness – Policy makers highlighted the advantage of not requiring additional human resources to implement the WelTel digital health platform. Communicating with patients through the platform has been incorporated into their care process. Patients did not incur any costs when texting their HCPs, which has been considered a motivation for enrollment.
- User-Friendliness – The implementation managers and other end users, including clinicians, reported that the platform was easy to use. Patients did not require training, as they only needed to reply via SMS to the check-in messages.
- Security & Safety – Patients are the only ones who understand the intent of the “How are You?” message they receive. They highlighted that their privacy is respected, since the wording of the message does not reflect their health status.
- Appointment Attendance – External stakeholders highlighted the use of WelTel's texting service to remind patients of their appointments as a benefit, as patients tend to lose the health cards that hold their appointment dates.
Barriers & Suggestions
As for the second major category, Barriers & Suggestions, several sub-themes emerged. Issues regarding phone accessibility, literacy, partnerships with stakeholders, staff training, and scale-up of the program were discussed as major barriers to the implementation of WelTel:
- Phone Accessibility – Some patients share phones with a family member, which was highlighted as a potential barrier since these patients might not be reachable at all times.
- Literacy – Literacy is a challenge among certain patient groups. It creates a barrier by hindering patients' ability to text back to the platform and share their issues and concerns.
- HCP Training – HCPs requested further training so they could independently train new staff members on the digital health intervention being implemented.
- Scale-up – Participants expressed the desire to scale up the project to other health departments and regions.
mCFIR Tool Experience & Effectiveness
This section reports on the use of the mCFIR as an implementation science tool in a focus-group-like setting. A total of four trained facilitators moderated the focus group discussions. Semi-structured informal interviews were held over videoconference to understand the experience of applying the mCFIR as a tool to facilitate discussion on implementation assessment. This section describes the process followed by the facilitators and the research team to collect, analyze, and share data with the implementing clinics.
Pre-mCFIR session
A set of slides was prepared to provide background on the digital health platform being discussed, the field of implementation science, and the mCFIR tool. A note taker was assigned to assist the facilitators by taking observational notes of the session and discussion. Multiple rounds of mock mCFIR sessions were held with the UBC mHealth Research Group to pilot the tool and the Domain questions and to estimate session length. The team concluded that the mCFIR tool would require approximately 2 hours to complete with 8 stakeholders. The mCFIR surveys were built in the Qualtrics survey software. Sites were identified based on whether they were currently implementing the digital health tool of interest. Participant recruitment was conducted through clinic staff, with the medical director of each clinic identifying patients and HCPs who might be interested and available to participate. External stakeholders and policy makers were identified either by clinic staff or by the research team. At the 3 Canadian sites, only participating patients were given honoraria for their attendance; all participant types in East Africa were given honoraria to compensate for expenses incurred or time spent attending. The sessions were audio recorded if all participants provided consent. Tablets were made available by the research team for the sessions conducted in Canada. The session facilitators shared the consent form and survey links with the participants prior to the session for convenience.
mCFIR session
At the beginning of each session, the facilitator collected consent from participants, including consent to record the discussion. Participants were then asked to introduce themselves, their profession, and their experience with the digital health platform being discussed or any other digital health platforms. The facilitator went through the set of slides to provide background on the purpose of the session. Participants were then asked to collectively identify implementation goals the team would like to work on over the upcoming 4 to 6 months. After the goals were identified, the facilitator guided the discussion using the mCFIR tool, presenting one Construct at a time in the form of a question. During the group discussion, participants were encouraged to put the survey aside and share their thoughts with the group. After discussing a given Construct, the facilitators asked the participants to anonymously score its Importance and Performance on the Qualtrics survey. This process was repeated for each Construct, in order of Domain. At the end of the session, the participants were asked to rate goal attainment and impact assessment for their goals and outcomes. The sessions’ duration varied from 2 to 3.5 hours. The note taker’s role during the session was to support the facilitator, keep track of time, and take notes of the discussion as well as any observations.
Post mCFIR Session
After each session, the facilitator and note taker met to reflect on the session, share notes, and develop a summary report of the discussion. The report was intended to be shared with the research team, clinic directors, and participants, and includes a snapshot of the session and the major points raised by participants. Implementation goals identified by participants are highlighted in the report to guide implementation activities until the next mCFIR session. Teams are encouraged to hold an mCFIR session every six months to one year to reassess the goals identified and identify new ones.