With this study, we set out to explore the current and potential use of EMR data for PHC performance measurement in the Canadian context. We aimed to capture the state of the art of EMR data use and to gain practical insights for furthering its potential. To do so, we consulted both the literature and the firsthand insights of system leaders, clinicians and researchers. Our main findings are as follows.
First, while jurisdictions remain at varied stages [15], recognition of the importance and potential secondary uses of EMR data is common. Nonetheless, nearly 15 years after the initial launch of a pan-Canadian PHC indicator set and almost a decade after its update to include EMRs as a possible source, EMR data is used in only a handful of performance measurement initiatives. Instead, a number of other data sources for PHC performance measurement continue to be relied on, predominantly physician billing and other administrative sources such as census, laboratory and registry data, as well as survey data. This finding is in line with recent international studies, signalling that electronic health systems are yet to be leveraged to their full potential [13, 58]. These sources are in use for macro-level measurement across jurisdictions, be it in ad hoc reports, programme-specific monitoring or annual health system performance measurement, and at the micro-level in panel reports, as in Alberta, Ontario and Saskatchewan. This means that EMR data as a source for performance measurement accounts for only a fraction of total activity.
Where EMR data is in use, it is predominantly geared towards performance measurement at the micro-level, for use by individual clinicians and their teams. The EMR-based initiatives also equip affiliated physicians, their practices and networks with comparable data to generate research. The use of EMR data by executives to manage and improve organizations is less established, though its potential is demonstrated by BIRT and D2D. EMR data is not yet leveraged for system-level performance improvement, despite its advantages, especially when linked with other data sets, for assessing performance, identifying problems such as unwarranted variation, and enabling smarter resource allocation [13, 59]. Beyond diversifying the performance measurement uses of EMR data, we note that patients and the public are not among EMR data users at present, as reporting across initiatives is not publicly available, nor is consistent patient access to their EMRs common practice.
The six different initiatives making use of EMR data for measurement and improvement demonstrate that there is no single approach to doing so. The initiatives vary in their contexts, including the target PHC practice model and affiliated EMR vendors, but also in their approaches to extracting, standardizing and returning analyzed information to their users. In terms of the EMR-sourced indicators used by each initiative, the range extends beyond the original 2012 pan-Canadian indicator set [38, 39], in particular with regard to chronic disease management and prescribing. Ways to update and broaden a pan-Canadian set of indicators that can potentially be sourced from EMR data should be explored, together with continued investment in minimum data standards.
New initiatives in the past five years, like HDC Discover and Insights4Care, as well as greater EMR coverage across jurisdictions, suggest the possibility of a quickening pace of change. The pan-Canadian nature of EMR vendors may facilitate the adoption of existing tools in other jurisdictions. Moreover, the COVID-19 pandemic has underscored the importance of timely, aggregated data for the system to monitor cases [41], as well as the potential use of EMR data in PHC to observe sudden changes in visits and to proactively reach patients [60].
Dramatically accelerating the use of EMR data will, however, require more assertive action. The lessons for enabling EMR data use described by the initiatives attest to the valuable experience and expertise that lie within the system and can be leveraged (Table 2), such as advancing privacy and data sharing agreements.
The recurrent themes call for: defining a clear vision together with key stakeholders and focusing on the standardization of EMR data at the pan-Canadian level, as has been underscored elsewhere [14, 35, 61–63]; advancing beyond EMR adoption where still needed and investing in workforce competencies at all levels for the professionalization of performance measurement; and considering an update of the core set of pan-Canadian PHC indicators to fully account for the potential of EMR data as a source. The implementation of EMR-sourced performance measurement and quality improvement should also leverage the insights of relevant international examples such as the United Kingdom [64] and the Netherlands [65]. In particular, further exchange of good practices appears needed around the handling of privacy and data sharing agreements, and around data capture in EMRs of virtual care services, mental health and addiction encounters, and socioeconomic status.
Strengths and limitations
To our knowledge, this is the first study to systematically explore and describe examples of EMR data use for performance measurement in the Canadian context from a health care performance intelligence perspective. The study was enriched by the wide-reaching engagement of experts across Canadian jurisdictions and of different profiles (stakeholders and clinicians/researchers). Additionally, given the acceleration of electronic health information system improvements brought on by the COVID-19 pandemic, our findings are of particular relevance to ensuring that sustained, system-wide improvements are pursued.
Findings of this study should be understood in the context of three primary limitations. First, the target diversity in informants' perspectives was not met in all jurisdictions. While significant efforts were made to ensure consistent representation, the availability of informants, range of stakeholders and presence of research networks vary considerably by jurisdiction. The impact of this limitation was mitigated through triangulation with existing sources and expert advice. Second, the process of classifying indicators involved a degree of subjectivity, as our definition was broad; for this reason, we limited comparisons to indicator titles. Third, the analysis of key considerations was conducted through independent thematic coding. To limit the risk of overlooked considerations, while also being mindful of the burden the COVID-19 pandemic has placed on informants, a subset of the original informants reviewed these results.