a. Methods Overview
Following Institutional Review Board (IRB #1710876219) and Regenstrief Institute (RI) Data Management Committee approvals, we identified a long-term opioid therapy (LTOT) cohort from the clinical records of Indiana University Health (IUH), a large healthcare network in Indiana with 3,541 staffed beds and 2,563,086 outpatient visits across 18 facilities (25). Opioid use problems (OUP) were identified in two ways: using diagnostic codes from patients' health records and by applying a text-mining algorithm to the clinical notes. We then compared the two approaches on the frequency and rate of OUP per 1,000 LTOT patients, as well as on health care utilization (outpatient visits, emergency department visits, hospitalizations, and cumulative hospitalization days).
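For reference, the short sketch below shows how the comparison metric (rate per 1,000 LTOT patients) can be computed; the function and variable names are illustrative and not part of the study's code.

```python
# Illustrative computation of the comparison metric: OUP-positive patients
# per 1,000 patients in the LTOT cohort. Names are hypothetical.
def oup_rate_per_1000(n_oup_positive: int, n_ltot_patients: int) -> float:
    """Rate of OUP per 1,000 LTOT patients."""
    return 1000 * n_oup_positive / n_ltot_patients
```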
b. Sample Definition
Our sample consisted of adult (age ≥ 18 years) patients who visited an IUH facility and received LTOT between 1 January 2013 and 31 December 2014 (24 months); we defined LTOT as being prescribed 70 days of opioid supply within any 90-day period. We excluded patients with active cancer (ICD-9 codes 140.x-172.x, 174.x-209.xx, 235.x-239.xx, 338.3) to focus on LTOT for non-cancer chronic pain. Patients with schizophrenia (ICD-9 code 295.9) were also excluded because of the documented high prevalence of opioid dependence in this population (26).
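Purely for illustration, the sketch below shows one way the 70-days-of-supply-in-any-90-day-window criterion could be operationalized over dispensing records. The table layout and column names (patient_id, fill_date, days_supply) are assumptions rather than the study's actual data model, and the rolling sum counts supply dispensed within the window rather than days actually covered.

```python
# Minimal sketch of the LTOT criterion: 70 days of opioid supply dispensed
# within any 90-day window. Column names are illustrative assumptions;
# fill_date is assumed to be a datetime column.
import pandas as pd

def flag_ltot(fills: pd.DataFrame, window_days: int = 90, min_supply: int = 70) -> pd.Series:
    """Return a boolean Series indexed by patient_id indicating whether each
    patient meets the LTOT threshold in at least one rolling window."""
    fills = fills.sort_values(["patient_id", "fill_date"])

    def meets_ltot(group: pd.DataFrame) -> bool:
        # Sum days of supply over a trailing 90-day window anchored on fill dates.
        supply = group.set_index("fill_date")["days_supply"]
        rolling_supply = supply.rolling(f"{window_days}D").sum()
        return bool((rolling_supply >= min_supply).any())

    return fills.groupby("patient_id").apply(meets_ltot)
```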
c. Variables of Interest
We compared patient characteristics of interest by OUP phenotype, including demographics (age, gender, race, and ethnicity), alcohol abuse, non-opioid substance abuse, tobacco use, mental health disorders, and hepatitis C. For this study, mental health disorders were defined as depressive disorder (ICD-9 codes 296.2x, 296.3x, 300.4, 311), suicide attempt or other self-injury (ICD-9 codes E95x.x, E98x.x), or anxiety disorder (ICD-9 codes 300.0x, 300.21, 300.22, 300.23, 300.3, 308.3, 309.81).
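A minimal sketch of how these code groupings could be turned into patient-level flags is shown below; the prefix-matching logic and data structures are assumptions made for illustration only.

```python
# Illustrative mapping from the mental-health conditions above to their ICD-9
# code patterns; prefix matching on the recorded code is an assumption.
MENTAL_HEALTH_CODES = {
    "depressive_disorder": ("296.2", "296.3", "300.4", "311"),
    "suicide_or_self_injury": ("E95", "E98"),
    "anxiety_disorder": ("300.0", "300.21", "300.22", "300.23", "300.3", "308.3", "309.81"),
}

def flag_conditions(diagnosis_codes: list[str]) -> dict[str, bool]:
    """Return one flag per condition given a patient's list of ICD-9 codes."""
    return {
        condition: any(code.startswith(prefixes) for code in diagnosis_codes)
        for condition, prefixes in MENTAL_HEALTH_CODES.items()
    }
```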
d. Report Type Selection
Because of the large number of clinical report types generated and included in the EHR, we included only the report types deemed most relevant by clinical and data experts for OUP identification. Nine report types were selected, and the query returned 142,971 reports: Emergency Department Doctor Progress Notes (48,898), Emergency Department Discharge Notes (28,637), Primary Care Doctor Outpatient Progress Notes (26,669), Visit Notes (21,868), Discharge Summaries (11,731), History and Physicals (2,759), Admission History & Physicals (1,390), Preadmission History/Physicals (576), and Primary Care Doctor Outpatient History and Physical/Initial Consult (443). The top 5 most common report types accounted for over 96% (137,802) of the total reports, while the 4 least common accounted for only 4%. Due to labor constraints, the 4 least common report types (History and Physicals, Admission History & Physicals, Preadmission History/Physicals, and Primary Care Doctor Outpatient History and Physical/Initial Consult) were excluded.
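Purely as an illustration of this frequency-based triage, the sketch below tabulates report counts by type and keeps the most common types; the DataFrame layout and the report_type column name are assumptions.

```python
# Sketch of the report-type triage: count reports by type and keep the
# n_top most frequent. The 'report_type' column name is an assumption.
import pandas as pd

def select_report_types(reports: pd.DataFrame, n_top: int = 5) -> list[str]:
    """Return the n_top most frequent report types, printing their coverage."""
    counts = reports["report_type"].value_counts()
    top = counts.head(n_top)
    coverage = top.sum() / counts.sum()
    print(f"Top {n_top} report types cover {coverage:.1%} of {counts.sum():,} reports")
    return top.index.tolist()
```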
e. Identify Patients with OUP
We used two approaches to identify OUP: 1) a text-mining approach and 2) an ICD-9 code approach.
I. Identify OUP using a Text Mining Process
The process of identifying OUP using text mining involved two main steps: 1) algorithm development to flag potentially positive reports and 2) validation through semi-assisted manual review.
1- Algorithm Development to Flag Potential Positive Cases Using nDepth™
We applied the nDepth™ text-mining package to parse medical notes and detect OUP. nDepth™ is an NLP tool designed by the Regenstrief Institute in Indianapolis to extract data from the Indiana Network for Patient Care (INPC), a healthcare database managed by the Regenstrief Institute on behalf of the Indiana Health Information Exchange (IHIE) (27, 28). Because IUH is part of the INPC, nDepth™ could be used to query patients' clinical notes for possible opioid use problems. To develop the algorithm, two keyword lists adopted from the literature were entered into nDepth™ (24). The first list comprised opioid terms (e.g., Vicodin, opiate), and the second list comprised problem terms (e.g., addiction, abuse) (Appendix A.1). nDepth™ creates state machines, algorithms that parse the selected report types for specified criteria, to check for all possible combinations of terms from the two lists occurring within a 5-word distance of each other. Flagged results are automatically checked to determine whether each statement is negated, hypothetical, historical, or experienced, a process adopted from the ConText algorithm developed by Harkema et al. (2009) (29). If the system determined that a statement was not negated, hypothetical, or historical, the flag was deemed experienced and considered positive.
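To make the pairing rule concrete, the sketch below flags text when an opioid term and a problem term co-occur within a 5-word distance. The term sets are abbreviated examples (the full lists are in Appendix A.1), and the ConText-style negation/hypothetical/historical filtering is not reproduced here.

```python
# Simplified illustration of the keyword-pairing rule: flag a note when an
# opioid term and a problem term occur within 5 words of each other.
# Term sets are abbreviated examples; ConText-style filtering is not shown.
import re

OPIOID_TERMS = {"vicodin", "opiate"}     # abbreviated example list (Appendix A.1)
PROBLEM_TERMS = {"addiction", "abuse"}   # abbreviated example list (Appendix A.1)
MAX_DISTANCE = 5                         # maximum word distance between the two terms

def flag_text(text: str) -> bool:
    """Return True if any opioid term and problem term co-occur within MAX_DISTANCE words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    opioid_positions = [i for i, tok in enumerate(tokens) if tok in OPIOID_TERMS]
    problem_positions = [i for i, tok in enumerate(tokens) if tok in PROBLEM_TERMS]
    return any(abs(i - j) <= MAX_DISTANCE
               for i in opioid_positions for j in problem_positions)

# Example: flag_text("Patient reports vicodin use and ongoing opioid abuse") -> True
```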
2- Semi-Assisted Manual Review Validation Process
Flagged reports were reviewed by two trained reviewers using a semi-assisted manual review process. In cases of disagreement, an expert physician acted as a third reviewer to resolve the dispute. To avoid overlooking any signs of opioid use problems, nDepth™ was programmed to highlight 27 suggestive phrases in the flagged reports; these phrases were collected from the literature and modified to reflect common clinical dialog in Indiana (Appendix A.2). When a patient had more than one flagged report, the system randomly selected one of the flagged reports to be reviewed for that patient. The criteria used to determine OUP were adopted from a study by Carrell et al. (2015) and are listed in Table 1. Of note, because both medical and recreational marijuana use is currently illegal in Indiana, marijuana use was treated as concurrent use of an illicit drug during manual review. Other modifications were also adopted based on initial reviews of subsets of patients' clinical reports.
Table 1. Criteria for identifying opioid use problems in patients' clinical notes

No. | Criteria for opioid use problems
1 | Substance abuse treatment, including referral or recommendation
2 | Methadone or suboxone treatment for addiction
3 | Obtained opioids from nonmedical sources
4 | Loss of control of opioids, craving
5 | Family member reported patient's opioid addiction to clinician
6 | Significant treatment contract violation
7 | Concurrent alcohol abuse/dependence (not remitted)
8 | Concurrent use of illicit drugs
9 | Current or recent opioid overdose
10 | Pattern of early refills (not an isolated event)
11 | Manipulation of physician to obtain opioids
12 | Obtained opioids from multiple physicians surreptitiously
13 | Opioid taper/wean due to problems or lack of efficacy (not due to expected pain improvement)
14 | Unsuccessful taper attempt
15 | Rebound headache related to chronic opioid use
16 | Concurrent use of unauthorized narcotics (polypharmacy)
17 | Physician states opioid abuse/overuse/addiction or listed ICD codes for opioid abuse/dependence
II. Identify OUP Using ICD-9 Codes
Two definitions from the literature utilizing ICD-9 codes were combined to create a case definition of OUP: opioid abuse and dependence (304.00, 304.01, 304.02, 304.70, 304.71, 305.50, 305.51, 305.52) and opioid poisoning (965.0, 965.00, 965.01, 965.02, 965.09, E850.0, E850.1, E850.2) (Appendix A.3). The two definitions were combined to capture a wider spectrum of opioid use problems in the study population and thereby minimize the chance of systematically introducing type II errors (false negatives).
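As an illustration of this combined case definition, the sketch below flags a patient as OUP-positive when any recorded diagnosis matches either code list; the data structures are assumptions.

```python
# Sketch of the ICD-9 case definition: a patient is OUP-positive if any
# recorded diagnosis matches a code from either list. Structures are illustrative.
OPIOID_ABUSE_DEPENDENCE = {
    "304.00", "304.01", "304.02", "304.70", "304.71",
    "305.50", "305.51", "305.52",
}
OPIOID_POISONING = {
    "965.0", "965.00", "965.01", "965.02", "965.09",
    "E850.0", "E850.1", "E850.2",
}
OUP_CODES = OPIOID_ABUSE_DEPENDENCE | OPIOID_POISONING

def is_oup_positive(diagnosis_codes: set[str]) -> bool:
    """True if the patient has at least one qualifying ICD-9 code."""
    return bool(diagnosis_codes & OUP_CODES)
```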
f. Analysis
Frequency tables were used to describe population characteristics, and the characteristics of OUP-positive patients identified by the text-mining and ICD approaches were compared. Chi-squared and Fisher's exact tests were used for categorical variables, and independent t-tests were used to compare means of continuous variables.
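A minimal sketch of these comparisons using SciPy is shown below; the input layout (one row per patient with an oup_method group label) and the rule for switching to Fisher's exact test are assumptions made for illustration.

```python
# Sketch of the group comparisons: chi-squared / Fisher's exact tests for
# categorical variables, independent t-tests for continuous variables.
# The 'oup_method' column (two groups) and input layout are assumptions.
import pandas as pd
from scipy import stats

def compare_groups(df: pd.DataFrame, categorical: list[str], continuous: list[str]) -> dict[str, float]:
    """Return a p-value per variable comparing the two OUP cohorts."""
    results = {}
    for col in categorical:
        table = pd.crosstab(df["oup_method"], df[col])
        # Fall back to Fisher's exact test for 2x2 tables with small cell counts.
        if table.shape == (2, 2) and (table < 5).any().any():
            _, p = stats.fisher_exact(table)
        else:
            _, p, _, _ = stats.chi2_contingency(table)
        results[col] = p
    for col in continuous:
        groups = [g[col].dropna() for _, g in df.groupby("oup_method")]
        _, p = stats.ttest_ind(*groups)
        results[col] = p
    return results
```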