The “Final Rule” dictates that livers be allocated in order of decreasing medical urgency (i.e., sickest first) while avoiding futile liver transplantation (LT) (2, 3). The current US allocation system stratifies pre-transplant illness severity using the MELD score, which predicts three-month waitlist mortality with a C-statistic of 0.78–0.87 (4, 5). MELD, however, is a poor predictor of post-transplant mortality (C-statistic = 0.44–0.53) (6, 30). Other previously described pre-LT clinical scoring models either do not correlate with outcomes or require donor and intraoperative information that is not available prior to organ allocation. Multiple such models have been described, all with C-statistics ≤ 0.7 for prediction of post-LT outcomes. Rising organ demand, together with increasing recipient severity of illness, necessitates a reliable method to risk-stratify critically ill patients by their pre-LT severity of illness and thereby avoid futile liver transplantation.
We have previously described the Liver Immune Frailty Index (LIFI), a biomarker panel based on HCV IgG status and plasma levels of MMP-3 and Fractalkine, which quantifies pre-LT immune dysfunction (termed immune frailty) and predicts the risk of post-LT futility (23). Whether this model outperforms conventional clinical scoring models was previously unknown. Here, we find that LIFI significantly correlates with liver transplant recipient mortality at 3 and 6 months, as well as at 1, 3, and 5 years post-transplant. In addition, LIFI shows superior discrimination (highest C-statistic) of 1-year post-LT mortality compared with all other risk scores, regardless of biologic MELD.
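For readers less familiar with discrimination metrics, the C-statistic comparison above can be illustrated with a brief sketch. This is a minimal illustration on synthetic data, not the study's analysis code; the score constructions below are hypothetical, and for a binary endpoint such as 1-year mortality the C-statistic is equivalent to the area under the ROC curve.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic cohort: 1 = death within 1 year post-LT, 0 = survival.
# These values are illustrative only, not study data.
died_1yr = rng.binomial(1, 0.15, size=200)

# Hypothetical continuous risk scores (higher = higher predicted risk).
# A more discriminating score separates the outcome groups more cleanly.
score_a = died_1yr * 1.5 + rng.normal(0, 1, size=200)  # strong discriminator
score_b = died_1yr * 0.2 + rng.normal(0, 1, size=200)  # weak discriminator

# For a binary endpoint, the C-statistic equals the ROC AUC: the
# probability that a randomly chosen patient who died was assigned a
# higher score than a randomly chosen patient who survived.
for name, score in [("score A", score_a), ("score B", score_b)]:
    print(f"{name} C-statistic: {roc_auc_score(died_1yr, score):.2f}")
```

A C-statistic of 0.5 reflects chance-level discrimination, which contextualizes the 0.44–0.53 range reported for MELD against post-transplant mortality.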
MELD and other conventional clinical scoring tools rely on laboratory values as surrogates for illness severity (8–15, 17, 18); however, they disregard the immunological status of patients at the time of LT. Infection is the leading cause of mortality within the first year following liver transplant, and this ongoing infection risk likely stems from persistent immune dysfunction after transplant. Pre-transplant immune dysfunction in cirrhosis arises from the physiologic and metabolic alterations that accompany progressive hepatic decompensation. The result is cirrhosis-associated immune dysfunction (CAID), characterized by deficiencies in both innate and adaptive immunity and driven by chronic immune system stimulation from liver injury, pathogenic infections, and gut-derived antigens (1). Chronic immune stimulation and exhaustion of metabolic substrates ultimately induce an inappropriate compensatory anti-inflammatory response. In the setting of severe decompensation, cirrhotic patients exhibit an impaired immune response to bacterial challenge, which can result in severe systemic infection, multi-organ failure, and short-term mortality (31, 32). In its most severe form, termed immune frailty, this pre-transplant immune dysfunction likely persists post-transplant and is exacerbated by immunosuppressive medications.
Prior clinical scoring systems have failed to capture the risk imparted by this severe state of ongoing immune dysfunction. This is a critical flaw that limits their clinical utility, as infection is the leading cause of early post-transplant mortality. Of previously described models, three have shown the best sensitivity and specificity for predicting post-LT outcomes: the SOFT, BAR, and UCLA-FRS scores (13, 14, 17). The SOFT score (14, 18) and BAR score (6) were derived from patient-level data in the UNOS database, which, despite its statistical power, lacks the granularity to capture variables of immune dysfunction and infection risk. In addition, both scores require knowledge of donor characteristics and fail to consider recipient comorbidities, which are critical risk factors considered before waitlist placement (33). The UCLA-FRS was developed to address these gaps. This index was created through retrospective assessment of single-center data, albeit at the center with the largest longitudinal liver transplant experience in the US. The single-center design improved granularity, allowing inclusion of comorbidity history through the adjusted Charlson comorbidity index (CCI) and cardiac risk. It is also the only score to include a marker of pre-transplant immune dysfunction, as pre-transplant sepsis within 30 days of transplant likely reflects immune dysregulation (17). The original derivation of the UCLA-FRS, however, included only patients with MELD ≥ 40, and follow-up validation studies have demonstrated subpar performance in patients with lower pre-transplant severity of illness (MELD threshold of 30, C-statistic of 0.65) (6). Thus, an objective and replicable system that considers immune dysfunction is necessary to improve pre-transplant risk stratification of post-LT mortality.
Our recently described LIFI score stratifies patients into high-, moderate-, and low-risk groups for post-LT mortality. High-LIFI patients had a 1-year post-LT mortality of 58.3%, compared with 1.4% in low-LIFI recipients (23). With a C-statistic of 0.84 in our cohort, LIFI is emerging as a potentially superior tool to support and guide clinical decision-making and to avoid futile outcomes in high-risk LT recipients. Of note, LIFI offers superior discrimination of mortality risk regardless of pre-LT MELD, whereas other clinical models have failed to accurately forecast outcomes in the low-MELD cohort. Patients transplanted at lower MELD scores commonly receive waitlist prioritization via MELD exception points, most often granted for hepatocellular carcinoma. This suggests that LIFI may discriminate not only the risk of mortality from sepsis-related immune dysfunction but also the risk of mortality from recurrent malignancy. Given that immune dysregulation allows tumor cells to escape immune surveillance, persistent immune dysfunction following liver transplant may increase a recipient’s risk of developing de novo or recurrent disease. Additional studies are necessary to delineate this relationship further.
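As an illustration of this tier-based stratification, the sketch below divides a synthetic cohort into low-, moderate-, and high-risk groups on a continuous score and tabulates observed 1-year mortality per tier. The tertile cut-points, outcome data, and logistic link are hypothetical stand-ins; the published LIFI thresholds and coefficients are not reproduced here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic data: a continuous risk score and a 1-year mortality
# outcome whose probability rises with the score (illustrative only).
score = rng.normal(0, 1, size=300)
p_death = 1 / (1 + np.exp(-(score * 2 - 2)))  # hypothetical logistic link
died_1yr = rng.binomial(1, p_death)

df = pd.DataFrame({"score": score, "died_1yr": died_1yr})

# Tertile cut-points stand in for the published LIFI thresholds.
df["risk_tier"] = pd.qcut(df["score"], q=3,
                          labels=["low", "moderate", "high"])

# Observed 1-year mortality (%) per tier, analogous to the 1.4% vs
# 58.3% contrast reported for low- vs high-LIFI recipients.
print(df.groupby("risk_tier", observed=True)["died_1yr"]
        .mean().mul(100).round(1))
```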
There are several limitations to our findings. First, the LIFI was internally validated using granular patient-level data and immunologic assessments from patients at only two transplant centers. Moreover, the LIFI was validated via bootstrapping techniques, which do not account for changes in the patient population over time (34, 35). A large multi-center validation cohort is necessary to verify the model. In addition, given the limited cohort size and the small number of events at 1 year, we were unable to perform multivariable prediction of 1-year post-LT mortality using components of the different pre-transplant scoring models. LIFI also includes HCV IgG status in its calculation; HCV likely figures more heavily in the risk score because the discovery cohort spanned the introduction of direct-acting antiviral therapy, when transplantation for HCV was more common. As patient demographics change, we may observe an era effect in the significantly associated immune biomarkers, necessitating adjustment of the LIFI score. Finally, there is potential for selection bias, given that certain subgroups were excluded during creation of the LIFI, including re-transplant recipients, patients of advanced age, and patients with fulminant hepatic failure. Additional analysis is necessary to evaluate LIFI in these cohorts.
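The bootstrap internal validation noted above can be sketched in a few lines. The sketch follows the standard optimism-correction procedure (refit the model on each resample and compare resample performance against original-cohort performance); the predictors, coefficients, and outcome data are synthetic stand-ins and do not reflect the actual LIFI inputs or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 250

# Synthetic stand-ins for LIFI-style inputs (a binary serostatus plus
# two continuous biomarkers); none reflect the published model.
X = np.column_stack([
    rng.binomial(1, 0.4, n),   # binary serostatus
    rng.lognormal(0, 1, n),    # biomarker 1
    rng.lognormal(0, 1, n),    # biomarker 2
])
logit = 0.8 * X[:, 0] + 0.3 * np.log(X[:, 1]) - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1-year mortality

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Harrell-style optimism correction: refit on each bootstrap resample,
# then compare resample AUC against AUC on the original cohort.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)  # resample with replacement
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

print(f"apparent C-statistic:  {apparent:.3f}")
print(f"optimism-corrected:    {apparent - np.mean(optimism):.3f}")
```

As the limitation notes, this procedure resamples only the cohort at hand, so it cannot capture shifts in the underlying patient population; external multi-center validation remains necessary.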
In conclusion, LIFI predicts patient survival and is the only score to significantly correlate with mortality in both high- and low-MELD recipients. Pre-LT assessment of immune dysregulation may be critical in predicting mortality after LT and may optimize selection of candidates with the lowest risk of futile outcomes.