Abstract
Background and objectives Identification of patients at risk for AKI on the general wards before increases in serum creatinine would enable preemptive evaluation and intervention to minimize risk and AKI severity. We developed an AKI risk prediction algorithm using electronic health record data on ward patients (Electronic Signal to Prevent AKI).
Design, setting, participants, & measurements All hospitalized ward patients from November of 2008 to January of 2013 who had serum creatinine measured in five hospitals were included. Patients with an initial ward serum creatinine >3.0 mg/dl or who developed AKI before ward admission were excluded. Using a discrete time survival model, demographics, vital signs, and routine laboratory data were used to predict the development of serum creatinine–based Kidney Disease Improving Global Outcomes AKI. The final model, which contained all variables, was derived in 60% of the cohort and prospectively validated in the remaining 40%. Areas under the receiver operating characteristic curves were calculated for the prediction of AKI within 24 hours for each unique observation for all patients across their inpatient admission. We performed time to AKI analyses for specific predicted probability cutoffs from the developed score.
Results Among 202,961 patients, 17,541 (8.6%) developed AKI, with 1242 (0.6%) progressing to stage 3. The areas under the receiver operating characteristic curve of the final model in the validation cohort were 0.74 (95% confidence interval, 0.74 to 0.74) for stage 1 and 0.83 (95% confidence interval, 0.83 to 0.84) for stage 3. Patients who reached a cutoff of ≥0.010 did so a median of 42 (interquartile range, 14–107) hours before developing stage 1 AKI. This same cutoff provided sensitivity and specificity of 82% and 65%, respectively, for stage 3 and was reached a median of 35 (interquartile range, 14–97) hours before AKI.
Conclusions Readily available electronic health record data can be used to improve AKI risk stratification with good to excellent accuracy. Real time use of Electronic Signal to Prevent AKI would allow early interventions before changes in serum creatinine and may reduce costs and improve outcomes.
- acute renal failure
- clinical nephrology
- hospitalization
- electronic health records
- acute kidney injury
- biomarker
- risk assessment
- vital signs
Introduction
AKI is a common clinical syndrome in hospitalized patients and is associated with increased costs and short– and long–term morbidity and mortality (1–6). With the incidence of AKI on the rise (7), the last decade has seen an increased focus on attempting to predict the development of AKI earlier than is possible with the current gold standard, serum creatinine (SCr) (8,9). Earlier detection would give clinicians greater opportunity to manage AKI in hopes of decreasing AKI severity and its associated morbidity and mortality (2,4–6).
Although several AKI prediction systems have been published, none of these risk scores have been universally adopted. Many of these risk assessment tools focus on predicting AKI requiring RRT (10–14). Additionally, these tools often focus solely on critically ill intensive care unit (ICU) patients or those undergoing specific cardiovascular procedures, with scores often requiring clinical information that may not be readily available at the time of assessment (10–14). Finally, although commonly used in other critical illness risk assessment scores (15,16), to our knowledge, no AKI risk scores have incorporated inpatient vital signs and nonrenal laboratory values.
With the majority of inpatient AKI occurring outside of the ICU (3) but the majority of investigations occurring in the ICU, nephrologists and inpatient physicians are in need of a tool to detect ward-based AKI in its earliest stages. Such a tool would enable evaluation of patients at highest risk for AKI before SCr elevation and facilitate earlier interventions that could minimize AKI severity and duration (17). Additionally, given a recent call to increase the utilization of physiologic parameters in the care of patients with AKI as well as evolving data on optimizing hemodynamics in those at risk or with early AKI (18–20), we hypothesize that the inclusion of such data into a risk assessment tool will improve clinical care and AKI outcomes. As such, we aimed to develop a risk prediction algorithm for hospital ward–based (i.e., non-ICU) patients using vital signs and readily available clinical laboratory data from the electronic health record (EHR) in a large multicenter database.
Materials and Methods
Study Population
All adult patients hospitalized on the wards at five hospitals from November of 2008 to January of 2013 were included in this observational cohort study (21,22). Patients were excluded if they had no documented SCr measurements, had an SCr >3.0 mg/dl on ward admission, or developed AKI before their ward stay. The study protocol was approved, with a waiver of consent granted on the basis of minimal harm and general impracticability, by the University of Chicago Institutional Review Board (IRB#16995A) and NorthShore University HealthSystem (IRB#EH11–258).
Data Collection
Laboratory values, vital signs, and patient characteristics were obtained from the Clinical Research Data Warehouse at the University of Chicago and the Electronic Data Warehouse at NorthShore University HealthSystem. The data were time and location stamped, and the observation time for each value in the dataset was the time that it was made available in the EHR (e.g., the result time for laboratory values). Nonphysiologic vital sign values were changed to missing for the purposes of this study (these included respiratory rate >70 or <1 breath per minute, heart rate >300 or <1 beat per minute, and temperature >44°C or <32°C) (21).
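For illustration, the sketch below applies the nonphysiologic vital sign rule described above to a pandas table; the column names (resp_rate, heart_rate, temp_c) are assumptions for illustration and are not the study's actual schema.

```python
# Minimal sketch of the stated cleaning rule: out-of-range vital signs are
# set to missing rather than dropped. Column names are illustrative.
import numpy as np
import pandas as pd

def mask_nonphysiologic(vitals: pd.DataFrame) -> pd.DataFrame:
    """Set nonphysiologic vital sign values to missing (NaN)."""
    out = vitals.copy()
    out.loc[(out["resp_rate"] > 70) | (out["resp_rate"] < 1), "resp_rate"] = np.nan
    out.loc[(out["heart_rate"] > 300) | (out["heart_rate"] < 1), "heart_rate"] = np.nan
    out.loc[(out["temp_c"] > 44) | (out["temp_c"] < 32), "temp_c"] = np.nan
    return out
```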
AKI and Baseline Creatinine Definitions
AKI was defined according to the Kidney Disease Improving Global Outcomes (KDIGO) criteria (23). Baseline SCr was defined as the first SCr measured on hospital admission, and this baseline value was updated on a rolling basis as per the KDIGO SCr criteria. eGFR was calculated on the basis of the Modification of Diet in Renal Disease equation (24). As part of the algorithm validation process, 50 charts were randomly audited to determine if the AKI status was correctly identified. In this audit, our code had 100% accuracy to identify patients meeting the KDIGO SCr AKI criteria.
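For readers implementing a similar pipeline, the following is a simplified sketch of SCr-based KDIGO staging. It reflects one common reading of the SCr criteria, does not model the urine output or RRT criteria (matching this study), and assumes the rolling baseline has already been determined upstream (e.g., the admission value updated per KDIGO); it is not the study's audited code.

```python
# Simplified, hedged sketch of KDIGO staging from SCr alone.
# baseline_scr: rolling baseline (mg/dl); rise_48h: maximum absolute SCr
# increase over the prior 48 hours (mg/dl). Both are assumed inputs.
def kdigo_scr_stage(current_scr: float, baseline_scr: float, rise_48h: float) -> int:
    """Return KDIGO AKI stage (0-3) based on SCr criteria only."""
    ratio = current_scr / baseline_scr
    meets_aki = ratio >= 1.5 or rise_48h >= 0.3
    if ratio >= 3.0 or (current_scr >= 4.0 and meets_aki):
        return 3
    if ratio >= 2.0:
        return 2
    if meets_aki:
        return 1
    return 0
```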
Statistical Analyses
Demographic characteristics were compared between included patients who developed AKI and those who never developed AKI using t tests and chi-squared tests as appropriate. To develop the prediction algorithm, the dataset was divided into a derivation cohort (60%), and the model was then prospectively validated in the remaining 40% of the data. The datasets were sectioned by date at each site to simulate a prospective validation of the model. Because vital signs and laboratory values are updated periodically during a patient’s admission, a discrete time survival model was used. This method involves separating time into discrete intervals (e.g., every 12 hours) and then using logistic regression for model estimation. The values closest to the beginning of each time interval are used to predict the outcome for that time interval. If no values were available during an interval, then the most recent variable value was carried forward; if no previous value was available, the median value across the entire cohort for that variable was imputed (21). This method has been shown to provide similar results to standard Cox regression and can easily handle time-varying predictors, such as vital signs and laboratory results.
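A minimal sketch of this discrete time setup is shown below, assuming a long-format table of time-stamped observations. The column names, 12-hour interval width, and use of pandas/scikit-learn are illustrative assumptions rather than the study's actual implementation (the analyses were performed in Stata).

```python
# Sketch of the discrete time survival setup: one row per patient per
# 12-hour interval, last observation carried forward within a patient,
# cohort medians imputed otherwise, and logistic regression predicting
# AKI in the next interval. Column names are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def build_discrete_intervals(obs: pd.DataFrame, hours: int = 12) -> pd.DataFrame:
    """obs: long-format data with columns
    ['patient_id', 'time' (datetime), <covariates...>, 'aki_next_24h']."""
    obs = obs.sort_values(["patient_id", "time"])
    obs["interval"] = obs["time"].dt.floor(f"{hours}h")
    # keep the earliest observation in each interval (closest to its start)
    binned = obs.groupby(["patient_id", "interval"], as_index=False).first()
    covariates = [c for c in binned.columns
                  if c not in ("patient_id", "interval", "time", "aki_next_24h")]
    # carry the most recent value forward within each patient ...
    binned[covariates] = binned.groupby("patient_id")[covariates].ffill()
    # ... and fall back to the cohort median when no prior value exists
    binned[covariates] = binned[covariates].fillna(binned[covariates].median())
    return binned

def fit_model(train: pd.DataFrame, covariates: list[str]) -> LogisticRegression:
    model = LogisticRegression(max_iter=1000)
    model.fit(train[covariates], train["aki_next_24h"])
    return model
```

The held-out 40% of data, split by date, could then be scored with model.predict_proba and summarized with sklearn.metrics.roc_auc_score, mirroring the validation described below.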
The predictor variables included in the model were demographics (age and sex), vital signs (respiratory rate, heart rate, temperature, pulse pressure index [pulse pressure-to-systolic BP ratio], systolic and diastolic BP, oxygen saturation, and mental status), and laboratory values (basic metabolic panel: sodium, potassium, carbon dioxide, anion gap, glucose, calcium, BUN, and SCr; complete blood count: white blood cell count, hemoglobin, and platelets; and liver function tests: total protein, albumin, total bilirubin, and aspartate aminotransferase). Continuous variables were modeled using restricted cubic splines with knot placement as recommended by Harrell (25). This method involves creating a series of connected curve segments to model the risk of the outcome for each variable, with cubic terms used for each individual segment. Thus, the likelihood of the outcome can be high at both low and high values of a variable. A variable importance plot was created by calculating the chi-squared value for the inclusion of each variable and scaling the values to a maximum of 100 for the most important variable. We also analyzed the performance of a model that contained only the slope of SCr to detect AKI within the next 24 hours. Furthermore, we compared our derived model with the electronic Cardiac Arrest Risk Triage (eCART) score and the Modified Early Warning Score (MEWS), two early warning scores that have been shown to predict adverse outcomes (e.g., ICU transfer, cardiac arrest, and inpatient mortality) in ward patients (22,26,27). Area under the receiver operating characteristic curve (AUC) was calculated using probabilities from the derived discrete time logistic regression model for the prediction of SCr–based KDIGO AKI within 24 hours of each observation in the validation dataset. Subgroup analyses were conducted across baseline eGFR groups and severity of AKI. Analyses were performed using Stata, version 14.1 (StataCorp., College Station, TX), and two-tailed P values <0.05 denoted statistical significance for all comparisons.
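To make the spline expansion concrete, below is a sketch of a Harrell-style restricted cubic spline basis in NumPy; the example knot values are illustrative and are not the knots used in E-STOP-AKI.

```python
# Sketch of a Harrell-style restricted cubic spline basis: the expansion is
# linear in the tails beyond the outer knots and cubic between knots, which
# lets risk be high at both low and high values of a predictor.
import numpy as np

def rcs_basis(x: np.ndarray, knots: np.ndarray) -> np.ndarray:
    """Return columns [x, C_1(x), ..., C_{k-2}(x)] for knots t_1 < ... < t_k."""
    t = np.asarray(knots, dtype=float)
    k = len(t)
    norm = (t[-1] - t[0]) ** 2  # standard scaling factor

    def pos3(u: np.ndarray) -> np.ndarray:
        return np.clip(u, 0.0, None) ** 3  # truncated cubic term

    cols = [x]
    for j in range(k - 2):
        c = (pos3(x - t[j])
             - pos3(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
             + pos3(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2]))
        cols.append(c / norm)
    return np.column_stack(cols)

# Illustrative use: expand SCr with four knots before logistic regression.
scr = np.array([0.6, 0.9, 1.2, 2.4, 3.0])
X_scr = rcs_basis(scr, knots=np.array([0.6, 0.9, 1.3, 2.5]))
```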
Results
Our database consisted of 269,999 inpatient adults with at least one measured vital sign. After excluding those without an SCr measurement (n=52,508), those with an admission SCr >3.0 mg/dl (n=11,305), and those who developed AKI before arriving on the ward (e.g., in the ICU; n=3225), the final cohort consisted of 202,961 admissions (Figure 1). Of this final cohort, 17,541 (8.6%) went on to develop any AKI. Table 1 displays the clinical characteristics of those with and without AKI in our cohort. Compared with those never developing AKI, those who developed AKI were older (mean [SD] of 70 [16] versus 63 [19] years; P<0.001), were more likely to be black (19% versus 16%; P<0.001), were more likely to be men (49% versus 43%; P<0.001), and had a higher admission SCr (mean [SD] of 1.3 [0.6] versus 1.0 [0.4] mg/dl; P<0.001).
Flow diagram demonstrating the size of the original cohort, the rationale for excluding subjects, and the size of the final cohort.
Clinical demographics and outcomes of final cohort stratified by AKI status
Table 2 shows the AUC for the prediction of the development of at least stage 1 AKI in the next 24 hours. The model that contained only patient SCr, BUN, and their ratio had an AUC of 0.69 (95% confidence interval [95% CI], 0.68 to 0.69) for the prediction of at least stage 1 AKI within the next 24 hours. This increased to 0.74 (95% CI, 0.74 to 0.74) with the addition of basic metabolic panel, complete blood count, liver function tests, and patient vitals and demographics. There was a stepwise increase in the ability to detect AKI of all stages with the inclusion of more variables in the model. Of note, a model that only contained the slope of SCr provided an AUC of 0.65 (95% CI, 0.64 to 0.65) for at least stage 1 AKI. The final model displayed a stepwise increase in AUC across all AKI stages, performing best in those with stage 3 AKI (AUC of 0.83; 95% CI, 0.83 to 0.84). Supplemental Table 1 shows the AUCs for the derivation and validation cohorts, which were similar. Figure 2 represents a variable importance plot for the final model, which shows that SCr and BUN were the two most heavily weighted variables in the full model.
Areas under the receiver operating characteristic curves for the prediction of Kidney Disease Improving Global Outcomes serum creatinine–based AKI in the next 24 hours for models with different included variables in the validation cohort
Importance of variables in the final model scaled to a maximum of 100. The figure shows the importance of each variable in the final model. The variables are weighted according to their chi-squared values in the final model. Serum creatinine (Cr) had the highest chi-squared value and therefore was assigned a value of 100. All of the remaining variables were scaled according to their respective chi-squared values. As shown, BUN and heart rate (HR) were the second and third most heavily weighted variables in the model. Alk Phos, alkaline phosphatase; AST, aspartate transaminase; AVPU, alert, voice, pain, unresponsive (mental status measure); Bicarb, serum bicarbonate; DBP, diastolic BP; Hb, hemoglobin; ICU, intensive care unit; O2sat, oxygen saturation; PPI, pulse pressure index; RR, respiratory rate; SBP, systolic BP; Temp, temperature; WBC, white blood cell count.
The use of cubic splines in Electronic Signal to Prevent AKI (E-STOP-AKI) makes the coefficients for individual variables difficult to interpret; thus, we also investigated a simple linear model for the prediction of AKI. This linear model had lower accuracy than the cubic spline model across all three AKI stages (AUC of 0.73 versus 0.74 for stage 1, 0.74 versus 0.77 for stage 2, and 0.82 versus 0.83 for stage 3, respectively). The individual variable coefficients for the linear model are in Supplemental Table 2.
Table 3 provides data on several cutoffs for the final model and their sensitivity, specificity, positive predictive value, and negative predictive value for predicting stages 1 and 3 AKI within the next 24 hours. Several cutoff values provide high sensitivity and specificity, with a cutoff of ≥0.010 providing a sensitivity of 70%, a specificity of 66%, and a negative predictive value of 98.7% for at least stage 1 AKI within the next 24 hours. In a time to AKI analysis, the ≥0.010 cutoff was reached a median of 42 (interquartile range, 17–104) hours before the eventual rise in SCr that defined stage 1 AKI. Figure 3 shows the cumulative percentage of patients who reached a cutoff of ≥0.01 before developing stage 1 AKI. Additionally, Figure 3 illustrates the cumulative percentage of patients reaching the same threshold who did not develop AKI over the 72 hours after admission to the wards.
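The operating characteristics in Table 3 follow directly from dichotomizing the model's predicted probabilities at a given cutoff; a minimal sketch is shown below, with placeholder arrays standing in for the observed AKI labels and predicted probabilities.

```python
# Sketch of how a single predicted-probability cutoff (e.g., 0.010) maps to
# sensitivity, specificity, PPV, and NPV. y_true and y_prob are placeholders.
import numpy as np

def cutoff_performance(y_true: np.ndarray, y_prob: np.ndarray, cutoff: float = 0.010) -> dict:
    pred = y_prob >= cutoff
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    fn = np.sum(~pred & (y_true == 1))
    tn = np.sum(~pred & (y_true == 0))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```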
Sensitivity and specificity values for different probability cutoffs of the final model for detecting stages 1 and 3 AKI in the next 24 hours
Cumulative percentage of patients reaching a cutoff of ≥0.01 in the 72 hours before stage 1 AKI. The plot shows the cumulative percentage of patients who reached a cutoff of ≥0.01 before developing stage 1 AKI (solid line). The dashed line represents the cumulative percentage of patients reaching the same threshold without AKI over the course of the first 72 hours of admission. Data from within 1 hour of AKI were omitted from the graph; thus, time zero results are not included.
Subgroup Analyses
Table 3 provides the sensitivity and specificity for the model to predict the development of stage 3 AKI in the next 24 hours, with multiple cutoffs providing sensitivity and specificity >70%. The aforementioned cutoff of ≥0.010 provided a sensitivity of 82% and a specificity of 65% for stage 3 AKI and predicted the development of stage 3 AKI a median of 35 (interquartile range, 14–97) hours before the first diagnosis of SCr AKI.
We also investigated the ability of the algorithm to predict AKI in the cohort stratified by admission eGFR. The algorithm performed similarly in those with admission eGFR ≥90 ml/min per 1.73 m² and those with admission eGFR between 60 and 89 ml/min per 1.73 m², with AUCs of 0.73 (95% CI, 0.71 to 0.74) and 0.73 (95% CI, 0.73 to 0.74), respectively. The algorithm was slightly more accurate in those with eGFR between 30 and 59 ml/min per 1.73 m²: 0.74 (95% CI, 0.74 to 0.75).
Comparison with Other Ward–Based Scores
Supplemental Table 3 compares the performance of E-STOP-AKI with two other ward–based risk stratification scores (the MEWS and the eCART score). For each stage of AKI, E-STOP-AKI outperformed these other scores as measured by a higher AUC.
Discussion
We have constructed a risk algorithm that can improve risk stratification for the future development of AKI on the hospital wards. E-STOP-AKI performs progressively better for more severe AKI, reaching an AUC of 0.83 for those who go on to develop stage 3 AKI. This ability to improve AKI risk stratification over a day earlier than the current gold standard, SCr, could permit early identification of patients at high risk of adverse outcomes and perhaps decrease AKI duration and severity. Improving AKI risk stratification before changes in SCr has been an area of intense investigation over the last decade (8). Importantly, our model does not require any additional laboratory testing or biomarker measurements, because it is based entirely on clinical data from the EHR collected as part of routine patient care.
We have chosen to focus our attention on ward-based AKI as opposed to the more commonly investigated ICU–based AKI. Although tools to predict adverse ward–based events, such as ICU transfer and inpatient mortality, have been validated over the last several years (27–29), limited tools exist for ward-based AKI. Not surprisingly, our data show that these ward–based risk assessment scores (the MEWS and the eCART score) did not perform as well as E-STOP-AKI for predicting AKI.
Several EHR–based initiatives have attempted to improve the identification and risk stratification of ICU patients at risk for AKI (30–32). Kashani and colleagues (30) showed that an electronic sniffer could identify ICU patients with AKI with excellent sensitivity and specificity. Although this electronic algorithm accurately identified those with KDIGO AKI within 15 minutes of meeting the criteria, it was entirely ICU based and not designed to provide an earlier diagnosis of AKI. More recently, Wilson et al. (33) randomized inpatients to receive electronic alerts at stage 1 AKI compared with usual care. This prospective, randomized trial did not show a difference in maximum change in SCr or need for RRT; however, the trial did not link the electronic alert to any clinical intervention beyond provider notification. Additionally, the electronic alerts were triggered by changes in SCr and thus may have come too late to alter the clinical course of AKI. As such, we have developed an algorithm that can identify those at highest risk for the development of AKI before they meet the consensus definition of SCr-based AKI (23). Real time risk assessments are increasingly common in clinical medicine and may allow nephrologists the greatest opportunity to alter the course of impending AKI (27–29).
Our group has already implemented a system to identify ward-based patients at high risk for cardiac arrest and ICU transfer (the eCART score) (21,22,27,34). Real time use of the eCART score has been shown to identify patients who will experience cardiac arrest or ICU transfer over 30 hours before standard recognition by a rapid response team (27). After being identified, these patients receive a clinical intervention anchored by a team of ICU nurses and physicians that provides consultative ward–based care. E-STOP-AKI may be used in a similar fashion, with high-risk patients receiving early AKI–centered care. In a mixed ward and ICU cohort, Balasubramanian et al. (35) showed that patients who received an early nephrology consult at the time of SCr-based AKI developed a lower peak SCr (less severe AKI) and had a shorter duration of AKI. These data, combined with other evidence linking improved outcomes for those with kidney injury who are cared for by a nephrologist (36–38), point to pairing our early warning risk algorithm with early nephrology–centered care to improve outcomes. Models such as E-STOP-AKI may transform the way that inpatient care is delivered, bringing the expert clinician (nephrologist) to the bedside much earlier (36–48 hours) compared with the traditional method of waiting for the delayed increase in SCr. This delay is further compounded by the primary treating physician often not calling for a formal consultation until stage 2 or 3 AKI is present (39).
Alternatively, E-STOP-AKI could be used in concert with biomarkers of AKI. Despite evidence of several biomarkers of AKI increasing before SCr (6,40–43), to date, there are limited data showing that interventions on the basis of biomarkers levels lead to a meaningful change in patient outcomes (44,45). The majority of these biomarkers have been tested, validated, and approved for use in ICU patients, and little is known about their utility in ward-based patients. We envision using E-STOP-AKI as a screening tool to determine which patients would benefit from biomarker measurement or other AKI evaluation and testing.
E-STOP-AKI was more accurate in those with stage 3 AKI than in those with stage 1 AKI, and it was accurate over a wide distribution of baseline renal function. The ability to identify impending AKI in those with decreased baseline renal function is crucial, because those with CKD are at increased risk for the development of AKI (11,46,47). This connection between AKI and CKD is readily apparent within E-STOP-AKI, with SCr and BUN being the most heavily weighted variables in the algorithm (Figure 2).
Our study has several strengths; most importantly, we have derived and validated E-STOP-AKI using a large, multicenter cohort of a diverse urban and suburban patient population, which increases external generalizability (21,22,34). Additionally, we used the current consensus definition of SCr-based AKI (KDIGO) and were able to stringently apply the SCr criteria to our cohort (23). Finally, we also compared the performance of our model with previously validated ward–based risk scores.
Our study has several limitations. We were not able to apply the urine output portion of the AKI criteria to our cohort; however, this is a common limitation of many large–scale investigations of inpatient AKI, because amassing and analyzing hourly urine outputs is difficult. This is especially true in ward patients, who do not uniformly have indwelling urinary catheters (40,48,49). Additionally, we did not have access to SCr values and patient comorbidities before hospital admission and thus used the admission SCr as the reference value. There is much debate in the AKI literature about the best way to establish baseline SCr given that this crucial piece of data is often missing, with no one method showing superiority or gaining universal acceptance (50–52). As such, similar to the method used by Wilson et al. (33), we chose to exclude all study participants with an initial SCr ≥3.0 mg/dl in an attempt to minimize misclassification of those with AKI on admission. Furthermore, our model is limited in that it does not include medications or other interventions administered during the hospital stay. Knowing that CKD, prior AKI, diabetes, and hypertension have all been shown to be risk factors for AKI (11,46), it remains unclear how these clinical data would affect the performance of our model. Finally, we did not have the ability to discriminate which patients received RRT in our cohort.
In conclusion, we present a novel risk assessment tool for the prediction of AKI in hospital ward patients. E-STOP-AKI, which includes demographics, vital signs, and laboratory tests, outperformed a model of SCr and BUN alone and can be used to identify patients at high risk for the future development of AKI over a day earlier than changes in SCr. Real time implementation of E-STOP-AKI requires no ancillary or AKI–specific blood or urine tests. In the future, we plan to pair E-STOP-AKI with AKI biomarkers and AKI-centered care to optimally identify patients who are likely to develop the most severe forms of AKI.
Disclosures
J.L.K. was supported by a Norman S. Coplon Grant from Satellite Health Care (San Jose, CA). D.P.E. and M.M.C. have a patent pending (ARCD. P0535US.P2) for risk stratification algorithms for hospitalized patients. In addition, D.P.E. has received research support and honoraria from Philips Healthcare (Andover, MA), research support from the American Heart Association (Dallas, TX) and Laerdal Medical (Stavanger, Norway), and an honorarium from Early Sense (Tel Aviv, Israel). D.P.E. has ownership interest in Quant HC (Chicago, IL), which is developing products for risk stratification of hospitalized patients.
Acknowledgments
We thank Timothy Holper, Justin Lakeman, and Contessa Hsu for assistance with data extraction and technical support; Nicole Twu for administrative support; Christopher Winslow; Ari Robicsek; and Robert Gibbons for introducing us to discrete time survival analysis.
This research was funded, in part, by institutional Clinical and Translational Science Award grant UL1 RR024999 (principal investigator: Julian Solway). M.M.C. is supported by Career Development Award K08 HL121080 from the National Heart, Lung, and Blood Institute.
Preliminary versions of these data were presented at the Annual Meeting of the American Society of Nephrology on November 3–8, 2015 in San Diego, CA.
Footnotes
Published online ahead of print. Publication date available at www.cjasn.org.
This article contains supplemental material online at http://cjasn.asnjournals.org/lookup/suppl/doi:10.2215/CJN.00280116/-/DCSupplemental.
- Received January 9, 2016.
- Accepted July 22, 2016.
- Copyright © 2016 by the American Society of Nephrology