BACKGROUND
Clinicians informally assess changes in patients' status over time to prognosticate their outcomes. Incorporating trends in patient status into regression models could therefore improve the models' ability to predict outcomes. In this study, we used a unique approach to measure trends in patients' risk of death in hospital and determined whether incorporating these trend measures into a survival model improved the accuracy of its risk predictions.
METHODS
We included all adult inpatient hospitalizations at our institution between 1 April 2004 and 31 March 2009. Using the daily mortality risk scores from an existing time-dependent survival model, we created five trend indicators: the absolute change and the relative percent change in the risk score from the previous day; the absolute change and the relative percent change in the risk score from the start of the trend; and the number of consecutive days with a trend in the risk score. In the derivation set, we determined which trend indicators were associated with time to death in hospital, independent of the existing covariates. In the validation set, we compared the predictive performance of the existing model with and without the trend indicators.
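For concreteness, the following minimal sketch (not part of the study itself) shows one way such trend indicators could be computed from a vector of daily risk scores. The definition of a trend used here, a run of consecutive daily changes in the same direction, is an assumption made for illustration, and all function and variable names are hypothetical.

import numpy as np

def trend_indicators(risk_scores):
    """Return five trend indicators for the latest day of a series of
    daily risk scores (element 0 = first hospital day).  A 'trend' is
    taken to be a run of consecutive daily changes in the same direction;
    this definition and all names are assumptions for illustration."""
    scores = np.asarray(risk_scores, dtype=float)
    if scores.size < 2:
        raise ValueError("need at least two daily risk scores")

    today, yesterday = scores[-1], scores[-2]

    # Change from the previous day: absolute and relative percent.
    abs_prev = today - yesterday
    rel_prev = 100.0 * abs_prev / yesterday if yesterday else np.nan

    # Walk backwards to find the day on which the current run of
    # same-direction daily changes began (the 'start of the trend').
    direction = np.sign(abs_prev)
    start = scores.size - 2
    while (start > 0 and direction != 0
           and np.sign(scores[start] - scores[start - 1]) == direction):
        start -= 1
    trend_start = scores[start]

    # Change from the start of the trend: absolute and relative percent.
    abs_start = today - trend_start
    rel_start = 100.0 * abs_start / trend_start if trend_start else np.nan

    # Number of consecutive days the risk score has moved in one direction.
    trend_days = int(scores.size - 1 - start) if direction != 0 else 0

    return {"abs_change_previous_day": abs_prev,
            "rel_pct_change_previous_day": rel_prev,
            "abs_change_trend_start": abs_start,
            "rel_pct_change_trend_start": rel_start,
            "days_with_trend": trend_days}

For example, daily risk scores of 0.10, 0.12 and 0.15 would yield a two-day upward trend, an absolute change of 0.03 from the previous day, and an absolute change of 0.05 (a 50% relative change) from the start of the trend.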
RESULTS
Three trend indicators were independently associated with time to death in hospital: the absolute change in the risk score from the previous day; the absolute change in the risk score from the start of the trend; and the number of consecutive days with a trend in the risk score. However, adding these trend indicators to the existing model produced only small improvements in its discrimination and calibration.
CONCLUSIONS
We produced several indicators of trend in patient risk that were significantly associated with time to death in hospital, independent of the model used to create them. Our approach of incorporating risk trends could be explored in other survival models to improve their predictive performance without the collection of additional data.