08 November 2022

VAHI Data Analytic Fellow Dr Graeme Duke

The hospital standardised mortality ratio (HSMR) is arguably the simplest and yet the most controversial of all hospital metrics. 

It is the simple ratio of observed to expected deaths, where a result above the benchmark indicates higher-than-expected deaths and, possibly, a lower level of patient safety.
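For illustration, the calculation itself is trivial once a risk model supplies a predicted probability of death for each separation. The sketch below is hypothetical (toy numbers, invented variable names), not any VAHI model:

```python
# Minimal HSMR sketch: expected deaths are the sum of model-predicted
# probabilities of death; observed deaths are a simple count.
predicted_risk = [0.02, 0.10, 0.01, 0.30, 0.05]  # hypothetical per-patient risks
died = [0, 0, 0, 1, 0]                           # observed in-hospital deaths

observed = sum(died)              # 1 death observed
expected = sum(predicted_risk)    # 0.48 deaths expected in this toy cohort
hsmr = 100 * observed / expected  # conventionally scaled so 100 = benchmark
print(f"HSMR = {hsmr:.0f}")       # values above 100 suggest excess deaths
```

All the difficulty lies in producing credible values of predicted_risk; the ratio itself is the easy part.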

The HSMR is controversial because it has so often failed to deliver [1]. After all, death is an infrequent event in acute care (0.5% of separations, and only 1.5% even of multi-day separations), and the majority of deaths are expected and unavoidable. Evidence for a consistent relationship between quality of care and the HSMR remains elusive [1]. To compound this, the HSMR has frequently been mis-specified, misleading, and misinterpreted. The number of HSMR options, and the absence of a 'gold standard', attest to this uncertainty.

For these reasons, VAHI has temporarily withdrawn the HSMR from its suite of monitoring tools in the Program Report for Integrated Service Monitoring (PRISM).

So, why persist with hospital mortality at all? 

The ultimate indicator

There are several good reasons to retain the HSMR, not the least of which is that death is the adverse outcome that most patients prefer to avoid. It is simple to define and easy to identify. Moreover, the most common place of death in Australia is an acute health service, accounting for 45% of all deaths in Victoria [2]. While most deaths are expected and unavoidable, it is important to know that unexpected deaths have not increased. The HSMR is one way of monitoring 'unexpected' deaths.

Without continuous monitoring we do not know whether unexpected deaths are increasing or decreasing. The conclusion that yesterday's deaths were all unavoidable is no guarantee of the same tomorrow, or next year. The (false) assumption that healthcare standards will inevitably improve is wishful thinking: if it were true, there would be no need to monitor.

There is also compelling evidence for an HSMR from major healthcare scandals over the past 50 years, within Australia and internationally. These scandals had two features in common: an increase in unexpected deaths, and the absence of monitoring. There is evidence that continuous monitoring may have identified the misconduct sooner and, possibly, saved lives [3,4].
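To illustrate what such continuous monitoring can look like, the sketch below implements a risk-adjusted EWMA in the spirit of reference [4]. The smoothing weight and the data are hypothetical:

```python
# Risk-adjusted EWMA sketch: smooth observed outcomes and model-predicted
# risks with the same weight, then compare the two curves over time.
lam = 0.05  # hypothetical smoothing weight; smaller values track slower drift

def ewma(values, lam, start):
    smoothed, s = [], start
    for v in values:
        s = lam * v + (1 - lam) * s
        smoothed.append(s)
    return smoothed

deaths = [0, 0, 1, 0, 0, 0, 1, 0]                         # outcome per separation
risks = [0.02, 0.05, 0.40, 0.01, 0.03, 0.02, 0.10, 0.04]  # predicted risk of death

baseline = sum(risks) / len(risks)
observed_curve = ewma(deaths, lam, start=baseline)
expected_curve = ewma(risks, lam, start=baseline)

# A sustained gap between the curves flags more deaths than the casemix
# predicts; control limits (omitted here) decide when the gap is a signal.
for o, e in zip(observed_curve, expected_curve):
    print(f"observed {o:.4f}  expected {e:.4f}")
```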

The HSMR has many practical applications beyond the monitoring of hospital outcomes, including epidemiology, clinical research, and reassuring healthcare managers, clinicians and the communities we serve of the high standard of care in acute health.

A way forward – calibration, validation and adjustment

Will we ever find a reliable HSMR? There are many elements to an ideal HSMR. One of the most crucial is regular tuning (calibration) to contemporary data. Many models have been calibrated to data that are now many years old. Changes in casemix, models of care, and improvements in therapeutics cause the HSMR to drift and lose accuracy, so regular recalibration is required for it to remain valid.
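One way to recalibrate without rebuilding the whole model is logistic recalibration: keep the existing risk score but refit its intercept and slope against recent outcomes. The sketch below assumes scikit-learn and invented data; it is one option among several, not a description of the VAHI approach:

```python
# Logistic recalibration sketch: correct drift in an old model's predictions
# by refitting an intercept and slope on the logit scale.
import numpy as np
from sklearn.linear_model import LogisticRegression

old_risk = np.array([0.02, 0.10, 0.01, 0.30, 0.05, 0.20])  # old model's predictions
died = np.array([0, 0, 0, 1, 0, 1])                        # recent observed outcomes

logit = np.log(old_risk / (1 - old_risk)).reshape(-1, 1)   # work on the logit scale
recal = LogisticRegression().fit(logit, died)

recalibrated_risk = recal.predict_proba(logit)[:, 1]       # updated expected risks
print(recalibrated_risk.round(3))
```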

The HSMR model should adjust for all patient-related factors (those present prior to admission) that influence hospital outcome. Not all of these are readily available. Equally important, the HSMR should exclude hospital-related factors that affect outcome, because these depend on the very quality of clinical care we wish to measure. For example, if hospital-acquired complications are incorporated into the model, poor-performing hospitals with more complications appear to have sicker patients, producing a misleadingly low HSMR value.
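A practical safeguard is to build model covariates only from diagnoses flagged as present on admission. The sketch below uses a hypothetical flag scheme ('POA' for present on admission, 'HAC' for hospital-acquired), standing in for whatever condition-onset flags the coded data actually provide:

```python
# Sketch: exclude hospital-acquired diagnoses from the risk model's inputs,
# so complications of care cannot make patients look sicker on arrival.
episodes = [
    {"id": 1, "diagnoses": [("sepsis", "POA"), ("pressure_injury", "HAC")]},
    {"id": 2, "diagnoses": [("heart_failure", "POA")]},
]

def admission_covariates(episode):
    # Keep only conditions present prior to admission.
    return [dx for dx, flag in episode["diagnoses"] if flag == "POA"]

for ep in episodes:
    print(ep["id"], admission_covariates(ep))
# Episode 1's pressure injury is dropped: it reflects care, not casemix.
```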

The HSMR must be clinically and statistically valid. This requires expertise in coding, clinical interpretation, data science, statistics, and reporting. Results should be presented in an easy-to-interpret visual format, especially for those without clinical or statistical expertise.
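A funnel plot is one widely used format for this: each hospital's HSMR is plotted against its volume, inside control limits that narrow as volume grows, so outliers are visible at a glance. A sketch, assuming matplotlib and hypothetical hospital data:

```python
# Funnel plot sketch: HSMR against expected deaths, with approximate
# Poisson-based 95% control limits around the benchmark of 100.
import numpy as np
import matplotlib.pyplot as plt

expected = np.array([20, 50, 120, 300, 800])  # expected deaths per hospital
observed = np.array([25, 48, 130, 290, 860])  # observed deaths (hypothetical)
hsmr = 100 * observed / expected

e = np.linspace(10, 1000, 200)
halfwidth = 100 * 1.96 / np.sqrt(e)  # rough 95% limits: 100 +/- 196/sqrt(E)

plt.scatter(expected, hsmr, zorder=3, label="hospitals")
plt.plot(e, 100 + halfwidth, "k--", label="95% control limits")
plt.plot(e, 100 - halfwidth, "k--")
plt.axhline(100, color="grey")
plt.xlabel("Expected deaths")
plt.ylabel("HSMR")
plt.legend()
plt.show()
```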

Many of the current HSMR models fail to meet one or more of these requirements. Hence the current VAHI project to redesign our HSMR models, retaining the simple and essential elements while minimising the controversy. As George Box famously stated, 'All [HSMR] models are wrong but some are useful' [5].

Dr Graeme Duke has joined VAHI as Data Analytic Fellow, in addition to his roles as Deputy Director, Eastern Health Intensive Care Services in Melbourne, and clinical lead for intensive care research. To get in touch with Graeme about how we can collaborate to use data more effectively, contact [email protected]. 

  1. Pitches DW, et al. What is the empirical evidence that hospitals with higher risk-adjusted mortality rates provide poorer quality care? A systematic review of the literature. BMC Health Serv Res 2007;7(1):91.
  2. Australian Bureau of Statistics. Classifying place of death in Australian mortality statistics. Available at https://www.abs.gov.au/statistics/research/classifying-place-death-australian-mortality-statistics
  3. Spiegelhalter D, et al. Risk-adjusted sequential probability ratio tests: applications to Bristol, Shipman and adult cardiac surgery. Int J Qual Health Care 2003;15(1):7–13.
  4. Pilcher DV, et al. Risk-adjusted continuous outcome monitoring with an EWMA chart: could it have detected excess mortality among intensive care patients at Bundaberg Base Hospital? Crit Care Resusc 2010;12(1):36–41.
  5. Box GEP. Science and statistics. J Am Stat Assoc 1976;71(356):791–799.