13 June 2023

VAHI Data Analytic Fellow Dr Graeme Duke

Were you ever called to the Principal’s office at school to explain your behaviour, when the real provocateur escaped unnoticed? How did you feel?

I imagine, like me, you did not say to yourself: “I am looking forward to this challenge. I will approach it as a beneficial, character-building learning experience.”

If you are like me, you were looking for any excuse to blame others and somehow prove your innocence, despite the evidence of the black eye!

The appearance of ‘red flags’ or ‘potential outlier’ signals in your hospital’s reports may well trigger a similar response.

“It’s not my fault. It’s a data problem. My patients are just sicker. It’s hard to fly with the eagles when you work with turkeys.” How should we approach this and respond to the “Please explain” request?

One approach to ‘character-building’ experiences is the mindset that, if there is a real problem, we want to know about it so that together we can correct it. I have not yet met anyone who did not want to improve patient care.

If, on the other hand, it is a false alarm, then it is still worth reviewing and identifying the cause, so that the source (poor documentation, data quality, analytic or reporting method, etc.) can be addressed and similar ‘false alarms’ reduced in the future.

Earlier this year a ‘red flag’ at a major hospital was found to be due to an error in a single data element. The characteristic features of the ‘red flag’ alerted staff and helped identify a previously unknown source of error.

While there will always be ‘bad apples in every barrel’, it is very rare that human error is the cause of a red flag, an outlier signal, or an accreditation failure. In over half of these events the source lies in the limitations of the data, its analysis, or the reporting method.

The remainder arise from shifts in services or casemix, or from resource deficiencies (staff, facilities, equipment, etc.). In this situation the ‘red flag’ may be a powerful, objective argument for redressing those deficits and risks.

As alluded to in my April column, a systematic approach is needed. Don’t assume your data are correct until they have been checked. While it may not be feasible to access the details of the statistical analysis, it is worth seeking to understand the data definitions, inclusions, and exclusions. For example, hospitals that admit palliative care patients under the acute (rather than palliative) care stream may have a higher-than-expected mortality.
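To make this concrete, here is a minimal sketch, in Python and with invented numbers, of how that streaming decision can distort a standardised mortality ratio (SMR). The observed-over-expected calculation is a generic illustration, not VAHI’s actual indicator method.

```python
# Hypothetical illustration (invented numbers): palliative patients
# admitted under the acute care stream can inflate an acute mortality
# indicator such as a standardised mortality ratio (SMR).

acute_deaths = 30        # observed deaths among genuinely acute admissions
expected_deaths = 32     # deaths predicted by the risk model for this casemix
palliative_deaths = 15   # deaths among palliative patients streamed as acute

# SMR with palliative patients correctly excluded: observed / expected
smr_correct = acute_deaths / expected_deaths
print(f"SMR, palliative excluded: {smr_correct:.2f}")   # 0.94

# SMR when palliative deaths are counted as acute, but the risk model's
# expected deaths do not rise to match
smr_inflated = (acute_deaths + palliative_deaths) / expected_deaths
print(f"SMR, palliative included: {smr_inflated:.2f}")  # 1.41

# The hospital now looks like a mortality outlier purely because of how
# palliative admissions were streamed, not because care was worse.
```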

Be aware of your hospital’s rate of coding statistical separations (a transfer from one care type to another within the same site). Several major clinical indicators are currently based on the individual episode of care, rather than on the entire period of hospital care.

Jurisdictional benchmarks derived from episode data may create two opposing illusions. Hospitals that code statistical separations more frequently may appear to have a shorter average length of stay (aLOS), lower hospital-acquired complication (HAC) rates, and lower hospital mortality.

Conversely, health services that code statistical separations less frequently may appear to have a longer aLOS, higher HAC rates, and higher mortality.
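A small worked example may help. The sketch below, in Python with a single hypothetical ten-day stay, shows only the arithmetic behind the illusion; it is not any jurisdiction’s actual indicator specification.

```python
# Minimal sketch (hypothetical data): how coding a statistical separation
# changes episode-based average length of stay (aLOS) versus the full
# period of hospital care.

def episode_alos(episode_days):
    """Episode-based aLOS: total bed-days divided by number of episodes."""
    return sum(episode_days) / len(episode_days)

# One patient, one 10-day hospital stay.
# Hospital A codes a statistical separation at day 6, so the stay becomes
# two episodes (acute: 6 days, subacute: 4 days).
hospital_a_episodes = [6, 4]

# Hospital B codes no statistical separation: one 10-day episode.
hospital_b_episodes = [10]

print(episode_alos(hospital_a_episodes))  # 5.0 days
print(episode_alos(hospital_b_episodes))  # 10.0 days
# Identical care and identical time in hospital, yet Hospital A's
# episode-based aLOS appears half that of Hospital B.
```

The same mechanism dilutes episode-based HAC and mortality rates, because each statistical separation adds another episode to the denominator.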

Better models are currently under development. Until then, we should approach every “Please explain?” as a character-building experience.

Dr Graeme Duke is VAHI’s Data Analytic Fellow, in addition to his roles as Deputy Director, Eastern Health Intensive Care Services in Melbourne, and clinical lead for intensive care research. To get in touch with Graeme about how we can collaborate to use data more effectively, contact [email protected]