Behind the Hospital Acquired Condition Data Curtain.

The 1-Percent HAC Reduction Program: How valid is the evaluation construct?
Perhaps not so much.

An increasing number of private organizations attempt to measure the quality and safety of hospital care. I have already expressed my growing concern about the validity and utility of such ratings, which seem to have lives of their own. Hospitals are spending a fortune to collect and report on a variety of ever-changing indicators and to improve their ratings. When the scores are good, hospitals use them to market their services. When scores are not so good, hospitals either make no public comment, criticize the rating system, or offer putative explanations of why their hospitals face greater challenges than others. This selective use of quality scores in advertising has always seemed a little hypocritical to me. Is it immaterial that a hospital can be ranked as both the worst and the best at something at the same time? Things are not that compartmentalized within hospitals.

The future is now.
Of all the rating systems, the ones that count the most are those required by federal law and now being used by Medicare to determine payments to hospitals. Nascent efforts to shift from our current pay-for-volume model to pay-for-value-or-outcome are becoming operational. Selected measures of “quality” are now being used to penalize or reward hospitals (and other healthcare providers). The Hospital Acquired Condition Reduction Program is impacting hospitals’ bottom lines and reputations right now. One can only hope that we are doing the right things. I wish I felt more confident that we are.

Looking under the hood.
When I began to look at the actual data underlying the HAC program, I was immediately struck by the non-uniformity of the data collection process. For background, a given hospital’s total Hospital Acquired Condition (HAC) score is a composite of two different domains. Domain 1, making up 35% of the total, is itself a composite of the 8 performance indicators in the AHRQ PSI 90 panel tallied over a two-year period ending June 30, 2013. Since these indicators are harvested from Medicare fee-for-service hospital billing data, every participating Medicare hospital should in theory have a score whether it wants one or not. Hospitals receive a score from 1 to 10 reflecting their decile: a score of 10 means a hospital is in the worst 10% for a given measure.
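
For readers who want to see the mechanics, here is a minimal sketch in Python of the decile scoring described above. It is my own illustration, not CMS’s actual risk-adjusted method, and the hospital rates are invented.

    import math

    def decile_score(rate, all_rates):
        # Assign a 1-10 decile score for one measure: 10 means the
        # worst (highest-rate) 10% of hospitals. A rough sketch only,
        # not CMS's actual risk-adjusted methodology.
        at_or_below = sum(r <= rate for r in all_rates)
        percentile = at_or_below / len(all_rates)
        return max(1, math.ceil(percentile * 10))

    # Ten invented hospital rates for a single measure:
    rates = [0.0, 0.4, 0.6, 0.9, 1.2, 1.7, 2.2, 2.8, 3.1, 4.0]
    print(decile_score(4.0, rates))  # 10 -> worst decile
    print(decile_score(0.0, rates))  # 1  -> best decile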

Not all are at risk.
Maryland hospitals and hospitals with too little data to analyze do not receive scores. In this first iteration, there are 46 Maryland and some 52 non-Maryland hospitals spared from the possible 1% payment deduction, along with the hundreds of specialty hospitals and Medicare Critical Access Hospitals, regardless of their quality of care.

The rest of the score counts the most.
A Domain 2 score makes up the remaining 65% of the total HAC score. It comprises two indicators from the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN) data set: Central Line Associated Blood Stream Infections (CLABSI) and Catheter Associated Urinary Tract Infections (CAUTI). These data are abstracted from clinical records (not billing records), where it seems to me there is more room for subjectivity. Each of these two indicators is scored separately from 1 to 10 as above. It is within this domain that the greatest non-uniformity of data reporting is present. Indeed, of 3300 scored hospitals nationally, some 1020 did not report central line blood stream infections and 669 did not report urinary tract infections from indwelling urinary catheters. No Domain 2 score at all was available for 662 hospitals! When both Domain 2 indicators are available, they are averaged for their (majority) portion of the total score. If only one of the two infection rates is scored, it alone becomes the Domain 2 score. Other caveats and scoring rules are involved. If this sounds confusing, it is because it is. Details of the scoring system can be found here.
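
To make the weighting and averaging rules concrete, here is a minimal sketch of the total-score arithmetic as I read the published rules. It is my own reconstruction in Python, not CMS’s code; in particular, the treatment of a wholly absent Domain 2 (Domain 1 carrying the entire score) is my inference.

    def total_hac_score(domain1, clabsi=None, cauti=None):
        # Sketch of the total HAC score under the 35%/65% weighting
        # described above; my own reconstruction, not CMS's code.
        # domain1 is the 1-10 PSI 90 decile score; clabsi and cauti
        # are 1-10 scores, or None where a hospital did not report.
        reported = [s for s in (clabsi, cauti) if s is not None]
        if not reported:
            # Assumption: with no Domain 2 at all, Domain 1 appears
            # to carry the entire total score.
            return float(domain1)
        domain2 = sum(reported) / len(reported)  # average, or the lone score
        return 0.35 * domain1 + 0.65 * domain2

    # The penalty applied to total scores above 7.0 this year:
    print(total_hac_score(7, clabsi=10, cauti=10))  # 8.95 -> penalized
    print(total_hac_score(7))                       # 7.0  -> just escapes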

Why so many holes in the data?
There are several reasons for the incomplete data collection. First of all, there is an element of voluntary participation in the Domain 2 infection reporting system that I do not fully understand; apparently hospitals are deemed to have given their permission for their infection rates to be used if they participate in a separate federal program. Additionally, hospitals that have no adult, neonatal, or pediatric intensive care units do not have to report Domain 2 events to the HAC program at all!  [As if intravenous or urinary catheters are not used on general medical-surgical wards!]  Moreover, if a given hospital has fewer than 1 of a given infection event in the two-year reporting period, it receives no score for that category. [It would be a medical miracle or a triumph to have 1 or fewer Domain 2 events in any hospital!]  As a consequence of these and perhaps other exclusions, I am unable to determine which of the potential reasons for the absence of a score might be operative for a given hospital. In all other forms of clinical study, incomplete or non-uniform collection of data is one of the most serious flaws limiting any conclusions that might be drawn. So it is also in quality and safety rating systems that seek to compare hospitals with one another. As is our practice, let’s look at the breakdown of data for Kentucky hospitals for examples of the possible effects of missing data.

Just the facts, Ma’am.
Sixty-five of Kentucky’s 130 or so hospitals were at risk of having their Medicare payments reduced. Ten of these had HAC scores greater than 7.0 and are even now receiving 1% less from Medicare. Review a list of all Kentucky hospitals and their scores here. Inspection of the scores reveals that there were different pathways to the penalty. Most of the 10 had poor scores in both domains, but hospitals like UofL, UK, or Jackson Purchase had middling Domain 1 scores that were dragged into penalty range by very bad Domain 2 scores.

Rockcastle County (which just escaped the penalty) had a Domain 1 score of 7.0 but for one reason or another did not have to report for Domain 2. Would its total score have been improved or worsened if it had? Might it be to the advantage of some hospitals not to report when they have the option? A score of 7 was the most common single-digit integer HAC score awarded overall. The odds of retaining this penalty-sparing score of 7.0 are enhanced by not having to average in a higher Domain 2 score, another potential advantage to hospitals without Domain 2 reporting.
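
To put numbers on it: if, as appears to be the case, a hospital with no Domain 2 simply keeps its Domain 1 score as its total, Rockcastle’s 7.0 stands. Had it been required to average in even a Domain 2 score of 8, its total would have been 0.35 × 7.0 + 0.65 × 8.0 = 7.65, comfortably into penalty territory.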

Missing data can be a big deal.
Elsewhere down the list of Kentucky hospitals spared the 1% cut this time around are other examples of the potential impact of missing data. Hospitals like Georgetown, Jewish-Shelbyville, Pikeville, Highlands, or Mount Sterling had the worst possible Domain 1 scores but were shifted into a safe average total range by excellent Domain 2 scores. Conversely, Jennie Stuart and Greenview had the best possible Domain 1 scores but were almost pushed into penalty range by terrible Domain 2 scores. Which of these two domains is more important? Can it really be said that being in the worst 10% on Domain 1 or Domain 2 can be so easily offset by a good score in the other domain?

If the total HAC score can be so dependent on the contribution of one discordant domain or the other, then what are we to make of those hospitals with missing numbers? What kind of pressure is being put on hospitals to make their records look good one way or another? Are hospitals that make an honest effort to bill accurately and report quality measures being penalized? Are some U.S. hospitals “gaming” the quality-reporting system in the same way that hospital billings have been gamed to hospitals’ financial advantage? Ultimately, can it be said that the existing HAC Reduction Program is fair to hospitals’ finances or to their reputations? In my opinion, the answer is no, it is not fair. Can someone make the case that it is?

Whether such scores can fairly distill the fantastically complicated operations of a hospital into a single number valid enough to determine payments is, in my mind, an unproven hypothesis. Even if it could be done, is taking away resources going to make things better or worse? It is not that I think the various quality and performance indicators being used here are without value; indeed, they are important objective outcome measures that we need more of. Hospitals with high infection rates need to do better or explain themselves. Similarly, hospitals with high rates of bed sores, wound dehiscence, postoperative infections, or falls are being shown where they need to improve. Patients with a choice, and their payers, should be watching how hospitals react to their ratings. Increased clinical transparency is here for good.

Some of my readers may interpret my concerns about the validity or utility of current comprehensive quality-scoring systems as a reason to ignore them altogether, or at least not to attach any undue negative connotation to a poor showing. I cannot let every hospital off the hook so easily. As long as some hospitals continue to brag about the good ratings they receive from Medicare or one of the many other quality-evaluating organizations, they will get little sympathy from me when they receive an unfavorable one, nor should they from my readers.

Peter Hasselbacher, MD
President, KHPI.
Emeritus Professor of Medicine, UofL
December 29, 2014

Addendum:
Why was a HAC cutoff chosen that penalized fewer than the worst 25% of hospitals?

Recall that there was some confusion, indeed errors, in national reporting about how many hospitals received the 1% penalty. According to “Table 17,” in which CMS gives its official notice of penalized hospitals, 3300 eligible acute care hospitals received a HAC score and 724 of these (21.9%) were awarded the 1% reduction. The average score of all was 5.41 and the 75th percentile was at 7.0. By statute and regulation, CMS may apply the 1% penalty to the “worst performing quartile” with respect to risk-adjusted HAC quality measures. This is not the same as dinging the top 25% of hospitals with the worst scores, which would be 825 hospitals! (I think I got this right!)

The 75th percentile of the scores themselves, above which the penalty would apply, was 7.0. However, because there were so many hospitals (143) with a score of exactly 7.0, if all hospitals with 7.0 or more were penalized, the total number (867) would exceed 25% of all eligible hospitals. By virtue of the way the total HAC score is constructed, and with so many Domain 2 scores missing, there were only 187 different discrete scores awarded, of which a disproportionate number were simple integers. A score of 7 was the single most common one. CMS may have anticipated this result and has been careful to state that a top quartile “may be subject to a 1% penalty.” As far as I am aware, the law does not prohibit CMS from penalizing more than 25% of all hospitals, but it would make for bad press and angry providers. Given that this is the first year of the program’s full implementation, the solution chosen seems reasonable.
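
A toy calculation makes the tie problem explicit. In the Python sketch below, only the three counts come from the discussion above (3300 scored hospitals, 143 at exactly 7.0, and 724 above it); the filler scores themselves are fabricated.

    # Toy illustration of the ties at the 7.0 cutoff. Only the counts
    # (3300 scored, 143 exactly at 7.0, 724 above) come from the CMS
    # figures discussed above; the placeholder scores are fabricated.
    n_total, n_at_cutoff, n_above = 3300, 143, 724
    scores = ([7.0] * n_at_cutoff + [8.0] * n_above
              + [5.0] * (n_total - n_at_cutoff - n_above))

    strictly_above = sum(s > 7.0 for s in scores)   # 724 (21.9%)
    including_ties = sum(s >= 7.0 for s in scores)  # 867 (26.3%) > 25%
    print(strictly_above, round(strictly_above / n_total, 3))
    print(including_ties, round(including_ties / n_total, 3))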

I thank Jordan Rau of Kaiser Health News for this illumination!
