Hospital Safety Scores: The Empire Strikes Back!

Hospitals react to release of Safety Scores.

Earlier this month I commented on the release of a new hospital safety rating system developed by the Leapfrog Group, one of the most respected organizations attempting to promote hospital safety and quality. It gave many of America’s hospitals a letter grade for safety. The results for Kentucky were surprising to me, and perhaps also to the hospitals themselves. Reaction by hospitals to Leapfrog’s Hospital Safety Scores has been largely predictable. Those hospitals getting an ‘A’ were proud to announce it, while hospitals doing less well fell back on the usual rebuttals: that the data supporting the scores was too old, unreliable, irrelevant, or otherwise flawed. If those excuses are valid, then we are all in trouble, because 16 of the 26 measures evaluated come directly from information provided by the hospitals themselves to the United States Government and are used in the Centers for Medicare & Medicaid Services (CMS) public Hospital Compare database. If the state-of-the-art of large-scale medical quality measurement is that bad, then we need to start over!

The reaction of the American Hospital Association (AHA), the hospital industry’s major lobbying group, was quite unrestrained. In a letter earlier this week to the Leapfrog Group, AHA’s President Rich Umbdenstock let go with more than both barrels. He declared that the “scorecard’s assessment was neither fair nor accurate,” and that “no one should use it to guide their choice of hospitals.” He goes so far as to suggest that the methodology and choice of measures do not even meet the Leapfrog Group’s own established standards! Leapfrog’s President replied with a more restrained but very convincing rebuttal that is fun to read. Note that while I am more than willing to give the AHA credit for keeping the welfare of all Americans in mind, its primary job is to protect the interests of its member hospitals.

Attached to the AHA letter was a slightly more detailed three-page critique of the Hospital Safety Score methodology that went so far as to accuse Leapfrog of manipulating the data. Among its methodological objections, it implied that the CMS quality measures used were somehow relevant only for rare and “potentially” serious complications of surgical or other care. Personally I do not see proper use of antibiotics during surgery, preventing blood clots and pulmonary emboli, decubitus ulcers, falls and trauma, bloodstream infections, post-surgical pulmonary failure, rupture of surgical incisions, deaths in surgical inpatients, removing urinary catheters promptly, and the like as particularly rare. These medical complications definitely have more than just the “potential” to do harm, and are things that both doctors and patients have a right to be afraid of.

The primary source of information for ten of the items in the evaluation panel came from a Leapfrog Hospital Survey that was voluntarily provided by the hospitals themselves. However, not all of the hospitals receiving a safety score filled out the survey. In Kentucky, it appears that only 10 hospitals did so, including the five Norton hospitals in Louisville which made up half of the ten! I give these Kentucky hospitals special credit for their commitment and courage to open their operations to the public. No other Louisville hospital participated in the Leapfrog survey. (Here is a list of the ten with their Safety Scores.) For those hospitals that chose not to participate in the determination, Leapfrog uses alternate sources of information including an annual AHA Hospital Survey that is not freely available to the public. (I can’t afford to buy it either!) The AHA seemed offended that Leapfrog did not give equal weights to its survey questions related to computerized entry of physicians’ orders, and whether hospital ICUs were staffed solely by specially trained intensive care physicians. Leapfrog counters that the AHA questionnaire was less rigorous than its own. Leapfrog made no secret of its methodology for scoring hospitals that choose not to fill out its survey and emphasized that not participating was no barrier to getting a score of ‘A.’ Nonetheless, the fact that alternate sources of information were used for some hospitals is the most credible criticism offered by the AHA.

In my opinion, Leapfrog deflected AHA’s few criticisms convincingly, and even devastatingly. AHA made a specific claim that Yale-New Haven Hospital, a well-known teaching hospital in Connecticut, should have received an A, but got a C solely because it did not fill out the Leapfrog survey. Leapfrog was obviously reluctant to criticize an individual hospital, but was forced to point out that Yale-New Haven sank on the basis of other indicators of safety, just as our own hospitals in Louisville did. I will not go through the other technical points one-by-one here unless someone asks. You can read the short letters from the AHA and Leapfrog yourself. I plan to compare the actual component scores of our local hospitals and selected others, including Yale-New Haven’s, in a future article.

What I see confirmed in this whole process is how difficult it is for outsiders like us to obtain information about the quality and safety of medical care. Collecting, reporting, and analyzing the information is an expensive and difficult undertaking for both hospitals and evaluators. I do remain skeptical about self-reported survey information because the temptation to present oneself in a better light is powerful. People can go to jail for misrepresenting data to Medicare’s Hospital Compare, but even the CMS data can be hard to interpret. I give Leapfrog lots of credit for attempting to help the public understand the significance of hospital quality and safety indicators; without such interpretation, the entire national enterprise is worthless. Can we have a better system of ensuring and disclosing hospital quality and safety? Undoubtedly! Is it more difficult to do so in the hodgepodge of healthcare provider systems we have today? Indubitably! Should we give up trying? Absolutely not!

Peter Hasselbacher, MD
President, KHPI
Emeritus Professor of Medicine, UofL
June 28, 2012