The Joint Commission Publishes List of Top Performing Hospitals.

Who better to know how hospitals are doing?
Kentucky gets a few nods. 

At the end of September, the Joint Commission released its list of “Top Performers on Key Quality Measures for 2012.” The Joint Commission accredits most hospitals in the US and does the bulk of data collection for Medicare’s Hospital Compare. A total of 45 accountability measures in 8 clinical areas were evaluated for some 3,500 hospitals. To make the Top Performers list, a hospital had to have performed a required action 95% of the time for each indicator individually and in aggregate. Since these same quality measures are central to the rating systems of several different organizations, one might expect the lists of top hospitals to more or less agree with each other. From my initial observation, there is less agreement than more. When different sets of quality measures are applied to different subsets of hospitals, the resulting ratings may not be easy for us in the trenches to interpret or use.
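
For readers who want to see the cutoff in concrete terms, here is a minimal sketch of the kind of check implied; the measure names and numbers are hypothetical, and the Joint Commission’s actual methodology includes additional rules (such as minimum case counts) that are not shown.

    # Minimal sketch of the 95% cutoff described above: every reported
    # accountability measure must be at or above 95%, and so must the
    # aggregate (composite) rate. Measure names and numbers are hypothetical;
    # the real methodology includes further rules not modeled here.

    def is_top_performer(measures):
        """measures maps a measure name to (times_performed, eligible_cases)."""
        total_done = sum(done for done, _ in measures.values())
        total_eligible = sum(eligible for _, eligible in measures.values())
        each_ok = all(done / eligible >= 0.95 for done, eligible in measures.values())
        composite_ok = total_done / total_eligible >= 0.95
        return each_ok and composite_ok

    # One measure at 93% keeps a hospital off the list even though its
    # composite rate (781/800, about 97.6%) clears the bar.
    example = {
        "aspirin_on_arrival": (198, 200),           # 99%
        "pneumonia_antibiotic_timing": (93, 100),   # 93% -- fails the per-measure test
        "surgery_antibiotic_on_time": (490, 500),   # 98%
    }
    print(is_top_performer(example))  # False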

Kentucky is home to 18 of the 620 top hospitals. Our fair share would have been 31. As previously reported, the only hospital in Louisville making the list is the Robley Rex VA Hospital. As with the Leapfrog Safety Scores and the Consumer Reports safety evaluations, smaller and rural hospitals seemed to have a better chance of looking good. Of the hospitals on the Joint Commission list, 43 were psychiatric hospitals, including 2 in Kentucky. A list of the Kentucky hospitals is available from KHPI on request.

Summary of observations:

  • Most high-profile hospitals, and virtually all teaching hospitals, failed to make the list.
  • It appears that few, if any, safety net hospitals made the list either, but I cannot yet tell how few.
  • Some hospital systems successfully contributed several of their individual hospitals to the list.
  • Standing on the Joint Commission list does not correlate well with the ratings of hospital quality and safety published by other organizations.

and comments:

  • We need to be confident of the reliability of self-reported hospital data.
  • Merger of hospitals can diminish the usefulness of hospital ratings.
  • Transparency is important in the financial relationships between the rating organizations and the institutions being rated.
  • Does what is being measured really matter?
  • Does quality in the limited number of measured processes and outcomes “trickle down” to the rest of a hospital’s patients?
  • How might the rating process itself distort the provision of healthcare in undesirable ways?
  • Are there too many different rating organizations slicing and dicing the same information?

Additional Discussion.

Teaching hospitals nearly absent from the list.
I went through the list to see how many of the 620 hospitals I recognized; it was only a handful. I looked particularly for teaching hospitals to see if there might be any validity to the claim that teaching hospitals are handicapped in these quality evaluations. In fact, I could recognize only two teaching hospitals in the entire top-dog list: Duke University Hospital and Creighton Medical Center-St. Joseph. While I probably missed some minor teaching hospitals that host a few trainees, from my perspective as a former group chairman of the Association of American Medical Colleges, I did not recognize any others by name. Strikingly, none of the nation’s many high-profile teaching hospitals in New York, Massachusetts, California, Pennsylvania, Maryland, Illinois, or Connecticut made the list. These states contain some of the most famous hospitals in the world, yet none was considered a “top performer” by its own accrediting organization!

As a lifelong academician, I am greatly troubled by the suggestion that teaching hospitals cannot provide the same quality of care as other hospitals. I believe that it is impossible to deliver high-quality education in an institution that does not provide high-quality care. The claims (or excuses) offered for why teaching hospitals or safety net hospitals cannot look good in head-to-head comparisons with other hospitals include that their patients are sicker, that the socioeconomic overlays of their patients magnify their illnesses and complicate their treatment, that the institutions do not have enough money to do their job, or even that they are staffed by the least experienced or inadequately supervised physician-trainees. All of the above and more may be valid explanations, but whatever the causes, perceived quality disparities in teaching hospitals cannot be allowed to stand unchallenged or unfixed.

Do hospital systems do better?
Some other patterns seemed to emerge from my initial review. In California, hospitals from the Kaiser Hospital System blew the competition away. Our own Robley Rex VA Hospital was joined by 11 other VA hospitals around the nation, making the VA one of the larger hospital systems to appear on the Top-Gun list. I suspect that healthcare systems such as these may do well because they have more control over their staff and physicians than other institutions! With reference to the paragraph just above, you will not be surprised that large teaching hospitals have very little control over their doctors. It is easier to herd cats.

Differences between evaluations.
I assumed I would see a fair amount of concordance between the Joint Commission and Leapfrog Safety Score lists for our Kentucky hospitals. I was quite surprised that this was not the case. Of the 16 acute care hospitals on the Joint Commission’s Top Performers list, only 8 appear at all on Leapfrog’s list of 49 rated Kentucky hospitals, and of those on both lists, only 4 received ‘A’s. Of the other four, two received ‘B’s and two received only ‘C’s from Leapfrog. It appears to me that some of the hospitals evaluated by the Joint Commission were small Medicare Critical Access Hospitals that were not evaluated by Leapfrog because there is no federal requirement for these tiny limited-service community hospitals to report quality data. In fact, this category of hospital can apparently achieve Joint Commission accreditation even if its quality measures are terrible! So much for protecting the public!

Merged for some things but not others?
There were some other quirks in the two lists. Two of the several St. Elizabeth Hospitals in Northern Kentucky (Florence and Ft. Thomas) were top performers on the Joint Commission list, but neither appears with any rating at all on the Leapfrog list. (St. Elizabeth of Covington was rated with a ‘B’ on the Leapfrog list and does not appear on the Joint Commission list.) Are these hospitals being lumped together for some reporting purposes but split apart for others? How is a person to know whether rating organizations are lumpers or splitters when using their recommendations?

I have discussed earlier, and will again, the fact that hospital mergers make it difficult to evaluate individual hospitals within those systems. For example, in Louisville, all five Norton Hospitals appear as one in Medicare’s Hospital Compare and some other rating systems. The same is true for the Jewish & St. Mary’s Hospitals. This is because they use the same Medicare provider number to bill the government. While there may be financial advantages to merged hospitals, such as leveraging Medicare bonuses for teaching beds or Medicaid patients, it can hardly be assumed that the hospitals involved provide identical clinical care. With all the hospital mergers going on around the country, such lumping together diminishes the specificity of quality or safety evaluations, and with it their value to patients and physicians.
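
To make the “lumping” concrete, here is a toy sketch of what happens when several campuses report under one provider number; the identifiers and numbers are invented, and the real Hospital Compare reporting is more involved than this simple pooling.

    # Toy illustration (made-up identifiers and numbers): hospitals billing under
    # the same Medicare provider number are reported as a single entity, so their
    # counts are pooled and campus-level differences disappear from the rating.

    from collections import defaultdict

    reports = [
        # (provider number, campus, times measure performed, eligible cases)
        ("PN-1", "Campus A", 98, 100),
        ("PN-1", "Campus B", 90, 100),
        ("PN-2", "Standalone Hospital", 96, 100),
    ]

    pooled = defaultdict(lambda: [0, 0])
    for provider, campus, done, eligible in reports:
        pooled[provider][0] += done
        pooled[provider][1] += eligible

    for provider, (done, eligible) in pooled.items():
        print(f"{provider}: {done / eligible:.0%}")
    # PN-1 shows a single 94% rate, even though one campus performed
    # at 98% and the other at 90%.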

What are we measuring?
It may be validly argued that I am comparing apples to oranges. The Joint Commission list discussed above makes reference to Quality Measures; the Leapfrog and Consumer Reports lists refer to Safety Scores. What is the difference between quality and safety? (What does “excellence” mean, for that matter? We hear that term all the time in hospital marketing.) I confess that the differences at this level remain obscure to me, especially since both determinations draw substantially from a common set of indicators. Are these distinctions without utility? Is it possible to have a high-quality hospital that is not safe? What good is a safe hospital that does not provide high-quality care? If a hospital is neither safe nor of measurably high quality, can it ever be excellent? Is it fair to assume that multiple hospitals reported as a single entity provide care of the same quality? I began my recent considerations of this subject with the assumption that some evaluation of quality is better than none. Like all medical hypotheses, however, this must be proven. Surely, if it is not easy to understand how measurements of “quality” are conducted or what they mean, how useful can such efforts be? I have the feeling we are still in the very early stages of this national endeavor.

Can all hospitals claim some sort of quality award?
As I have visited many individual hospital websites recently, it has seemed that a majority display some sort of commendation or quality award on their home page. Many of the awarding organizations were unknown to me. If any or every hospital can get an award, the result is very confusing for us consumers.

For example, St. Joseph’s Hospital London highlights that it was named one of the 50 Top Cardiovascular Hospitals for 2011 by Thomson Reuters. We are told that the hospital “more than exceeds the standards for advanced quality care” as an accredited Chest Pain Center. We are told that the healthcare rating organization HealthGrades found St. Joseph London to be the number one hospital in Kentucky for Overall Cardiac Care and Cardiac Surgery for two years in a row, and in the top 5% nationally. Yet the hospital appears neither on the Joint Commission Top Performers list nor on the Consumer Reports safety lists. It received low scores from Consumer Reports for overall quality. On the Leapfrog Safety Score list, it received a ‘B.’

In the lists for 2013 prepared by Truven Health Analytics (which I understand to be the successor company to Thomson Reuters above), St. Joseph London no longer appears as a top cardiac hospital. The only Kentucky hospital on the company’s 100 Top Hospitals list for “current performance and fastest long-term improvement” is the Owensboro Medical Health System, which did not appear on the Joint Commission’s Top Performers list and which received only a ‘C’ from Leapfrog. It also received low quality scores from Consumer Reports. Yet all of these rating organizations make use of the same basic set of information collected by Medicare! What is someone like me to make of all of this?

Accrediting and rating as a business.
The accreditation and rating of hospitals has become big business. Hospitals realize that to be competitive and to negotiate for higher payments, they are going to have to not just claim, but prove, that they are doing a good job. Both governmental and private healthcare payers have already initiated payment systems that are linked to exactly the quality indicators we are talking about in this series of articles! This is a very expensive undertaking: hospitals spend a fortune collecting and reporting all these numbers. The reporting requirements change in real time, as does the number of entities requesting reports. Some raters charge hospitals for the privilege of collecting their information, and then may charge them again for using the results in their advertising! All the players stand to make or lose huge sums of money and reputation. We need to be confident we have it right. I am not yet confident, but I would like to be. At the very least, I believe the public has a right to know of any financial exchanges between the evaluators and those they evaluate.

Are reported results trustworthy?
Because the financial implications are so great, it is not surprising that some research is now concluding that many hospitals go beyond “gaming the system” to exaggerating, if not falsifying, their reported results. We have long known that doctors and hospitals can be reluctant to report bad news. (In Kentucky we have a special problem because internal quality evaluations are discoverable in civil litigation.) Even when data are reported accurately, hospitals are allowed to decide for themselves just what categories of information (if any) they submit. In essence, they can design their own report cards. To borrow a comparison from education, we all know that cheating occurs in school from the elementary to the professional level. College degrees can be purchased by mail or on the Internet. Healthcare institutions are no more ethical or honest than the individuals who make them up. We as consumers must have confidence that the systems of quality evaluation on which we depend, based as they are on self-reporting, represent the truth, the whole truth, and nothing but the truth. I am not there yet.

Good in one thing or many?
Can we generalize from evaluations of specific clinical diagnoses or processes? The premise behind defining specific standards for processes of care for a limited number of diagnoses is that there will be some “trickle down” of quality to other areas of the hospital and other disorders. This hoped-for result is not unreasonable, but because the opposite outcome is also quite possible, it needs to be proven. It is possible, and perhaps likely, that special attention given to a limited number of areas results in a relative decrease of attention to others. I offer another example from the field of education. In both elementary and medical schools, I and other educators have argued that “teaching for the test” narrows the scope of both engagement and knowledge for the student. So it can be in medicine when grades are given as part of a highly standardized and predictable evaluation.

Important outcomes or just busy work?
The discussion of whether it is more important to measure processes of care or outcomes of care has been going on for as long as I have been involved in clinical medicine. The accountability measures in the Joint Commission evaluation are all processes of care. Did the heart attack patient get aspirin on admission and beta-blockers at discharge? Did the pneumonia patient get the right antibiotic, and on time? Were they immunized against pneumonia? These measures are on the list because we mostly think they are the right things to do, and that they probably lead to better clinical outcomes for our patients. Doctors who object to outside review of their actions like to call this “cookbook” medicine. Most of the rest of us call it using best medical practices or checklists. What is not among the Joint Commission’s accountability measures are clinical outcomes: what percent of heart attack patients survived for 30 days, how many urinary tract infections occurred because catheters were not maintained properly, or how many patients developed blood clots in their veins, some of which traveled to their lungs. Such outcomes are the results that matter most to patients and the ones we doctors and hospitals are ultimately trying to improve. Outcome measurement, in my opinion, is not substantially different from medical research: for the results to be reliable, the evaluation must be planned carefully, and it is often complicated, difficult, and expensive to do correctly. My thinking about process vs. outcome measurement is that both should be attempted. Outcomes are what we are trying to affect with our treatment processes. We must be prepared to put our resources of money and manpower behind those processes that have the most impact. We are likely never going to be able to do everything humanly possible and will need the evidence to choose wisely.

Enough! This has gotten too long again. I am in favor of independent external review of hospitals, but I do not think we are where we need to be. At the very least, I would not yet personally feel comfortable selecting a hospital based only on any of the scoring systems I have seen. What do you think? If you are an expert, we need your input and advice.

Peter Hasselbacher, MD
President, KHPI
Emeritus Professor of Medicine, UofL
October 7, 2012