Objects Accidentally Left in the Body After Surgery in Kentucky

Retained surgical objects: A useful marker of hospital quality and safety?

A few weeks ago, the national newspaper USA Today reprinted a Courier-Journal exclusive by Andrew Wolfson about the phenomenon of “angioplasty abuse,” prompted by a Kentucky lawsuit but increasingly evident nationwide. This week, the C-J returned the favor by running a shorter version of a piece by Peter Eisler of USA Today about “retained surgical items,” that is, foreign objects accidentally left inside the body after surgery, usually in the abdomen or chest. Everyone has heard stories about surgical sponges or instruments left behind– perhaps to be discovered later when symptoms or complications occur. It is a medical mistake that is never supposed to happen, and a red flag that a hospital or surgical center may not have the requisite safety procedures in place. Because of this, the frequency of retained items is included in most of the safety and quality evaluations of hospitals. That is what attracted my attention for this blog.

As an internist and rheumatologist, I stuck countless needles into people’s joints and other out-of-the-way places– but I am not a surgeon. I can sort of understand how objects might be left behind in the chaos of some disaster, but I will leave it to my surgical colleagues to tell us in the comments whether such an occurrence is ever excusable. In malpractice cases, when a forgotten sponge or clamp is discovered, it is informally called a “smoking gun.” The time-honored “sponge count” is well recognized to be an inadequate safety measure. Increasingly, hospitals are using high-tech approaches to track the whereabouts of wayward sponges and the like, but many other hospitals have somehow concluded that the effort is not cost-effective or of sufficient priority. In the article, a hospital-industry spokesperson suggested that modern, effective methods to prevent these uncommon but predictable events might be deferred in favor of other priorities. That response left me unimpressed, especially given the trend of so many hospitals to boast about their new and increasingly high-tech surgical machinery! In any event, such “never should happen” events now loom large in the payment policies of Medicare and other medical insurers, to the point that payments are denied to hospitals for these and related hospital-acquired complications. After all, why pay for inadequate medical services? I doubt, however, that many executive salaries are docked to cover the losses. Most patients will probably still get a bill, and some will never regain their health. Unfortunately, we all end up paying for bad care one way or another.

How common is the problem?
There is much uncertainty over just how uncommon “left-behinds” are. Uniform reporting is not required nationally. What Medicare does is count the number of times hospitals enter a diagnosis of retained object on their billing sheets, but as we will note below, not even all Medicare patients, let alone patients covered by Medicaid or private payers, are included in the count. (It is likely that people in Medicare Advantage plans have similar frequencies of this hospital-acquired complication.) It is estimated that there may be between 4500 and 6000 instances yearly, enough for 100 or more per state. I believe this estimate is low. Medicare’s calculated national rate is 0.028 per 1000 hospital discharges within the traditional Medicare population. Note that the denominator is hospital discharges, not abdominal surgeries.

Given that consideration of the validity and applicability of quality and safety ratings is one of the current themes in this Policy Blog, my first impulse was to look at what was being reported in our own state of Kentucky. Once again I come away disappointed and confused.

Not all hospitals are counted.
Because all of the quality and safety rating systems of which I am aware depend largely on data collected by Medicare, I went directly to the source. Anyone can download the entire database that underlies Medicare’s Hospital Compare. This is not to say the data is in a form that most people can use. Even as a better-than-average computer jockey and a former instructor of statistics, I can barely penetrate the numbers. Details about where the numbers come from and how they are handled statistically are difficult for me to find.

Attached is Medicare’s most recent list of Kentucky hospitals with their reported frequency of retained surgical items. Even without considering the actual rates, one must be immediately disappointed. Of the 94 acute-care hospitals, a specific rate is presented for only 65. Almost one-third of all Kentucky hospitals are exempt from having to report operating-room mistakes. Exempt hospitals include the small, so-called “critical access hospitals.” I suggest that “critically omitted hospitals” might be a more appropriate designation in this regard. Outpatient surgical centers, where the vast majority of surgical procedures are performed nowadays, are also not required to report their mistakes to the Feds.

Hospitals for which rates are reported.
Of the 65 hospitals for which a rate is reported, 59 (91%) are said to have an incidence rate for retained objects of zero! Only 6 hospitals had a rate greater than zero. In order of increasing frequency, these are: Norton Hospitals Inc., Baptist Hospital East, Jewish Hospital & St Mary’s Health, Owensboro Medical Health System, Lourdes Hospital, and Williamson ARH Hospital. The rates range from 0.025 to 0.325 per thousand Medicare Part A hospital discharges. Only one of these 6 hospitals had a rate below the national average of 0.028 per thousand discharges. As a physician, I once practiced in 3 of these, had major surgery in one, and sent loved ones for medical care in two others. Even trying to keep an open mind that a reputation for quality is a thing different from quality itself, these results just do not make sense to me! I am still trying to understand. Here are some possible considerations.

Not all patients are counted and not all are the same.
Patients in Medicare Advantage plans (Medicare Part C, or managed care) do not have their safety and quality items available for analysis by Medicare. The private managed-care companies that administer these plans are not held to the same reporting and accountability standards as are hospitals in the traditional Medicare program. Therefore, as I understand it, the bulk of the safety, quality, and satisfaction reporting on which the whole rating enterprise depends is derived from patients who have remained in fee-for-service Medicare and who are admitted to a limited number of acute-care hospitals. Occurrences in small or specialty hospitals or in outpatient centers are not tallied. There are many reasons to believe that this selected patient population is not fully representative of all hospitalized patients. Inpatients are sicker than outpatients. Critical Access Hospitals see patients who are less sick– indeed, they are required by law to transfer the really sick elsewhere. Traditional fee-for-service Medicare patients are thought by many to be sicker than those recruited into managed-care plans. Medicare patients like me are older than non-Medicare patients and therefore sicker on average than those in other groups. It is not unreasonable to believe that there are apples and oranges being compared in this basket.

In a small hospital even one case is a disaster.
The size of a hospital operation has a tremendous effect on its rates. Does a rate of zero mean that there were absolutely no retained surgical objects? Or, since the rates in the table are reported to three decimal places, does a zero tell us only that the rate was less than 0.0005 per thousand (rounded down to 0.000)– fewer than one case per two million discharges? Of course, no hospital has a million discharges in a year. The national average retained-object rate may be as low as 28 cases per million discharges. With frequencies this low, most hospitals will not have a case in a reportable year by chance alone. That makes it noteworthy when a hospital reports even a single case. A hospital with a larger surgical volume will, of course, on average over time, have more items left behind.
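
To put “by chance alone” into numbers, here is a back-of-the-envelope check. If one assumes– and this is my assumption for illustration, not Medicare’s methodology– that retained objects occur at the national rate of 0.028 per 1000 discharges and follow a simple Poisson distribution, a few lines of Python show how often a hospital of a given size would report zero cases in a year purely by luck:

```python
# Back-of-the-envelope only: assumes retained objects occur at the
# national Medicare rate (0.028 per 1,000 discharges) and follow a
# Poisson distribution. Neither assumption comes from Medicare's
# published methodology.
import math

RATE_PER_1000 = 0.028  # national average rate cited above

def prob_zero_cases(annual_discharges):
    """Poisson probability that a hospital reports no cases in a year."""
    expected_cases = RATE_PER_1000 * annual_discharges / 1000
    return math.exp(-expected_cases)  # P(X = 0) = e^(-lambda)

for discharges in (3_000, 10_000, 60_000):
    print(f"{discharges:>6} discharges: P(zero cases) = "
          f"{prob_zero_cases(discharges):.0%}")
```

By this rough reckoning, a hospital with 3,000 annual discharges would report zero cases about 92% of the time even if its practices were merely average, while a 60,000-discharge system would do so less than one year in five.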

Examples of how rates vary with hospital volume.
The number of non-psychiatric acute-care discharges at individual hospitals for 2011 is available in Kentucky’s Hospital Utilization and Services Report. It is instructive to see what a single instance of a retained surgical object does to a hypothetical rate per 1000 discharges. Note that the discharge numbers listed here are for all patients, not just traditional Medicare patients, whose share usually varies widely on either side of 50%. For this ballpark illustration, I will ignore the patient mix for a given hospital (which seems to be a state secret!).

For example, Williamson ARH Hospital had the worst-appearing rate. However, with only 2,928 total discharges, a single instance of a retained object would yield a rate of 0.342 per 1000, not very different from the actual reported value. Perhaps on the next report Williamson will show a zero and another small hospital will pop up on the left-behind list! Thus, for our small hospitals, a bad showing or a good one in a given year for this rare complication depends as much on luck as it does on quality. For a hospital like the Norton system, with a massive 60,615 annual discharges, a single instance would yield a hypothetical rate of 0.016. Still, why don’t we see some non-zero scores for the larger hospitals in Lexington or elsewhere in the state? My guess is that the hospitals on the present list had only one, or at most two, retained objects following surgery during the most recent year.
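
For readers who want to check the arithmetic, the calculation is a one-liner. The discharge counts below are the 2011 all-payer totals cited above, not the Medicare-only denominators Medicare itself would use:

```python
# The ballpark arithmetic from the text: the rate per 1,000
# discharges produced by a single retained-object case. Discharge
# counts are 2011 all-payer totals, not Medicare-only denominators.
def rate_per_1000(cases, discharges):
    return cases / discharges * 1000

print(round(rate_per_1000(1, 2_928), 3))   # Williamson ARH -> 0.342
print(round(rate_per_1000(1, 60_615), 3))  # Norton system  -> 0.016
```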

Are smaller hospitals shielded against looking bad?
The problem of how to handle hospitals with small numbers of patients is well known to Medicare’s statisticians. A 2012 expert report to the federal Centers for Medicare and Medicaid Services, titled “Statistical Issues in Assessing Hospital Performance,” states that “… unless the analysis includes some form of stabilization, hospital performance estimates associated with low-volume hospitals will be noisy,” which is to say, they will reflect randomness rather than actual performance. It takes very sophisticated (impenetrable to me) statistical modeling to stabilize the estimates for small hospitals, but in so doing, it may make the little guys look better than they have a right to. This balancing procedure, done in the name of fairness, is used for reporting mortality rates for heart attacks, pneumonia, and the like. I do not know if it is used for the reporting of retained objects– or for any of the other quality and safety parameters. In my opinion, this attempt to be fair to smaller hospitals further compromises the meaningful use of these measures by the public and professionals alike. There are other ways to report or interpret the actual numbers, such as a rolling average over several years. Why not just give us the raw counts of instances over one or more years? Transparency should be our goal. We should not be afraid of real data. The trend to boil down a large amount of information into a single “score” obscures too much. We could even have it both ways.
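
For the curious, here is a minimal sketch of the general idea behind “stabilization”– shrinking each hospital’s raw rate toward the national average, with the smallest hospitals pulled hardest. To be clear, this is my own toy illustration, not Medicare’s methodology; the real models are hierarchical, risk-adjusted, and far more elaborate, and the prior_weight constant below is entirely hypothetical:

```python
# A toy illustration of "stabilization" (shrinkage), not CMS's
# actual methodology. Each hospital's raw rate is blended with the
# national rate; small hospitals end up pulled mostly to the average.
NATIONAL_RATE = 0.028  # per 1,000 discharges, from the text

def shrunken_rate(cases, discharges, prior_weight=20_000):
    """Blend a hospital's raw rate with the national rate.

    prior_weight is a hypothetical tuning constant: the volume at
    which a hospital's own data and the national average are
    weighted equally. Real models estimate this from the data.
    """
    raw = cases / discharges * 1000
    w = discharges / (discharges + prior_weight)
    return w * raw + (1 - w) * NATIONAL_RATE

# One case at a small hospital barely budges its estimate; the same
# case at a large hospital leaves the estimate close to the raw rate.
print(round(shrunken_rate(1, 2_928), 3))   # ~0.068 vs. raw 0.342
print(round(shrunken_rate(1, 60_615), 3))  # ~0.019 vs. raw 0.016
```

Notice how the one case at Williamson-sized volume is discounted almost entirely. That is exactly the fairness-versus-transparency trade-off I object to above.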

Less careful or more honest?
Finally, in all fairness to the hospitals in which items were left behind, I must ask myself the question: do these six hospitals have weaker protections and less careful doctors and nurses, or are they just more honest in their reporting? When I was Chairman of University Hospital’s Pharmacy and Therapeutics Committee, I learned firsthand how difficult, nay impossible, it was to get professionals to report medication errors. Medical errors in general are still woefully underreported. Hospitals that make an effort to “come clean” are rare enough that when one is identified, it makes national news. It is bad enough that understandable and even forgivable human errors can lead to career-altering consequences and malpractice suits, but if one is not going to get paid in the bargain, there is an additional powerful incentive for things to get overlooked. I certainly do not accuse any of the hospitals in Kentucky of failing any reporting responsibility that they might have had. I have no evidence to do so. Indeed, because I believe in transparency, accountability, and justice in medical care, I am searching for independent confirmatory support for these results and the entire quality-evaluating enterprise.

Not apparently possible in Kentucky.
I thought I had found a way, but so far have not been able to proceed. As it happens, every hospital and outpatient surgical center in Kentucky must send a copy of its final bill to the Department of Public Health in Frankfort, where the intent is that it be used to further the safety and quality of healthcare for Kentuckians. These are the same standardized forms that hospitals send to medical insurers in order to get paid. It is illegal to falsify or omit essential information. Included are items such as all relevant diagnoses, all procedures performed, how much was charged, whether an insurer or patient was billed, whether the service was delivered in an inpatient or outpatient setting, and more. This is a particularly useful source of information because it is collected on all patients regardless of who is paying the bills. When I was a Fellow in the Cabinet for Health Services in Frankfort in the 1990s, I was able to study all the patients in Kentucky who had breast surgery for any reason. The results were eye-opening, were included in quality reviews for Kentucky Medicaid, and helped inform the federal Patient’s Bill of Rights and other legislation when I worked in Washington. Kentucky’s yearly database is stripped of specific patient identifiers before it is made available to the public. However, users of the data are currently prohibited from identifying any specific hospital or provider, even though I can find no such requirement in Kentucky law. I believe this is a policy oversight and plan to investigate the matter further.

Still not there yet.
I am a firm believer in evidence-based, affordable, equitable, and just healthcare within a system that is transparent and accountable. I am tired of seeing medical services sold as though they were soap powder, or hospitals creating their own demand. I am not afraid of reliable facts driving policy decisions. I support our early efforts to inform the public and professionals alike about the quality of health care being offered. Although I am grandiose enough to think I am better qualified than the average citizen to interpret hospital quality and performance, I am far from satisfied with the reliability and applicability of the subsets of information currently available. For now, my sense is that our ability to measure and disseminate information about the quality of medical care is not as good as we think.

Peter Hasselbacher, MD
President, KHPI
Emeritus Professor of Medicine, UofL
March 2013