The Power of a Summary

Hospitals generate reams of patient safety-related data.  But that alone doesn’t make them accountable.

There is power in that data: the power to arm patients and purchasers with the information they need to demand better. But in the unorganized, unsummarized aggregate, the data are not so powerful. Not to patients anyway. Obviously, individual patients don’t have the time, inclination or expertise to decipher, organize, summarize and promote the hospital data on their own. Therefore, the hospitals’ data are effectively invisible to them.

The hospital data only realizes its potential power in the marketplace when boiled down into something that can be understood by patients at a glance, because a glance is all that most of us are willing to give the subject. Only when boiled down will the hospital data be accessible enough to drive purchasing decisions.

And that is what a national patient safety group called Leapfrog did this week when it summarized hospitals’ patient safety data into school-like grades. Casting judgments about hospitals is perilous business, because hospitals are fiercely defensive institutions that understandably prefer to promote their miracles over their mistakes. Though Minnesota hospital leaders showed real courage a few years back when they began publicly disclosing their medical errors, hospital advocates in Minnesota pooh-poohed Report Card Day:

“It’s really a repackaging of what’s publicly available,” said Mark Sonneborn, a data expert at the Minnesota Hospital Association (MHA).

I really should have tried that one when I was a lad. “Chill, Mom, that ‘D’ in Social Studies is actually just a repackaging of information that has been available to you all semester.”

Yes, the data behind the grades are available from the U.S. Department of Health and Human Services. So, if I understood which measures were most meaningful, and I don’t, it would be technically possible for me to construct the spreadsheet that the Leapfroggers did, and make some kind of comparison on my own.

But the practical reality is that I never did, and never would. Life is just too busy to summarize all the data impacting my life. And even if I were geeky enough to do my own little patient safety data research project, the effort would only benefit me, and not the rest of the country.

MHA is correct that Leapfrog’s methodology is just “repackaging.” But the grades will drive quality improvements much faster than the status quo way of managing the data. Because whether a hospital got an “A” or an “F,” the minute hospital leaders know that easily understood grades will be regularly appearing in the hometown news media and in competitors’ marketing materials is the moment they start investing more effort, thought and resources into patient safety improvements. With the advent of publicized grades, they now know that consumers and purchasers will use their newfound knowledge to vote with their feet, and their pocketbooks.

Markets work if consumers are informed, and the beauty of the grades is that they are simple enough to do that. Lifesaving work is most often done by the miracle workers in hospitals wielding scalpels, microscopes, medications, lasers, gauze, latex, disinfectants and needles. To be sure, these folks are heroes. But lifesaving work can also be done, indirectly, by data jockeys wielding spreadsheets and press releases. Leapfrog, I give you an “A.”

– Loveland

21 thoughts on “The Power of a Summary”

  1. Rob Levine says:

    Are you kidding? So, after seeing these report cards, will you go to a different hospital? My guess is you’re more or less locked in by insurance, location, or connection to a doctor. People don’t want to be consumers in every area of their lives. They don’t have the information, inclination, time, or expertise to make decisions like this, let alone the institutional ability to do so. Conservatives expect people to be knowledgeable consumers about EVERYTHING, but people just want to go to the doctor when they are sick, and to the nearest hospital when they need to. Same applies to schools. Many things in this world are just not amenable to “market” judgments. Nor should they be.

  2. Joe Loveland says:

    Yes, I will go to a different hospital based on patient safety ratings. With plenty of “A” hospitals in the area, why take a loved one to a “C” hospital? My plan allows for choice of specialists and hospitals. Obviously, many consumers can’t or won’t shop, but the reality and perception of even a small percentage of patients shopping influences hospital leaders’ decisions. I saw it when I worked for a hospital company. U.S. News has rated hospitals for years, and hospital administrators chase those ratings aggressively, and have improved quality as a result.

  3. PM says:

    And, it is always possible (maybe even likely) that insurance plan administrators might change their affiliations with hospitals within their plans based on these ratings.

    FWIW, I have a number of friends who work at HCMC, and they are all talking about how the Mayo Clinic is just now moving into the metro area:

    Mayo announced yesterday that Fairview Red Wing Health Services has signed a tentative agreement to become part of the Mayo Clinic Health System. Mayo is expected to formally acquire the Red Wing operation July 1st. The Fairview system includes a hospital in Red Wing, clinics in Zumbrota and Ellsworth, Wisconsin, along with a senior living center, home care and hospice services.

    People in the hospital/medical world are certainly paying attention to things like this.

  4. PM says:

    As a counterpoint, ratings systems are not always good: think about US News and World Report and its rating system’s impact on colleges.

    Or the No Child Left Behind emphasis on standardized testing and school rankings based on those.

    But problems with specific ranking/sorting/rating systems are not an argument against all such systems, or against the quantification of information in general.

    1. Rob Levine says:

      My point exactly. Without delving into these ratings, how can we trust things like this? I’m NOT saying we shouldn’t try to make decisions about quality, but the institutional versions we’ve experienced over the past decade do not inspire confidence. AAA-rated MBS bonds, anyone? As PM says, the US News rankings of colleges, or, more recently, the National Council on Teacher Quality’s rankings of education programs and state laws. Beyond that, isn’t it a bit simplistic to rate a hospital with one overall grade? WHAT aren’t/are they safe at? If the place isn’t safe for, say, heart bypass surgery, should that stop me from going there for a colonoscopy? Maybe, but how would I know?

  5. john sherman says:

    If you live in much of ND, SD or western MN, your choice is Sanford or a guy using eagle feathers and sage smoke. Nor can I imagine that when my daughter totaled her car on 35W she should have been able to have the ambulance pull over while she comparison shopped.

    Obviously there’s a problem when one in twenty patients gets a health care-associated infection, 100,000 a year die and the cost is $45 billion, but I’m not certain the report card is the solution. Some things like bed sores can be totted up and statistically analyzed usefully. However, the same isn’t true of rare surgeries on patients of widely varied conditions.

  6. PM says:

    What both John and Rob say is true, but that only points out some potential limitations. No rating system is perfect, and that lack of perfection is NOT a reason to denigrate all ratings systems. Rather, it is a reason to continually critique them, to point out their shortcomings, to constantly improve them. Perfection is not possible, but improvement certainly is. And, over time, these systems will become more and more useful. The best situation is when there are competing ratings systems, like all of the various ratings systems that exist for cars. Consumer Reports emphasizes certain things (safety, reliability, etc.) while Car and Driver emphasizes yet other things (technology, fun, performance). Depending on what is important to you, you can pick and choose.

    Let a thousand ratings bloom!

  7. Joe Loveland says:

    1) No ratings can ever be perfect, but feel free to keep waiting. If you’d rather fly blind than fly with the best available, but imperfect, navigation system, that’s okay. Obviously, no one is forced to use ratings.

    2) The argument that ratings are worthless unless everyone or most people use them doesn’t hold up. Because some consumers use ratings, and because they get touted in marketing, they influence the related decisions of the rated.

    3) I’m for a single payer health care system, but as long as we’re stuck with a shitty kinda sorta market-based system, I’m all for tapping into market forces — better informed customers and purchasers putting demand pressure on health care providers — to do whatever we can to improve quality and costs. But I’m certainly not arguing for a market-based system over single payer. Just because someone utters the word “markets” doesn’t automatically mean they wear Cato Institute cufflinks. I don’t. (Does my Cato Institute money clip count?)

  8. john sherman says:

    I agree on the single payer part; otherwise, people are put in the situation of wondering whether they should incur a 20% financial penalty by going out of network for the difference between a B- and a B.

    I also agree that ordinary mortals cannot decode the stuff coming out of medical facilities even when they can get it. Still, reducing everything to A through F can create a problem of oversimplification. First there is the fallacy of misplaced concreteness, that is, the belief that something you can put a number to, no matter how, is somehow more real than that which you can’t.

    People also really need to have some sense of the data underlying a grade; small samples produce inherently unreliable results. A similar problem sometimes arises in evaluating teachers. Mill refers to the man who “when asked whether he agreed that six and five made eleven replied that he wouldn’t answer until he knew what use was to be made of the answer.” I feel somewhat the same about medical facility grades.

  9. Newt says:

    US News & World Report hospital rankings are so deeply flawed as to be laughable, but much of the world relies on their methodology and interpretations as gospel.

    I prefer word-of-mouth from real patients.

    1. Joe Loveland says:

      A lot of folks might assume that the U.S. News rankings are done by reporters. They aren’t. They are done by Research Triangle International, a well respected research operation.

      In any field, the rated always question the ratings. So questioning is to be expected. Consumer Reports, JD Power, Energy Star and the rest are always being questioned. Among other things, that’s a pretty good indication that they matter, and impact quality-related behavior.

      I’m all for folks questioning. Questioning is good. But just like with global warming and other research issues, the fact that the quality of our knowledge will always be evolving doesn’t mean that we have no knowledge. We do have a boatload of knowledge about hospital patient safety, and we need to make it useable, so that it can drive improvements.

  10. Newt says:

    Wait, a brief digression: Did you just say that “knowledge will always be evolving”? Does that mean man-caused global warming isn’t “settled science”?

    1. Joe Loveland says:

      On global warming, I personally would say something like “overwhelming scientific consensus,” rather than “settled science.”

      A great piece on this subject:

      The phrase “the science is settled” is associated almost 100% with contrarian comments on climate and is usually a paraphrase of what ‘some scientists’ are supposed to have said. The reality is that it depends very much on what you are talking about and I have never heard any scientist say this in any general context – at a recent meeting I was at, someone claimed that this had been said by the participants and he was roundly shouted down by the assembled experts.

      The reason why no scientist has said this is because they know full well that knowledge about science is not binary – science isn’t either settled or not settled. This is a false and misleading dichotomy. Instead, we know things with varying degrees of confidence – for instance, conservation of energy is pretty well accepted, as is the theory of gravity (despite continuing interest in what happens at very small scales or very high energies), while the exact nature of dark matter is still unclear. The forced binary distinction implicit in the phrase is designed to misleadingly relegate anything about which there is still uncertainty to the category of completely unknown, i.e. that since we don’t know everything, we know nothing.

      In the climate field, there are a number of issues which are no longer subject to fundamental debate in the community. The existence of the greenhouse effect, the increase in CO2 (and other GHGs) over the last hundred years and its human cause, and the fact the planet warmed significantly over the 20th Century are not much in doubt. IPCC described these factors as ‘virtually certain’ or ‘unequivocal’. The attribution of the warming over the last 50 years to human activity is also pretty well established – that is ‘highly likely’ and the anticipation that further warming will continue as CO2 levels continue to rise is a well supported conclusion. To the extent that anyone has said that the scientific debate is over, this is what they are referring to. In answer to colloquial questions like “Is anthropogenic warming real?”, the answer is yes with high confidence.

      But no scientists would be scientists if they thought there was nothing left to find out. Think of the science as a large building, with foundations reaching back to the 19th Century and a whole edifice of knowledge built upon them. The community spends most of its time trying to add a brick here or a brick there and slowly adding to the construction. The idea that the ‘science is settled’ is equivalent to stating that the building is complete and that nothing further can be added. Obviously that is false – new bricks (and windows and decoration and interior designs) are being added and argued about all the time. However, while the science may not be settled, we can still tell what kind of building we have and what the overall picture looks like. Arguments over whether a single brick should be blue or yellow don’t change the building from a skyscraper to a mud hut.

      1. Newt says:

        Joe – this won’t be settled here, but I read reports like this one

        http://www.powerlineblog.com/archives/2012/06/is-the-united-states-actually-getting-warmer.php

        and I wonder how learned, literate people reconcile enormous discrepancies in scientific data.

        I wonder, too, why there is such a political push for the public to ignore massive inconsistencies (and there are many) and I am left to conclude that movements are political rather than scientific.

        Proponents will cite the consensus of professional organizations as proof positive of certain trends, but they are loath to address inconsistencies. I am sorry, I just can’t turn off my brain like you do.

  11. Mark Sonneborn says:

    I know I’m a little late to the party here, but I am the person quoted in that article from MHA — I just ran across your blog. The Leapfrog methodology, it turns out, is deeply biased toward hospitals that fill out its voluntary survey (only about 1/4 of hospitals fill it out nationally, and only 11 hospitals in MN). An example: if you answer the Leapfrog survey that yes, you do utilize something called Computerized Prescriber Order Entry (this helps reduce medication errors), you get 100 points for that question. If you are a hospital that has CPOE but didn’t fill out the Leapfrog survey (Leapfrog knows this by accessing another publicly available survey through the American Hospital Association), then you get only 15 points for that question. Plus, Leapfrog is selling the rights to use its name in the advertising for A hospitals.

    Consumer Reports just released its report card yesterday, and some of the C hospitals in Leapfrog are near the top in this report (and there are also A hospitals that do poorly). All of the data used in the CR report are also entirely public. It’s all in how you choose to weight the different measures that gives you a score. That’s what I mean by re-packaged — the reader is looking at the biases of the repackager in these reports. As a consumer, it is very confusing. Personally, I think it is a worthy goal to “composite” the multitudes of quality measures, but what we have so far is not enough for me or any other consumer to base our decisions on.

    1. Rob Levine says:

      I rest my case. Joe, why don’t you tell us why you love metrics so much, especially when so many of them in our present society are either misleading, poorly done, or outright false. You made this same mistake in your post saying Dayton should sign the teacher seniority bill.

  12. Joe Loveland says:

    Mark, thanks for stopping by. I appreciate the perspective. Sorry for the delay. I was deep in Zion Canyon with my family when this was posted.

    Remember, I didn’t write a post about “Leapfrog’s specific methodology is the truth, the light and the way.” I wrote a post about “summarizing information makes that information powerful in the marketplace, and that is why ratings so often change the behaviors of the rated.”

    The inevitable methodology questioning, which happens with every rating organization (see the letters to the editor of every edition of Consumer Reports ever printed), doesn’t change my mind about this post’s assertion.

    While I didn’t post about methodology and am not a researcher by trade, I have an uninformed personal opinion on the criticisms raised. First, I don’t find it troubling that the rating agency gives the rated organization a zero if they don’t get the rating paperwork returned…even if the raters could have uncovered the information with a bit of detective work. If a hospital doesn’t want a zero, the answer is to fill in the blank, not to complain about the zero.

    And if this group charges for using its name in advertising to pay for the costs associated with the ratings, as JD Power and other rating organizations do, that also doesn’t necessarily trouble me. Unless there is some kind of “you pay us, or we will ding your rating” quid pro quo going on, or undue profiteering, that seems like it could be a reasonable way to finance the ratings operation.

    The one thing I don’t know about the methodology is whether the rating organization is measuring the correct things. That is, are they measuring X, when X has no correlation with improved health outcomes? I don’t know. But that is the type of methodology issue I worry about most with these kinds of ratings.

    If they’re guilty of that, I’d encourage experts like John and Rob to shout those criticisms from the mountaintops. I’ll cheer them on. That kind of critiquing is important for either improving the ratings or discrediting unscrupulous raters.

    But again, my post was not about applauding or condemning the specific ratings methodology model. I’m not qualified to do that. My post said that ratings matter a lot, because the act of summarizing and promoting the summary has proven to be very powerful in the marketplace.

    1. Mark Sonneborn says:

      Well, there are a couple of things here for me to respond to. First off, the biggest thing that bothers me is that two raters can take essentially the same set of available data and come to two wildly different conclusions (and btw, there are several others besides Leapfrog & Consumer Reports). This is obviously confusing for the consumer, but it’s also potentially paralyzing for those being measured — which one do you believe?

      Second, the last decade and a half has seen the rise of the era of transparency for hospitals — there are now around 100 measures being publicly reported. This, as well as accompanying financial incentives, has led to a hospital focus on improvement, because, as you rightly point out, transparency is a stimulus for behavior change.

      However, the health care consumer has mostly not been data-driven. There may be a few who are, and they may be increasing, but by and large, the availability of data has not altered consumer behavior much at all.

      These efforts to composite the measures are further attempts to engage the consumer — maybe if a consumer can just see a score at a glance, they’d be more likely to use it to make decisions. Call me a skeptic on this front.

      I also don’t think that those responsible for improving care are motivated differently by an overall summary score. I think the compositing does perhaps make hospital PR departments take note, and that may in turn cause resources to be deployed on filling out yet more surveys (the Leapfrog survey takes over 100 man-hours to complete), not to mention marketing campaigns for the winners. However, in my opinion, the work of performance improvement will always be at the individual measure level. An overall letter grade doesn’t change that.

      Lastly, if you accept that CPOE is important, then it shouldn’t matter whether you find that a hospital has deployed it in your own survey or somebody else’s — you should get full points, shouldn’t you? This is a disservice to the reader, in my opinion.

    2. Rob Levine says:

      “But again, my post was not about applauding or condemning the specific ratings methodology model. I’m not qualified to do that. My post said that ratings matter a lot, because the act of summarizing and promoting the summary has proven to be very powerful in the marketplace.”

      Well at least we know where you worship – the marketplace. Thanks for the honesty.

      1. Joe Loveland says:

        The choice here isn’t between a marketplace and no marketplace. I wish we had a single payer system, but the fact is, at the moment we have a marketplace, and there is no political will to go single payer right now. That’s unfortunate, but that is reality.

        Therefore, the choice before us is between a less informed marketplace and a more informed marketplace. I do prefer the latter, and the information does have to be summarized in order for it to be usable for the maximum number of consumers in the marketplace.

        Think ratings can’t influence consumers’ choices and manufacturers’ quality-related decisionmaking? Consider this Honda Civic case study.

        Consumer Reports ratings work because they are summarized into a bottom line (in the case of cars, “Recommended” and “Not Recommended”). If the data were supplied en masse without a bottom-line summary, the impact would be much smaller. For the same reason, hospital GRADES have more potential to influence quality improvements than less summarized databases housed in the bowels of the bureaucracy or trade associations.

        Yes, consumers are much less empowered in the hospital marketplace than they are in the car marketplace. But if ratings swing the decisions of even just a fraction of one percent of total customers, that loss of business can influence decisionmakers to take quality more seriously. In fact, the ego bruising that senior executives take from low grades can sometimes be enough to influence decisionmakers to invest more time and resources into quality improvement.
