I thought doing quality assurance (QA) at my EMS agency was going to be a simple matter.
I was wrong.
A decade ago the QA person at my agency resigned. Because I was a willing, warm body I got the job. I didn’t get any training on how to do QA, but I also didn’t think I needed any – besides, it was just going to be a matter of reading reports and telling people when they screwed up.
Over the past decade, I have learned many lessons while doing QA; more than a few of them have been more about me as a person than about how to do quality assurance. Blinded by ignorance (along with a side of arrogance), I was certain in my approach: treatments could be labeled right or wrong, protocols were followed or they weren't, and providers were either good or bad at their job. If you made a mistake, the solution consisted of write-ups, remedial training, and discipline. I made complex flow charts to grade medical errors by the level of harm to the patient. The level of harm dictated the actions that followed. Level 2b yellow was bad, but not as bad as a level C3 orange. God help you if you were found guilty of a level 4 red event.
The Dunning-Kruger effect was strong with me. There is nothing simple about doing quality assurance for EMS. From time to time I try to share what I have learned from doing this job. What follows is something that might have been better as two separate posts, or maybe you can view it as a two-for-one deal.
The Outcome Bias
Almost every decision you make is a gamble.
From the most inconsequential choices to major life decisions to clinical decisions about patients, we are gambling. The odds can be heavily weighted in your favor, the deck can be stacked, but the role of chance, karma, bad luck, Murphy's Law, fate, or whatever else you want to call it can't be completely removed from the outcome. Allowing an event's outcome to change how the provider's decision-making process is viewed is known as outcome bias.
A good decision can have a bad outcome.
A bad decision can have a good outcome.
This isn't to say that outcomes don't matter; they do. We're not talking about nihilism here, but we can't let the outcome override our evaluation of how the decisions were made. We need to look at how the provider's decision-making process unfolded with the information they had at the time. Grading the severity of an error based on the patient's outcome is relying on luck as a metric.
Holding people who make good decisions accountable for outcomes they can't control is unfair and won't do much besides piss people off; it certainly won't make your service better or safer. Letting a bad decision be dismissed as "no harm, no foul" because of a good outcome (or lack of harm) is also using luck as a metric. "No harm, no foul" thinking is easier; there are far fewer difficult conversations, and everyone can simply sweep it under the rug and move on with their lives. Almost everyone falls for the no-harm, no-foul trap their first few times doing QA.
The outcome bias in action:
There has been a rash of patients with bad gallbladders and enough comorbidities that the surgeons at the local hospital refused to operate on them. These patients, like some sort of nocturnal creatures, only venture to the ER well after the sun has set. Perhaps it is the final fatty dinner that pushed their taxed gallbladders over the edge; maybe they suffered all day hoping the symptoms would go away, and when faced with enduring another long night of suffering, they come to the ER. They have been causing me to reevaluate my own thinking and to look at how the outcome bias may be influencing me and my coworkers.
My phone rings at midnight, the ER physician on the other end apologetically tells me they have a patient with gallstones and that surgery won’t touch them. They need to transfer this patient tonight because they can’t admit them here.
“Can it wait until 6 a.m.?” I mumble into my phone. “We can have a crew there at 6 a.m.” I am half asleep and I know I am not really selling this well.
I elicit some sympathy when explaining the situation to the doctor: driving after being awake for 24 hours is something we only want to do if absolutely needed, not because we are lazy, but because it is dangerous. He isn't comfortable with the patient staying in the ER in case things get worse; he believes the patient needs to go now, wanting them to be in a hospital that can fix them if things go south. I see his logic. I don't know if he is right, but I see his point, and I am too scared to tell him flat-out no in this situation… in case I am wrong. I tell him we'll be there within the hour.
I slam two cups of weapons-grade coffee from the Bunn, stop at the Loaf and Jug for an energy drink and go to the ER. It is sometime after 2 a.m. when we leave the sending hospital.
We arrive at the receiving hospital just after 4 a.m. No surgical team greets us; there is no sense of urgency. We are ushered upstairs, and the patient is admitted to a med-surg floor for observation. A nurse and a tech who both look more tired than we do show up. The nurse tells the patient that the doctor will see them in a few hours, maybe around 8 a.m. We get our signatures and head out.
Getting to the hospital is never the problem; the return trip is. There is a point where caffeine ceases to have an effect, no matter the amount. Driving home, we watch each other for micro-sleeps. We have to trade off driving a few times to make it back.
It is easy to label these transfers as "bullshit" because it appears the patient could have waited until 6 a.m. to leave. It's tempting to say, "I knew it. I told you, it should have waited until morning. The receiving hospital did nothing for them." Occasionally the sleep-deprived EMS provider may describe the sending physician in even more colorful language. The next day I beat myself up and think I should have stood my ground and simply said, "No. We will be there at 6 a.m."
It is chance, a roll of the dice, that things did not end differently. If the patient's gallbladder had ruptured during the transfer and they went into septic shock, the paramedic would certainly think the doctor made a good call getting them out when he did. Had the ER doctor decided to sit on the patient until 6 a.m. and their gallbladder perforated in the middle of the night, there would be outrage, and people would be asking why the hell they didn't transfer the patient out hours earlier when the surgeon said to. Had one of us dozed off behind the wheel, crashed, and killed the patient, or all of us, my decision to leave at 2 a.m. might be viewed differently.
While risk can be mitigated using clinical judgement, prognostication, knowledge of baseline rates, and Bayesian inference, nothing can protect us entirely from the randomness of the universe.
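As a concrete sketch of what Bayesian inference on baseline rates looks like, here is a minimal example. All the numbers are invented for illustration: an assumed baseline probability that a patient like this deteriorates overnight, updated by a hypothetical worrying finding with an assumed sensitivity and false-positive rate.

```python
def bayes_update(prior, p_finding_given_event, p_finding_given_no_event):
    """Posterior P(event | finding) via Bayes' rule."""
    numerator = p_finding_given_event * prior
    denominator = numerator + p_finding_given_no_event * (1 - prior)
    return numerator / denominator

# Illustrative numbers only, not clinical guidance:
prior = 0.05      # assumed baseline rate of overnight deterioration
posterior = bayes_update(prior, 0.80, 0.10)
print(f"{posterior:.2f}")  # prints 0.30
```

Even with a concerning finding, the posterior here is well under 50 percent; the math sharpens the odds, but it cannot tell you how this particular roll of the dice will land.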
Was the decision to send the patient in spite of the sleep-deprived drivers worth the risk? I still do not know. I don't think it was bullshit by any means, but I am left with some pretty severe cognitive dissonance about the whole thing.
Fighting the outcome bias takes a level of open-mindedness that is a constant struggle. At least for me, it is. When mistakes are made, it is hard not to have a knee-jerk reaction, to let the outcome influence how you view things, and to default to the fast and easy answer of incompetence, or worse.
Changing the culture of your agency is not an easy task; most of what is out there is cheap grace and lip service. Getting people to understand, and more importantly, believe in the ideas here, that outcomes do not necessarily reflect good or bad choices, can be a monumental task. People will resist. They won't believe you. The ego mounts a strong defense when confronted with something that challenges its assumptions about how the world works. Seeing in shades of gray is not an innate skill for most people.
So what are we talking about here? Sometimes it is easier to say what it isn't. It isn't a few members of the admin team boasting about adopting a culture of safety, it is not making your staff do an online training on the acronym du jour, nor is it purchasing the latest iteration of the just culture algorithm software. None of those things are inherently bad, but unless providers truly believe it and commit to it, they are just putting lipstick on a pig.
The real test is when people are exhausted, when they are "fried" and frustrated and angry and have been up for too long: can they still be objective and not let emotions override the thought process? Can they talk about these things without getting pissed off or defensive? Can a provider apply this to themselves when they made all the right decisions but there was still a bad outcome, avoiding the trap of getting mired in self-doubt and playing the "what-if" game? Can they be open-minded enough to understand that the post-mortem on a decision is a way to make better decisions, not a judgment of them as a person?
I should have a better conclusion here, but sometimes, you get what you pay for and this is a free blog.