**The scenario**

Your EMS agency has decided to test the staff for COVID-19 antibodies. The results of each person will be kept private — you will only find out your results, not those of others.

Your county has 20,000 people in it. Last week the public health department performed 10,000 viral PCR tests for COVID-19 infection, and there have been no new cases in the county since then. Out of the 10,000 tests done, 500 people were positive. Assume that this test is 99.999% accurate and that we're not concerned about false positives or false negatives with it. You just need to take this number at face value for now: 500 out of 10,000 people had COVID in your county.

You remember feeling sick in February, but because testing was not available then, you were never tested. You don't know whether you actually had it this past winter. A few of your coworkers think they might have been sick then, but most were not sick last winter. As far as you know, you had no true exposure at work: you wore the proper PPE every time you treated a patient with COVID-19 this spring, and you are a diligent hand-washer.

But you are a curious person. Your mind creates countless what-if scenarios. You decide to get the antibody test done; data suggest that perhaps up to 40% of those with COVID-19 never have symptoms or are paucisymptomatic.

The package insert for the antibody test your agency uses states it is 95% sensitive and 95% specific. That seems pretty reliable, doesn't it?

Two days later, your COVID-19 antibody test comes back positive.

**What are the chances that you really did have COVID-19 based on the antibody test results?**

- A) 99% chance you can trust the results
- B) 95% chance you can trust the results
- C) 63% chance you can trust the results
- D) 50% chance you can trust the results
- E) 5% chance you can trust the results

What would you say the chances are that you actually had COVID-19? In order to really understand what these results mean, we need to discuss a few terms first.

__Testing:__

Tests should accurately discriminate between those who have a disease and those who do not. It's tempting to think in binary terms here: either they have the disease and test positive, or they don't have the disease and test negative. But that is not how things work. Complex problems rarely have simple answers.

**Test results:** When testing for a disease or condition there are four possible results (aside from things like inconclusive results, which for the sake of clarity we are not going to discuss).

**True positive:** persons with the disease who test positive for the disease.

**True negative:** persons without the disease who test negative for the disease.

**False positive:** persons without the disease who nonetheless test positive.

**False negative:** persons with the disease whose test failed to detect it.

We can take the four results and put them in a table called a 2 x 2 contingency table, also known as a confusion matrix.
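As a sketch, the four cells of that table can be written out in code. The counts below are hypothetical, chosen only to make the layout concrete:

```python
# A 2 x 2 contingency table (confusion matrix) with hypothetical counts.
#
#                    Disease present   Disease absent
#   Test positive        TP = 90           FP = 45
#   Test negative        FN = 10           TN = 855

TP, FP, FN, TN = 90, 45, 10, 855

print("Total tested:", TP + FP + FN + TN)   # 1000
print("Truly diseased:", TP + FN)           # 100
print("Truly healthy:", FP + TN)            # 900
```

The rows group people by what the test said; the columns group them by the truth. Every person tested lands in exactly one of the four cells.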

**Prevalence:** What percentage of the population has this condition? Take 1,000 people, see how many of them have the disease you're testing for, and that is a good estimate of prevalence. Of course, you need to make sure the sample is representative of the population you are testing.

**Specificity:** How well a test can identify those who do not have the disease. A false negative is probably the scariest result in healthcare. That nagging cough you've had all week: is it just allergies, or is it the start of COVID? You are supposed to work in a few days and don't want to give your coworkers the 'Rona.

Specificity is important because it tells us how well the test identifies those who DON'T have the disease. The higher the specificity, the fewer healthy people are wrongly flagged as positive.

To calculate this value, take the number of true negatives and divide it by the number of true negatives plus false positives. Put another way, it is the true negatives divided by all the people who really don't have the disease.

**Sensitivity:** How well a test can identify those with the disease. To calculate this value, divide the number of true positives by the number of true positives plus false negatives (the people the test missed). You could design a test that is 100% sensitive, but the rate of false positives could be staggering. 100% sensitivity is best explained by thinking of your paranoid friend who thinks everything everyone says is about them. They are 100% sensitive, but they frequently misidentify things as being about them when they aren't.
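The two formulas can be sketched with hypothetical counts (the TP/FP/FN/TN numbers here are illustrative, not from the scenario):

```python
# Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP).
# Counts are hypothetical, for illustration only.
TP, FP, FN, TN = 90, 45, 10, 855

sensitivity = TP / (TP + FN)   # fraction of diseased people the test catches
specificity = TN / (TN + FP)   # fraction of healthy people the test clears

print(sensitivity)   # 0.9
print(specificity)   # 0.95
```

Notice that neither number says anything about prevalence; that omission is exactly where the trouble starts.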

**Half of the results will be false positives.**

Testing in this scenario is the same as tossing a coin. Does this make the results meaningless? Maybe. It depends on a few other things.
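The coin-toss claim can be checked with a quick frequency-tree calculation using the scenario's numbers: 1,000 people, 5% prevalence, and a test that is 95% sensitive and 95% specific.

```python
# Frequency tree: 1,000 people, 5% prevalence, 95% sensitivity/specificity.
diseased = 50          # 5% of 1,000 truly had COVID
healthy = 950          # the rest did not

true_pos = diseased * 0.95     # 47.5, roughly 47 true positives
false_pos = healthy * 0.05     # 47.5, roughly 47 false positives

# Positive predictive value: the chance a positive result is real.
ppv = true_pos / (true_pos + false_pos)
print(round(ppv, 2))  # 0.5 -- a positive result is a coin flip
```

The true positives and false positives arrive in equal numbers, so a positive result tells you almost nothing on its own.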

If the test becomes better, that is, if the sensitivity or specificity is increased, it changes how much we can trust the results. Several examples follow. A word of caution: many manufacturers publish sensitivity and specificity numbers that make their tests look near perfect, but real life often shows these numbers are much lower than initially claimed.

A great test for a disease with a prevalence of 5% is pretty reliable. But what happens if the disease is not quite as commonplace as we thought? Even with the new, improved test, dropping the prevalence from 5% to 1% leaves us with results that are essentially meaningless again, at 50/50.
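A sketch of that prevalence effect, assuming the improved test is 99% sensitive and 99% specific (my assumption for illustration; the exact figures for the improved test are not pinned down above):

```python
def ppv(prevalence, sens, spec):
    """Positive predictive value: P(disease | positive test)."""
    true_pos = prevalence * sens
    false_pos = (1 - prevalence) * (1 - spec)
    return true_pos / (true_pos + false_pos)

# Same improved (99%/99%) test, two different prevalences:
print(round(ppv(0.05, 0.99, 0.99), 2))  # 0.84 -- pretty reliable
print(round(ppv(0.01, 0.99, 0.99), 2))  # 0.5  -- back to a coin flip
```

Nothing about the test changed between the two lines; only the prevalence did.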

**Putting it all together: Bayes' theorem.** We want to know how likely it is that someone has a disease after a test. In this case, we want to know how likely it is that someone *really* has COVID-19 antibodies and is not a false positive.

With two pieces of information we can compute the chances that someone has a disease based on a test.

Remembering the contingency table from earlier, tests can produce both true positives and false positives. How do we figure out whether someone is a false positive or a true positive?

We need to know two pieces of information before we can do this.

**Pre-test probability:**

What are the chances a person has the disease before any testing is done? The answer could also be a damn good guess that the febrile patient with a pericardial friction rub heard on auscultation is going to end up having pericarditis.

When there is no other information available, prevalence can be used to establish this. If more information is available, this can be a well-reasoned or evidence-based number, but it might just be a well-educated guess.

If all things are equal, and there is no more information to add to the equation, then prevalence in a population will be the pre-test probability.

If a random person was snatched off the street and you knew nothing about them, you would say their pre-test probability is the same as the prevalence, roughly 3-5% for having had COVID. If the same person is in the ICU on oxygen, proned, with ARDS following a flu-like illness last week, their chances are much higher than 5%; something like a 50% pre-test probability would be closer to the truth.

For a healthcare worker who had a flu-like illness this winter but never got tested for COVID-19, the pre-test probability is certainly higher than the prevalence for the population at large. The more information we can add to this scenario, the more accurately we can determine a pre-test probability.

**Likelihood ratio:**

How certain can we be that a positive result is an actual positive and not a false positive?

Likelihood ratios are broken down into a positive likelihood ratio and a negative likelihood ratio. How much should test results change our beliefs about reality? How much does a positive or negative result change how we view our initial diagnosis?

If we are fairly certain someone is having an AMI but their ECG is normal, how much do we adjust our beliefs? Not that much, because a normal ECG is not that powerful for ruling out ischemia.

To calculate the positive likelihood ratio you need to know how often the test produces false positives. Taking the true positive rate (sensitivity) and dividing it by the false positive rate (1 minus specificity) tells you how much trust should be placed in a positive result. The negative likelihood ratio is the opposite: what do we do with a negative result?

To calculate the negative likelihood ratio you need to know how good the test is at detecting those who do not have the disease (specificity). Divide the false negative rate (1 minus sensitivity) by the true negative rate (specificity) to find out how much you should trust a negative result.
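Both definitions can be sketched directly from the 95%/95% figures on the package insert:

```python
sens = spec = 0.95

# LR+ = true positive rate / false positive rate
lr_pos = sens / (1 - spec)
# LR- = false negative rate / true negative rate
lr_neg = (1 - sens) / spec

print(round(lr_pos))     # 19
print(round(lr_neg, 2))  # 0.05
```

A likelihood ratio of 1 would mean the test changes nothing; the further from 1 in either direction, the more a result should move your beliefs.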

**The Fagan Nomogram:**

Nomograms are three-column charts: drawing a straight line through known values in two of the columns leads you to the answer in the third.

You want to know what *your* results of the COVID-19 antibody test mean, rather than some random person's results. All things being equal, you start out with a 3-5% chance of having had this disease based on prevalence alone. But are all things equal? Probably not. If you can add new information to the question, your answer changes. Updating beliefs as new information is presented is the heart of Bayes' theorem.

If you live in the middle of nowhere, off the grid, and haven't seen another person in six months, your pre-test probability is essentially zero, and if you multiply anything by zero, the answer is zero. If you are a healthcare provider and have worked with numerous COVID-19 positive patients in the past months, you are probably more likely to have had COVID-19 than the average person.

The best guesses for the sensitivity and specificity of COVID-19 antibody tests range from 95-99% for each. The test manufacturers' data should be taken with large grains of salt; many of the tests have not performed as advertised. When the likelihood ratios are calculated using 95%, we end up with **a positive likelihood ratio of 19 and a negative likelihood ratio of 0.05**.

To be honest, I'm not even sure how to multiply a percent by a ratio. The good news is you do not have to be good at math to understand the concepts. Let the Fagan nomogram do the work for you.

**The answers to the question about YOUR chances:**

What follows are three different scenarios. The first uses 5% prevalence as the pre-test probability, then we move on to 20% and 30% pre-test probabilities. The first nomogram is the answer to our initial scenario.

Drawing a line from a 5% pre-test probability through the likelihood ratio of 19 leads us to a 50% post-test probability. This is the same number we arrived at with the frequency tree earlier in the article, with 47 true positives and 47 false positives. What if we increase the pre-test probability, though?

Increasing the pre-test probability a bit more makes the positive results much more trustworthy.
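For readers who do want the arithmetic behind the nomogram: convert the pre-test probability to odds, multiply by the likelihood ratio, and convert back. A sketch using the three pre-test probabilities above and the positive likelihood ratio of 19:

```python
def post_test_probability(pretest_prob, lr):
    """Odds form of Bayes' theorem: pre-test odds x LR = post-test odds."""
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# LR+ of 19, from a 95% sensitive / 95% specific test.
for p in (0.05, 0.20, 0.30):
    print(f"{p:.0%} pre-test -> {post_test_probability(p, 19):.0%} post-test")
# 5% -> 50%, 20% -> 83%, 30% -> 89%
```

This is the same line-drawing the nomogram does graphically; the odds conversion is simply hidden in the chart's scales.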

On the other hand, it seems things have become less certain the more we know. COVID-19 presents an ever-moving target. Any time we begin to feel like we are getting our footing on solid ground, the carpet is pulled out from under us. A recent study demonstrated that 40% of those confirmed to have had a previous COVID-19 infection did not have detectable levels of antibodies in their blood several months post-infection.

Switching from binary thinking, from thinking either someone has a disease or they do not, to a probabilistic approach is hard. The world becomes less black and white, answers are harder to find, and at times it can be exhausting. But as clinicians we should try to align our beliefs with reality as much as possible. Getting closer to the truth often involves embracing uncertainty.

*Thanks to JK and RC for holding my hand with some of the math in this post.*