COVID-19 Antibody Testing: Is it a waste of time?

The scenario

Your EMS agency has decided to test the staff for COVID-19 antibodies. The results of each person will be kept private — you will only find out your results, not those of others.

Your county has 20,000 people in it. The public health department performed 10,000 viral PCR tests for COVID-19 infection last week. There have been no new cases in the county since then. Out of the 10,000 tests done, 500 people were positive. Assume that this test is 99.999% accurate and that we're not concerned about false positives or false negatives with it. You just need to take this number at face value for now: 500 out of 10,000 people had COVID-19 in your county.

You remember feeling sick in February, but because testing was not available then, you were never tested. You don't know whether you actually had it this past winter. A few of your coworkers think they might have been sick then, but most were not. You did not have a known exposure at work: you wore the proper PPE every time you treated a patient with COVID-19 this spring, and you are a diligent hand-washer.

But you are a curious person. Your mind creates countless what-if scenarios. You decide to get the antibody test done; data suggest that perhaps up to 40% of those with COVID-19 never have symptoms or are paucisymptomatic.

Two days later your COVID-19 antibody test comes back positive.

The package insert for the antibody test your agency uses states that it is 95% sensitive and 95% specific. That seems pretty reliable, doesn't it?

What are the chances that you really did have COVID-19 based on the antibody test results?
A) 99% chance you can trust the results
B) 95% chance you can trust the results
C) 63% chance you can trust the results
D) 50% chance you can trust the results
E) 5% chance you can trust the results

What would you say the chances are that you actually had COVID-19? In order to really understand what these results mean, we need to discuss a few terms first.

Testing:
Tests should accurately discriminate between those who have a disease and those who do not have a disease. It’s tempting to think of things in binary terms here; either they have the disease and test positive or they don’t have the disease and test negative. But that is not how things work. Complex problems rarely have simple answers.

Test results:
When testing for a disease or condition there are four possible results (setting aside things like inconclusive results, which, for the sake of clarity, we are not going to discuss).

True positive: persons with the disease who test positive for the disease.
True negative: persons without the disease who test negative for the disease.
False positive: persons without the disease who nonetheless test positive.
False negative: persons with the disease whose test failed to detect it.

We can take the four results and put them in a table called a 2 x 2 contingency table, also known as a confusion matrix.
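To make the table concrete, here is a minimal Python sketch (the counts are invented for illustration; they happen to match the 95%/95% example that appears later in the article):

```python
# A 2 x 2 contingency table (confusion matrix) from the four outcomes.
# These counts are made up for illustration only.
tp = 47   # true positives:  have the disease, test positive
fn = 3    # false negatives: have the disease, test negative
fp = 47   # false positives: disease-free, test positive
tn = 903  # true negatives:  disease-free, test negative

print(f"{'':>10}{'Disease +':>12}{'Disease -':>12}")
print(f"{'Test +':>10}{tp:>12}{fp:>12}")
print(f"{'Test -':>10}{fn:>12}{tn:>12}")
```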


Prevalence: What percentage of the population has this condition? Take 1,000 people, count how many of them have the disease you're testing for, and that fraction is a good estimate of prevalence. Of course, you need to make sure the sample is representative of the population you are testing.

Prevalence using frequency trees: starting with 1,000 people and a 5% prevalence (confirmed with an infallible test), we end up with 50 people with the disease and 950 people who are free of the disease.
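The arithmetic of that frequency tree is short enough to sketch in Python (assuming the 1,000-person sample and 5% prevalence above):

```python
# Split 1,000 people into branches by a 5% prevalence.
population = 1000
prevalence = 0.05

diseased = population * prevalence    # 50 people with the disease
disease_free = population - diseased  # 950 people free of the disease

print(f"With disease: {diseased:.0f}")
print(f"Disease-free: {disease_free:.0f}")
```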

Specificity: How well a test can identify those who do not have the disease.

Specificity is important because it governs false positives: the more specific a test is, the more faith we can put in a positive result, because few of the people who DON'T have the disease will be flagged incorrectly.

To calculate this value, take the number of true negatives and divide it by the number of true negatives and false positives combined. Put another way: the number of true negatives divided by the number of all the people who really don't have the disease.

Sensitivity: How well a test can identify those with the disease. Sensitivity governs false negatives, and a false negative is probably the scariest result in healthcare. That nagging cough you've had all week: is it just allergies, or is it the start of COVID? You are supposed to work in a few days and don't want to give your coworkers the 'Rona. To calculate this value, divide the number of true positives by the number of true positives and false negatives (the people the test failed to detect) combined. You could design a test that is 100% sensitive, but the rate of false positives could be staggering. 100% sensitivity is best explained by thinking of your paranoid friend who thinks everything everyone says is about them. They are 100% sensitive, but they frequently misidentify things as being about them when they aren't.
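Both formulas are easy to check in code. A minimal sketch, reusing the illustrative counts from the contingency table above:

```python
# Sensitivity and specificity from the four contingency-table counts.
tp, fn, fp, tn = 47, 3, 47, 903  # illustrative counts from earlier

sensitivity = tp / (tp + fn)  # share of truly diseased people the test catches
specificity = tn / (tn + fp)  # share of truly disease-free people it clears

print(f"Sensitivity: {sensitivity:.0%}")  # 94% (95% before rounding the counts)
print(f"Specificity: {specificity:.0%}")  # 95%
```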

Using a test that is 95% sensitive and 95% specific on a population with a 5% prevalence rate is no better than tossing a coin: 47 true positives and 47 false positives.

Half of the positive results will be false positives. Testing in this scenario is the same as tossing a coin. Does this make the results meaningless? Maybe; it depends on a few other things.
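You can reproduce the coin-toss result with a few lines of arithmetic. A sketch, assuming the 95%/95% test and 5% prevalence from the scenario (the function name is mine, not a library call):

```python
def positive_counts(population, prevalence, sensitivity, specificity):
    """Return (true positives, false positives) for a screened population."""
    diseased = population * prevalence
    healthy = population - diseased
    true_pos = diseased * sensitivity        # sick people the test catches
    false_pos = healthy * (1 - specificity)  # healthy people flagged anyway
    return true_pos, false_pos

tp, fp = positive_counts(1000, 0.05, 0.95, 0.95)
print(f"True positives:  {tp:.1f}")  # 47.5, the 47 quoted above
print(f"False positives: {fp:.1f}")  # 47.5, the other 47
print(f"Chance a positive is real: {tp / (tp + fp):.0%}")  # 50%
```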

If the test becomes better, if the sensitivity or specificity is increased, it changes how much we can trust the results. Several examples follow. A word of caution, though: many manufacturers put sensitivity and specificity numbers in their information that make the tests look near perfect, while real life often shows that these numbers are much lower than initially claimed.

The new, improved test: now with 99% sensitivity and specificity (the prevalence is still 5%), we can put more faith in the results. The false positives drop from 47 people to 10.

A great test for a disease with a prevalence of 5% is pretty reliable. But what happens if the disease is not as commonplace as we thought? Even with the new, improved test, dropping the prevalence from 5% to 1% leaves us with results that are essentially meaningless again, at 50/50.

Using the new, improved test again, but the prevalence has dropped from 5% of the population to 1%. This puts us back at a 50/50 chance (10 true positives and 10 false positives).
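Reusing the same sketch function from above, both improved-test scenarios check out:

```python
# 99%/99% test, 5% prevalence: positives become much more trustworthy.
tp, fp = positive_counts(1000, 0.05, 0.99, 0.99)
print(f"5% prevalence: {tp:.1f} true vs {fp:.1f} false positives")  # 49.5 vs 9.5

# Same test, but prevalence drops to 1%: back to a coin toss.
tp, fp = positive_counts(1000, 0.01, 0.99, 0.99)
print(f"1% prevalence: {tp:.1f} true vs {fp:.1f} false positives")  # 9.9 vs 9.9
```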


Putting it all together: Bayes' Theorem.
We want to know how likely it is that someone has a disease after a test; in this case, we want to know how likely it is that someone really has COVID-19 antibodies and the result is not a false positive.

With two pieces of information we can compute the chances that someone has a disease based on a test result.

Remembering the contingency table from earlier, a test can produce both true positives and false positives. How do we figure out whether a given positive result is a true positive or a false positive? We need to know two pieces of information before we can do this.

Bayes' Theorem, in odds form: post-test odds = pre-test odds × likelihood ratio.

Pre-test probability:
What are the chances a person has the disease before any testing is done? The answer could also be a damn good guess that the febrile patient with a pericardial friction rub heard on auscultation is going to end up having pericarditis.

When there is no other information available, prevalence can be used to establish this. If there is more information available, this can be a well-reasoned or evidence-based number, but it might just be a well-educated guess.

If all things are equal, and there is no more information to add to the equation, then prevalence in a population will be the pre-test probability.

If a random person was snatched off the street and you knew nothing about them, you would say their pre-test probability is the same as the prevalence, roughly 3-5% for having had COVID-19. If the same person was in the ICU on oxygen, proned, with ARDS following a flu-like illness last week, their chances are much higher than 5%; something like a 50% pre-test probability would be closer to the truth.

For a healthcare worker who had a flu-like illness this winter but never got tested for COVID-19, the pre-test probability is certainly higher than the prevalence for the general population. The more information we can add to the scenario, the more accurately we can determine a pre-test probability.

Likelihood ratio:
How certain can we be that a positive result is an actual positive and not a false positive?

Likelihood ratios are broken down into a positive likelihood ratio and a negative likelihood ratio. How much should test results change our beliefs about reality? How much does a positive or negative result change how we view our initial diagnosis?

If we are fairly certain someone is having an AMI but their ECG is normal, how much do we adjust our beliefs? Not that much, because a normal ECG is not that powerful for ruling out ischemia.

To calculate the positive likelihood ratio you need to know how often the test produces false positives. Taking the true positive rate and dividing it by the false positive rate tells you how much trust should be placed in a positive result. The negative likelihood ratio is the opposite: what do we do with a negative result?

To calculate the negative likelihood ratio you need to know how good the test is at detecting those who do not have the disease (specificity). Divide the false negative rate by the true negative rate to find out how much you should trust a negative result.
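In code, both likelihood ratios are one-liners. A sketch using the 95% figures from the antibody test's package insert:

```python
sensitivity, specificity = 0.95, 0.95

lr_positive = sensitivity / (1 - specificity)  # true positive rate / false positive rate
lr_negative = (1 - sensitivity) / specificity  # false negative rate / true negative rate

print(f"+LR: {lr_positive:.0f}")  # 19
print(f"-LR: {lr_negative:.3f}")  # 0.053, rounded to 0.05 later in the article
```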

The Fagan Nomogram:
Nomograms are three-column charts: draw a straight line through your values in the first two columns and read the answer off the third.

You want to know what your results of the COVID-19 antibody test mean, rather than some random person's results. All things being equal, you start out with a 3-5% chance of having had this disease, based on prevalence alone. But are all things equal? Probably not. If you can add new information to the question, your answer changes. Updating beliefs as new information is presented is the heart of Bayes' Theorem.

If you live in the middle of nowhere, off the grid, and haven't seen another person in six months, your pre-test probability is essentially zero, and anything multiplied by zero is zero. If you are a healthcare provider and have worked with numerous COVID-19-positive patients in the past months, you are probably more likely to have had COVID-19 than the average person.

The best guesses for the sensitivity and specificity of COVID-19 antibody tests range from 95-99% for each. The test manufacturers' data should be taken with large grains of salt; many tests have not performed as advertised. When the likelihood ratios are calculated using 95%, we end up with a positive likelihood ratio of 19 (0.95 / 0.05) and a negative likelihood ratio of about 0.05 (0.05 / 0.95).

To be honest, I wasn't even sure at first how to multiply a percent by a ratio (the trick, it turns out, is converting the probability to odds first). The good news is you do not have to be good at math to understand the concepts. Let the Fagan Nomogram do the work for you.
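For the curious, the math the nomogram hides is a short odds conversion. A minimal sketch (the function name is mine, purely for illustration):

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Bayes' theorem in odds form: post-test odds = pre-test odds * LR."""
    pre_test_odds = pre_test_prob / (1 - pre_test_prob)  # probability -> odds
    post_test_odds = pre_test_odds * likelihood_ratio    # apply the test result
    return post_test_odds / (1 + post_test_odds)         # odds -> probability

# The scenario: 5% pre-test probability, positive result, +LR of 19.
print(f"{post_test_probability(0.05, 19):.0%}")  # 50%
```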

The Fagan Nomogram: draw a line from the pre-test probability through the likelihood ratio for the test you are using. You will end up with two lines, one for the +LR and one for the -LR. Sometimes both lines are placed on one graph; I'll use two separate graphs in this article for clarity.

The answers to the question about YOUR chances:
What follows are three different scenarios. The first one uses the 5% prevalence as the pre-test probability, then we move on to 20% and 30% pre-test probabilities. The first nomogram is the answer to our initial scenario.


At a 5% pre-test probability (which is probably higher than the USA average prevalence), a positive COVID-19 antibody test is equivalent to tossing a coin: the post-test probability is 50%.

Drawing a line from a 5% pre-test probability through the likelihood ratio of 19 leads us to a 50% post-test probability. This is the same number we arrived at with the frequency tree earlier in the article: 47 true positives and 47 false positives. What if we increase the pre-test probability, though?

If the pre-test probability is 20%, then a positive result for a COVID-19 antibody test is 80% reliable, which is pretty good.

Increasing the pre-test probability a bit more makes the positive results much more trustworthy.

At 30% pre-test probability a positive COVID-19 antibody test is almost 90% reliable.
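For readers who want exact numbers rather than lines on a graph, the same odds arithmetic reproduces all three scenarios:

```python
# Post-test probability for a positive result (+LR of 19) at three pre-test levels.
for pre_test in (0.05, 0.20, 0.30):
    odds = pre_test / (1 - pre_test) * 19  # pre-test odds times +LR
    post = odds / (1 + odds)               # back to a probability
    print(f"Pre-test {pre_test:.0%} -> post-test {post:.0%}")
# Pre-test 5%  -> post-test 50%
# Pre-test 20% -> post-test 83% (the nomogram reads roughly 80%)
# Pre-test 30% -> post-test 89% (almost 90%)
```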

On the other hand, it seems things have become less certain the more we know. COVID-19 presents an ever-moving target. Any time we begin to feel like we are getting our footing on solid ground, the rug is pulled out from under our feet. A recent study demonstrated that 40% of those confirmed to have had a previous COVID-19 infection did not have detectable levels of antibodies in their blood several months post-infection.

Switching from binary thinking, from thinking either someone has a disease or they do not, to a probabilistic approach is hard. The world becomes less black and white, answers are harder to find, and at times it can be exhausting. But as clinicians we should try to align our beliefs with reality as much as possible. Getting closer to the truth often involves embracing uncertainty.

Thanks to JK and RC for holding my hand with some of the math in this post.
