Four Big Problems with the Stanford Antibody Study

Re: COVID-19 Antibody Seroprevalence in Santa Clara County, California

Eric Sartori RN

Here's a link to the study if you want to follow along.

Check out these sexy headlines

"Way more people may have gotten coronavirus than we thought, small antibody study suggests."

"Antibody tests suggest that coronavirus infections vastly exceed official counts"

"Far more people may have been infected by coronavirus in one California county, study estimates"

Are you as curious about these headlines as I am?

If you are reading this, you may be wondering about the Stanford Antibody Study. Maybe, like me, you think something seems a bit off. I work in a COVID unit, so I'm constantly watching the numbers. I'm not a research expert, but as a nurse I've received some education on how research is conducted, and I'll use that understanding to look at this study with a skeptical eye. I've been browsing conversations among people with the knowledge and expertise to critique a study like this, and here are a few takeaways. I'll keep it simple, because research conversations can become mind-bogglingly complex. But first...

Keep this in context!

Before we get into specific problems, keep in mind that not all published research has undergone the scrutiny of peer review. Now that the Stanford Antibody Study has been released and the media is going crazy over it, experts in immunology and research analysis are only just able to ask questions, critique the methods, and deliberate about the study's quality. This is what peer review looks like. The media getting ahead of the science is a big problem and is why the media loses credibility: reporters sometimes feel the need to beat competing outlets before scientific scrutiny is complete, which leads to public confusion. Now for the problems...

The Problem with the Numbers

1. If you take the numbers from this study (a death rate of 1 in 600) and apply them to another city, they don't make sense. Here's an example, updated from the discussion linked at the bottom of this page.

There are currently 10,000 recorded deaths in New York City (as of 4/19/20, 1900 EST). If you use the study's rate of 1 death per 600 cases, there should be 6,000,000 cases in New York City. Since nearly 9,000,000 people live in New York City, about two in three residents would have been infected by now. That seems very unlikely, because it would require a rate of symptomless infection far higher than what the majority of studies conducted in other parts of the world have found.
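The arithmetic above can be sketched in a few lines. These are the figures quoted in this post (deaths as of 4/19/20, and "nearly 9,000,000" residents rounded up), not precise official counts:

```python
# Sanity check of the study's implied fatality rate (1 death per
# 600 infections) against New York City's numbers as of 4/19/20.

deaths_nyc = 10_000          # recorded NYC deaths at the time
cases_per_death = 600        # implied by the study's estimate
population_nyc = 9_000_000   # "nearly 9,000,000" residents

implied_infections = deaths_nyc * cases_per_death
implied_share = implied_infections / population_nyc

print(f"Implied infections: {implied_infections:,}")   # 6,000,000
print(f"Share of NYC infected: {implied_share:.0%}")   # roughly two in three
```

If the study's rate were right, two out of every three New Yorkers would already have been infected, which no other data supports.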

The Problem with Antibody Testing

2. No test is perfect, and there is a real question whether the results fall within the test's margin of error.

The researchers state that they adjusted for this problem, but it's difficult to tell how when the reported specificity (how reliably the test avoids false positives) ranges anywhere from 90% to 100%. Which is it, 90% or 100%? This is what you will see debated as the story unfolds. Even the authors acknowledge it: "New information on test kit performance and population should be incorporated as more testing is done and we plan to revise our estimates accordingly." The question is: what is this test's false positive rate, and could the results be merely reflecting false positives?
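To see why that specificity range matters so much, here's a rough back-of-the-envelope sketch. The sample size and positive count below are hypothetical placeholders, not the study's actual figures; the point is how quickly false positives can swamp a small true signal:

```python
# Hypothetical illustration (numbers are NOT from the study): how many
# "positive" results could be false positives at different specificities?

def expected_false_positives(n_tested, specificity, true_prevalence):
    """Expected false positives among people who are truly uninfected."""
    truly_negative = n_tested * (1 - true_prevalence)
    return truly_negative * (1 - specificity)

n_tested = 3000          # hypothetical sample size
observed_positives = 45  # hypothetical count of positive results

for specificity in (1.00, 0.995, 0.99, 0.95, 0.90):
    # Worst case for illustration: assume nobody tested is truly infected.
    fp = expected_false_positives(n_tested, specificity, true_prevalence=0.0)
    print(f"specificity {specificity:.1%}: up to {fp:.0f} false positives "
          f"vs {observed_positives} observed positives")
```

In this toy setup, anywhere below roughly 98.5% specificity, false positives alone could account for every observed positive, which is exactly why a 90–100% range leaves the study's headline estimate up in the air.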

Problems with the Sample

3. In any study it is often difficult to take a small sample of people and make generalizations to the whole population. 

Researchers use a number of methods to try to ensure that a sample matches the greater population. The problem in this study is that the sample is likely flawed because of who decided to show up for the test. Since the test was advertised on Facebook, the people most likely to volunteer were those who had symptoms, were concerned about exposing others, and were motivated to be tested. People without symptoms wouldn't be as motivated to show up. This would skew the results toward higher numbers than in the general population. It's like inviting people to a cheeseburger taste test and then using the crowd that shows up to estimate how many vegetarians live in the area.

The Problem with the Data

4. As you can see from the published study, the raw data is missing, so it's difficult for peer reviewers to accurately scrutinize the methods used. As someone suggested, it's possible this was done to make the study easier to publish: from my understanding, an Institutional Review Board can slow down publication when a study proposes to release raw data, because of concerns over participant confidentiality. Either way, the hasty publication of these findings will make the peer review process harder to resolve.

Keeping an eye on the discussion and the headlines

There were many other concerns discussed on the platforms I was able to peruse today, and more questions may come up in the near future. For now, though, I think these four problems make it pretty clear that the media should be more cautious with their headlines and careful to keep the story in its proper context.

Here's a link to one of the discussions if you'd like to see what scientific scrutiny looks like behind the scenes:

I'll continue to update this as the conversation evolves. 
