You may recall we've had a bit to say, none of it good, about the surveys alleging that Kung Flu is vastly more widespread than previously supposed.
We pointed out no small number of major flaws in all those studies.
Lest you think I'm alone in this, let me share a few links and pull quotes, courtesy of MD Ed Grouch:
"(The Guardian/UK) Both studies [at Stanford and USC/LA County] used an antibody test made by Premier Biotech company that has not been approved by the FDA and comes with an acknowledgment that it can record false positives.
Hundreds of antibody tests have emerged on the world market in recent weeks, including some that promise a result from a finger prick in just hours, an executive from the diagnostics and pharmaceutical company Roche told Reuters on Tuesday. None of them currently have FDA approval and some of them are “a disaster”, the Roche CEO, Severin Schwan, said.
Then there are concerns about the Stanford study’s sample and statistical analysis. The biggest criticism was that it estimated cases for the whole county’s population based on detecting only 50 positives out of 3,300 people sampled. And since the tests had a false positive rate in one assessment of two out of 371, critics argued all the Covid-19 cases detected by the tests in Santa Clara could conceivably have been false positives.
“I think the authors of the above-linked paper owe us all an apology,” wrote Andrew Gelman, director of the applied statistics center at Columbia University, who has written numerous books on teaching statistical methods. “We wasted time and effort discussing this paper whose main selling point was some numbers that were essentially the product of a statistical error.”
The prominent Washington state genetics researcher Trevor Bedford said on Twitter he was glad to see antibody studies emerging but was “skeptical” of the high results. The author and biotech investor Peter Kolchinsky tweeted that the “flaws with this study could trick you into thinking that getting shot in the head has a low chance of killing you”.
The study was also criticized for recruiting its volunteers on Facebook, a method some critics charged could have induced some to participate in the study because they had had symptoms but were unable to get tested. Researchers say they attempted to screen for this by collecting information from participants on any recent symptoms, such as coughing or fever.
Both the Stanford University team and the researchers at USC declined to respond to a request for comment.

We're just getting started.
From a peer review of the Stanford study posted on Medium:
From Columbia University's Statistical Modeling page, quotes from an e-mail sent to Andrew Gelman, Director of Columbia's Applied Statistics Center, and posted on their site:
- First, the false positive rate may be high enough to generate many of the reported 50 positives out of 3330 samples. Or put another way, we don’t have high confidence in a very low false positive rate, as the 95% confidence interval for the false positive rate is roughly [0%, >1.2%] and the reported positive rate is ~1.5%.
- Second, the study may have enriched for COVID-19 cases by (a) serving as a test-of-last-resort for symptomatic or exposed people who couldn’t get tests elsewhere in the Bay Area and/or (b) allowing said people to recruit other COVID-19 cases to the study in private groups. These mechanisms could also account for a significant chunk of the 50 positives in 3330 samples.
- Third, in order to produce the visible excess mortality numbers that COVID-19 is already piling up in Europe and NYC, the study would imply that COVID-19 is spreading significantly faster than past pandemics like H1N1, many of which had multiple waves and took more than a year to run their course.
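The arithmetic behind that first bullet is worth seeing for yourself. Here's a rough sketch in Python that reconstructs the quoted interval: an exact (Clopper-Pearson) upper bound on the false positive rate, given the 2 false positives in 371 validation samples mentioned above. The counts are the ones quoted in this post; everything else is illustrative.

```python
# Reconstruct the upper end of the 95% confidence interval for the false
# positive rate (2 false positives out of 371 known-negative samples),
# using only the Python standard library. Illustrative sketch, not the
# study authors' own analysis.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def cp_upper(x, n, alpha=0.05, tol=1e-9):
    """Exact (Clopper-Pearson) upper bound: the p at which observing
    x or fewer positives becomes an alpha/2-probability event."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(x, n, mid) > alpha / 2:
            lo = mid  # p too small; the data are still unsurprising
        else:
            hi = mid
    return (lo + hi) / 2

upper = cp_upper(2, 371)
print(f"95% upper bound on false positive rate: {upper:.2%}")
print(f"implied false positives in 3,330 samples: {upper * 3330:.0f}")
```

The upper bound comes out near 2%, which multiplied across 3,330 samples is more than the 50 positives the study observed. That is exactly the critics' point: the tests alone could plausibly account for every positive.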
It’s perfectly plausible that the shocking prevalence rates published in the study are mostly, or even entirely, due to false positives.
Recruitment was done via Facebook ads with basic demographic targeting. Since we're looking for a feature that affects something like 2% of the population (or much, much less), we really have to worry about self-selection. They may have discussed this in the portions of the paper I didn't read, but I can't imagine how researchers would defeat the desire to get a test if you had reason to believe that you, or someone near you, had the virus (and wouldn't some people hide those reasons to avoid being disqualified from getting the test?).

So all 50 positives could be statistical error, and the study had people self-selecting with an ulterior motive; either problem, by itself, could account for the entire positive sample. That's about as crapola as you can get.
But wait! There's more, this time about the tests being used in those studies:
(NBC News) But some COVID-19 antibody tests, including those being used by public health departments in Denver and Los Angeles and provided to urgent care centers in Maryland and North Carolina, were supplied by Chinese manufacturers that are not approved by China's Center for Medical Device Evaluation, a unit of the National Medical Product Administration, or NMPA, the country's equivalent of the U.S. Food and Drug Administration, NBC News has found.

Two U.S. companies — Premier Biotech of Minneapolis and Aytu Bioscience of Colorado — have been distributing the tests from unapproved Chinese manufacturers, according to health officials, FDA filings and a spokesman for one of the Chinese manufacturers. Many of the unapproved tests appear to have been shipped to the U.S. after the FDA relaxed its guidelines for tests in mid-March and before the Chinese government banned their export just over two weeks later.
If COVID-19 antibody tests are unreliable, they can produce false results, either negative or positive, health officials said.
Sketchy tests, with accuracy so dubious it even worries the Chinese, producing both false positives and false negatives, being used in surveys on which public health decisions might be made. What could possibly go wrong?
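To put numbers on that worry, here's a quick Bayes'-rule sketch of positive predictive value (PPV), the chance that a positive result is real. The 95% sensitivity, 98.5% specificity, and the prevalence values are hypothetical figures chosen for illustration, not specs from any of the tests or studies above.

```python
# Hypothetical illustration: what a "positive" is worth depends heavily
# on prevalence. Sensitivity/specificity here are assumptions for the
# sake of the example, not the real tests' published figures.

def ppv(sensitivity, specificity, prevalence):
    """P(truly infected | positive test), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# At prevalence in the low single digits, even a decent test gives
# roughly coin-flip positives; at higher prevalence, the same test
# looks far more trustworthy.
for prev in (0.015, 0.05, 0.20):
    print(f"prevalence {prev:5.1%}: PPV = {ppv(0.95, 0.985, prev):.1%}")
```

The punchline: the lower the true prevalence, the more a survey's headline number is driven by the test's error rate rather than by the disease.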
I repeat: all the US surveys to date have HUUUUUGE problems with sample size, selection bias, and test accuracy, and they are un-peer-reviewed hokum and horsie pooh that should be laughed out of any serious consideration.
You're being bullshitted by people who should know better, doing shoddy work, using subpar tests, and passing it off as research.