John P. A. Ioannidis, writing in the August 2005 issue of PLoS Medicine, entitled his essay "Why Most Published Research Findings Are False." If that claim is true, how solid are the foundations of the recently venerated Evidence Based Medicine (EBM)? Obviously, not solid at all. He states: "It can be proven that most claimed research findings are false." After one rather dense paragraph, he says "... a research finding is more likely true than false if (1 - beta)R > alpha, where 1 - beta is the power of the study, R is the ratio of true relationships to no relationships, and alpha is the type I error rate, typically 0.05." It seems to me that if we knew the ratio of true to no relationships, we would have to have had some a priori knowledge of what was true, which we do not. A letter to the editor by Jonathan Wren of the University of Oklahoma goes down this same counter-argument road when he says that yes, the probability that a research finding is true depends in part on the prior probability of it being true, but we do not know that probability; we merely make guesses or estimates about it.
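To make the arithmetic concrete, here is a minimal Python sketch of the positive predictive value (PPV) formula from Ioannidis's paper, PPV = (1 - beta)R / (R - beta*R + alpha). The sample numbers below are illustrative choices of mine, not values taken from the paper; note that PPV exceeds 0.5 exactly when (1 - beta)R > alpha, which is the threshold quoted above.

```python
def ppv(power, R, alpha=0.05):
    """Post-study probability a claimed finding is true, per Ioannidis:
    PPV = (1 - beta) * R / (R - beta * R + alpha), where beta = 1 - power."""
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

# Illustrative (assumed) inputs: a well-powered study in a field where
# 1 in 5 tested relationships is true (R = 0.25 = 0.2/0.8).
print(ppv(power=0.8, R=0.25))   # -> 0.8, i.e. likely true

# Same power, but only 1 in 21 tested relationships true (R = 0.05):
print(ppv(power=0.8, R=0.05))   # -> about 0.44, i.e. more likely false
```

This is just a restatement of Bayes' theorem with R playing the role of the prior odds, which is Wren's point: the answer is only as good as our guess at R.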
Ioannidis's thesis cannot be verified (or falsified) unless we have some independent method of determining whether study results are true. An individual study can be falsified, if robust contradictory evidence is adduced, or it can become stronger when attempts at falsification fail, but the generalization that most studies are false does not seem to be the type of statement readily falsified on empirical grounds, since you cannot ever really prove a given study true. You can sometimes prove one is false, but to prove that most are is a Herculean task, particularly given the increasing pace at which new studies are published. Wren invokes a Russell's Paradox type of argument: if most studies are wrong, what about the studies that Ioannidis marshaled to support his thesis, and what does that say about Ioannidis's own published study?
My eyes glaze over more than a little when I dive into Ioannidis's theoretical argument, but his so-called "corollaries" seem to ring true and have been widely discussed in the medical literature. These include various factors that tend to make studies less reliable. For example,
smaller studies are less likely to be true, as are studies that target small effects (e.g., a relative risk of 1.05).
Further, the greater the number, and the lesser the selection, of tested relationships in a scientific field, the less likely the research findings are to be true.
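The first of these corollaries falls straight out of the PPV formula above. A short sketch, again using illustrative numbers of my own choosing rather than figures from the paper, shows how shrinking power (the hallmark of smaller studies) drags down the probability that a positive finding is true, all else held equal:

```python
def ppv(power, R, alpha=0.05):
    # Ioannidis's positive predictive value: (1 - beta)R / (R - beta*R + alpha)
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

# Hold the field's prior odds fixed (R = 0.25, an assumed value) and
# vary power: lower power means a lower chance a "positive" is real.
for power in (0.8, 0.5, 0.2):
    print(f"power={power:.1f}  PPV={ppv(power, R=0.25):.2f}")
```

A study chasing a tiny effect behaves the same way, since detecting a small effect at a fixed sample size amounts to running at low power.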
Ioannidis's paper will not derail medical or scientific research (after all, science marched on even after David Hume's critique of induction), but I do welcome efforts that might help medical students develop a healthy skepticism about research findings. However, even in that light, the letter to the editor by Stephen Pauker is also worth reading. For a researcher the issue may well be whether the finding is more likely true than not, or even a higher standard of evidence, but for the clinical doctor the question is, given the circumstances of the patient, what is the best thing to do? Even if we are not sure of a given therapy's efficacy, and even if the probability of it being efficacious is less than 0.5, offering that treatment may be the best thing to do. I think he is saying that even though there may be many good reasons to be skeptical, using the best (realizing it is not perfect or guaranteed) evidence we have is really the best we can do. Often, maybe most of the time, we have to go with the evidence we have, not the evidence we want to have.