The generally (or at least widely) accepted formulation of the hierarchy of medical evidence, a central construct of what has come to be known as evidence-based medicine (EBM), places the randomized clinical trial (RCT) and the meta-analysis (MA) at the top as the premium types of evidence, capable of trumping the other forms listed below them on the evidentiary ranking. Observational studies, case reports, personal experience, and notably physiologic considerations fall on the lower rungs of the ladder. According to a widely accepted and practiced version of this scheme, higher levels of evidence trump lower levels.
For some time I have been perplexed by this hierarchy.
The first thing that bothered me was whether meta-analyses deserved such a lofty position. Two things convinced me that they definitely did not.
The important Annals of Internal Medicine articles and editorial by Dr. Steve Goodman of Johns Hopkins made it clear to me that MAs are basically themselves observational studies in which the studies are the subjects. He discussed two major MAs on the value of mammograms: one concluded they were effective and valuable, and the other concluded the opposite. The major difference between them was their choice of which studies to include and exclude. Both sets of authors maintained that their exclusion criteria were valid, yet the criteria were quite different and led to opposite conclusions. Quoting Goodman:
... this controversy shows that the justification for why studies are included or excluded from the evidence base can rest on competing claims of methodologic authority that look little different from the traditional claims of medical authority that proponents of evidence-based medicine have criticized.
Secondly, not only is the choice of inclusion rules important but so are the statistical techniques used to analyze the data. A widely quoted MA on the use of large doses of vitamin E published in the Annals of Internal Medicine made clear to me the importance of methodology and how impenetrable the bickering between statistical experts can be as they debate the merits of their technique of choice. In this case, the authors found a tiny increase (relative risk of 1.03) in overall deaths attributable to the vitamin E. A flurry of letters to the editors claimed their technique was wrong and that when the "correct" method was used there was in fact no statistically significant difference. Since so much depends on the investigator's choice of which studies to include and which method to use to analyze the data, and since those choices take place, at least for the usual physician reader, behind a thick methodological-statistical curtain, faith in the authors becomes very prominent. Accepting that type of evidence involves more than a little faith.
It seemed clear to me that the MA did not belong at the top of the evidence ladder, and I wrote about it here.
Sometime later I became aware of a more fundamental problem in this evidence ranking and trumping system through a 2001 article by Dr. M.R. Tonelli, and after wrestling with that article for a while I began to think that the construct of a hierarchy was itself in error. Tonelli said in part:
Proponents of evidence-based medicine have made a conceptual error by grouping knowledge from clinical experience and physiologic rationale under the heading of evidence and then compounded the error by developing hierarchies of evidence that relegate these forms of medical knowledge to the lowest rungs. Empirical evidence, when it exists, is viewed as the "best" evidence on which to make a clinical decision, superseding clinical experience and physiologic rationale.
More recently, several articulate and thoughtful bloggers have discussed what one called the "elephant in the [EBM] room". Orac had this to say in a recent posting:
As I've come to realize, the elephant in the room when it comes to EBM is that it relegates basic science and estimates of prior probability based on that science to one of the lowest forms of evidence, to be totally trumped by clinical evidence. This may be appropriate when the clinical evidence is very compelling and shows a very large effect; in such cases we may legitimately question whether the basic science is wrong. But such is not the case for homeopathy, where the basic science evidence is exceedingly strong against it and the clinical evidence, even from the "positive" studies, generally shows small effects. EBM, however, tells us that this weak clinical evidence must trump the very strong basic science, the problem most likely being that the originators of the EBM movement never saw CAM coming and simply assumed that supporters of EBM wouldn't waste their time investigating therapeutic modalities with an infinitesimally small prior probability of working.
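Orac's point about prior probability can be made concrete with a simple Bayes-rule sketch. The numbers below are purely illustrative (a conventional 80% power and 5% false-positive rate, with hypothetical priors), not figures from any actual trial, but they show why a "positive" trial means very different things for a plausible therapy and for a homeopathy-like claim:

```python
def posterior_prob(prior, power=0.8, alpha=0.05):
    """P(treatment really works | trial came out 'positive'), by Bayes' rule.

    prior: probability the treatment works before the trial
    power: P(positive trial | treatment works)
    alpha: P(positive trial | treatment does not work), the false-positive rate
    """
    return (power * prior) / (power * prior + alpha * (1.0 - prior))

# A biologically plausible therapy (prior ~ 0.5): a positive trial is convincing.
print(posterior_prob(0.5))    # roughly 0.94

# A homeopathy-like claim (prior ~ one in a million): the same "positive"
# trial leaves the probability of efficacy still vanishingly small.
print(posterior_prob(1e-6))   # roughly 0.000016
```

The arithmetic is the whole argument: when the prior is tiny, almost every positive result is a false positive, so weak clinical evidence cannot sensibly trump strong basic science.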
It seems that the elephant's cloak of invisibility was torn away when a number of small clinical trials allegedly found that such things as homeopathy and reiki (the most improbably absurd of the improbable methods of CAM) seemed to work, or when at a minimum the claim was made that larger trials were needed. Perhaps worse still, some meta-analyses of those trials by folks highly regarded in EBM circles (such as the Cochrane Collaboration) suggested there was some evidence of their efficacy, those analyses notable for the absence of any mention of how these techniques fly in the face of current concepts of chemistry, physics, and physiology, let alone contrary prior experience. The message of "trials trump basic science" seems to have been taken to heart by the folks at Cochrane.
Dr. RW had this to say:
... treatments must pass not only the evidentiary test but also the test of scientific plausibility. Because EBM devalues the latter it is inadequate for the evaluation of implausible claims even though it may perform well in evaluating plausible ones. This fundamental error is built into EBM’s system of analysis as illustrated by its evidence hierarchy, which places physiologic rationale and scientific principles at the bottom of the heap...
Essential reading on this general topic must include the postings by Dr. Kimball Atwood IV and the articles by Dr. Steve Goodman explicating meta-analysis and the topic of prior probability in the form of Bayesian analysis.
Goodman said that "data alone cannot prove the hypothesis"; it is essential to take biological plausibility and prior evidence into account. The astounding example of how sincere disciples of the extreme-empiricism form of EBM "analyzed" homeopathy should be all we need to listen closely to what Goodman and Tonelli have been saying. Was it naive to think that a calculus had been devised that enabled us to make decisions simply by using a ranking system in which types of evidence higher on the pole invariably trump those lower? To quote Goodman yet again:
Judgment determines what evidence is admissible and how strongly to weigh different forms of admissible evidence. When there is consensus on these judgments and the data are strong, an illusion is created that the evidence is speaking for itself and that the methods are objective. But this episode [the mammogram controversy mentioned above] should raise awareness that judgment cannot be excised from the process of evidence synthesis and that the variation of this judgment among experts generates uncertainty just as real as the probabilistic uncertainty of statistical calculations.
I cannot help but think that for some the EBM ranking-and-trumping system became a means to avoid the judgment Dr. Goodman writes about, though it may not have been so intended by the founders of the movement. If medical students grow up believing that clinical trials always trump basic science, the incredible growth of CAM in mainline medical schools (see here and here for how bad this is getting) will only continue to get worse. Everything I have said here has been said better by the folks I quoted, but I think it is important to keep the fires of protest burning.