The twin pillars of the epidemiologic-statistical foundations of the "best evidence" part of Evidence-Based Medicine (EBM) are the randomized controlled trial (RCT) and the quantitative systematic review, also known as meta-analysis (MA).
In an earlier blog post, I suggested that a major insight of the RCT is that not everyone reacts the same way to a given treatment.
That is one of the basic facts of life in MAs as well, since a systematic review typically pools RCTs.
Two others are:
The conclusion of an MA depends on which published (or unpublished) studies are included in the pooled analysis and on the outcome statistic chosen.
Just as clinical advice needs to be hooked to cases to give it limbic valence, these abstractions can be made meaningful with a real-life case of different MAs reaching conclusions that would have very different implications for clinical medicine.
Two Danish researchers (Olsen and Gøtzsche, Cochrane Database Syst Rev. 2001;CD001877) concluded that screening mammography was not effective. In the same time frame the USPSTF concluded the opposite; both groups based their conclusions on MAs.
Dr. Steven Goodman's explanation of that discrepancy should be part of the handouts given to medical students in their course on EBM. In an editorial in the Annals of Internal Medicine (Sept 2002, volume 137, issue 5, pages 363-365), he explains that an MA is basically an observational design in which the subjects are published studies. The studies that are considered but eliminated from the analysis can make a big difference, and whether a study is kept or not rests on "competing claims" of methodologic validity. In this regard the average doctor, or for that matter the super doctor, is challenged to know which expert is correct. The Danish epidemiologists excluded more trials from their analysis than the USPSTF did and reached a different conclusion. Another reason for the difference was that the US group used breast cancer mortality while the Danes chose all-cause mortality as the summary statistic.
So the choice of studies to include and the summary statistic chosen determine the outcome. This example alone should convince medical students that meta-analyses are not infallible. They are context dependent.
I became aware of Goodman's Annals of Internal Medicine editorial in the blog "Medical Metamusings," and I later learned of two other publications that would be good additions to the study list given to medical students to illustrate the limitations of MAs.
LeLorier et al from Montreal published an article in NEJM in 1997 in which they described discrepancies between MAs and subsequent large RCTs. An editorial by John Bailar followed, entitled "The Promise and Problems of Meta-Analysis" (NEJM, volume 337:559-561, Aug 21, 1997).
A major point made by Bailar and LeLorier is that while a well-done MA can be helpful in presenting disparate studies on a common scale, using odds ratios, problems may arise when all of the data are summarized into one odds ratio that is supposed to capture the entire issue but may actually oversimplify a complex question and lead to erroneous conclusions. Simple answers to complex problems are welcome, but life is so messy that they are often just wrong. Meta-analysis is not rocket science; its techniques and particulars are still being worked out to some degree, and reputable researchers may disagree over operational issues. There is still a lot to learn about the best way to do them.
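To make the "common scale" idea concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling of odds ratios, the standard way disparate trials get collapsed into one summary number. The trial counts below are entirely hypothetical and are not data from any of the studies discussed; the point is only to show the mechanics.

```python
import math

# Hypothetical 2x2 trial results (NOT real data):
# (events_treated, n_treated, events_control, n_control)
trials = [
    (12, 100, 20, 100),
    (30, 250, 45, 250),
    (8, 120, 10, 118),
]

def log_odds_ratio(a, n1, c, n2):
    """Log odds ratio and its variance (Woolf's estimate) from a 2x2 table."""
    b, d = n1 - a, n2 - c          # non-events in each arm
    lor = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d    # variance of the log odds ratio
    return lor, var

# Fixed-effect pooling: weight each trial's log OR by the inverse of its variance
num = den = 0.0
for a, n1, c, n2 in trials:
    lor, var = log_odds_ratio(a, n1, c, n2)
    w = 1.0 / var
    num += w * lor
    den += w

pooled_or = math.exp(num / den)    # back-transform to the odds-ratio scale
print(f"Pooled odds ratio: {pooled_or:.2f}")
```

Deleting even one tuple from `trials` shifts `pooled_or`, which is exactly the sensitivity to inclusion criteria that Goodman describes; and the single pooled number says nothing about how much the individual trials disagree with one another, which is Bailar's oversimplification worry.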