Monday, May 02, 2005

Retired Doc's Suggestion for Medical Curriculum, Part 8: Meta-Analysis - Know Its Limitations

The twin pillars of the epidemiologic-statistical foundations of the "best evidence" part of evidence-based medicine (EBM) are the randomized controlled trial (RCT) and the quantitative systematic review, AKA meta-analysis (MA).

In an earlier post, I suggested that a major insight of the RCT is that not everyone reacts the same to a given treatment.
That is one of the basic facts of life in MAs as well, since a systematic review typically pools RCTs.

Two others are:

The conclusion of a MA depends on which published (or unpublished) studies are included in the pooled analysis.
The conclusion also depends on the outcome statistic chosen.
Just as clinical advice needs to be linked to cases to give it limbic valence, these abstractions can be made meaningful with a real-life case of different MAs reaching conclusions that would have very different implications for clinical medicine.
Two Danish researchers, Olsen and Gøtzsche (Cochrane Database Syst Rev. 2001;CD001877), concluded that screening mammography was not effective. In the same time frame, the USPSTF concluded the opposite. Both groups based their conclusions on MAs.

Dr. Steven Goodman's explanation of that discrepancy should be part of the handouts given to medical students in their course on EBM. In an editorial in the Annals of Internal Medicine (Sept 2002, volume 137, issue 5, pages 363-365), he explains that a MA is basically an observational design in which the subjects are published studies. The studies that are considered but eliminated from the analysis can make a big difference, and why studies are kept or excluded rests on "competing claims" of methodologic validity. In this regard, the average (or for that matter the super) doctor is challenged to know which expert is correct. The Danish epidemiologists excluded more trials from their analysis than did the USPSTF and reached a different conclusion. Another reason for the difference was that the US group used breast cancer mortality while the Danes chose all-cause mortality as the summary statistic.

So, the choice of studies to include and the summary statistic chosen determine the outcome. This example alone should convince medical students that meta-analyses are not infallible. They are context dependent.
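The effect of the summary-statistic choice is easy to demonstrate with a little arithmetic. The numbers below are entirely hypothetical, invented for illustration and not taken from any of the mammography trials: because breast cancer deaths are a small fraction of all deaths, a sizable relative reduction in cause-specific mortality can be nearly invisible in all-cause mortality.

```python
# Hypothetical (invented) trial counts: 10,000 women per arm.
n = 10_000
breast_ca_deaths = {"screened": 30, "control": 40}   # breast cancer deaths
other_deaths = {"screened": 970, "control": 970}     # deaths from other causes

def risk_ratio(events_screened, events_control, n_per_arm=n):
    """Risk in the screened arm relative to the control arm."""
    return (events_screened / n_per_arm) / (events_control / n_per_arm)

rr_breast_ca = risk_ratio(breast_ca_deaths["screened"],
                          breast_ca_deaths["control"])
rr_all_cause = risk_ratio(
    breast_ca_deaths["screened"] + other_deaths["screened"],
    breast_ca_deaths["control"] + other_deaths["control"],
)

print(f"breast cancer mortality RR: {rr_breast_ca:.2f}")  # 0.75 (25% relative reduction)
print(f"all-cause mortality RR:     {rr_all_cause:.2f}")  # 0.99 (almost no visible effect)
```

With these made-up counts, the same trial looks impressive by one statistic and unimpressive by the other, which is the kind of divergence that separated the two mammography MAs.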
I became aware of Goodman's Annals of Internal Medicine editorial through the blog "Medical Metamusings," and I later learned of two other publications that would be good additions to the study list given to medical students to illustrate the limitations of MAs.

LeLorier et al. from Montreal published an article in the NEJM in 1997 in which they described discrepancies between MAs and subsequent large RCTs. An editorial by John Bailar followed, entitled "The Promise and Problems of Meta-Analysis" (NEJM, volume 337, pages 559-561, Aug 21, 1997).

A major point made by Bailar and LeLorier is that while a well-done MA can be helpful in presenting disparate studies on a common scale (typically odds ratios), problems may arise when all of the data are summarized into one odds ratio that is supposed to capture the entire issue but may actually oversimplify a complex question and lead to erroneous conclusions. Simple answers to complex problems are welcome, but life is so messy that they are often just wrong. Meta-analysis is not rocket science; its techniques and particulars are still being worked out to some degree, and reputable researchers may disagree over operational issues. There is still a lot to learn about the best way to do them.
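The mechanics behind that single summary odds ratio can be sketched in a few lines. The study results below are invented for illustration (they do not correspond to any real trials); the pooling method shown is the standard fixed-effect inverse-variance average of log odds ratios. Dropping one study, as a reviewer's inclusion criteria might, shifts the pooled estimate:

```python
import math

# Hypothetical (invented) study results: (log odds ratio, standard error).
studies = {
    "trial_A": (-0.35, 0.15),
    "trial_B": (-0.10, 0.10),
    "trial_C": ( 0.05, 0.20),
}

def pooled_or(results):
    """Fixed-effect (inverse-variance weighted) pooled odds ratio."""
    weights = {name: 1 / se**2 for name, (_, se) in results.items()}
    pooled_log = sum(weights[name] * log_or
                     for name, (log_or, _) in results.items()) / sum(weights.values())
    return math.exp(pooled_log)

all_three = pooled_or(studies)
without_A = pooled_or({k: v for k, v in studies.items() if k != "trial_A"})

print(f"pooled OR, all studies:      {all_three:.2f}")  # 0.87
print(f"pooled OR, trial_A excluded: {without_A:.2f}")  # 0.93
```

Even in this toy example, one inclusion decision moves the summary estimate noticeably, which is exactly the sensitivity Goodman described in the mammography dispute.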

2 comments:

Dr. Luke Van Tessel said...

Will you come out of retirement and teach at my med school?

Anonymous said...

LOL. Curiously, I discovered both yours and Magnificent Bastard's blogs on the same day a few days ago. Great blogs.

Retired doc, this is another good post on EBM tools. The Bailar-LeLorier exchange is well-known, and goes to the heart of what MA really means.

I have come to hold a rather jaundiced view of many MAs. My feeling is that summary estimates should never be made unless the treatments compared are essentially the same. For example, one of Joanna Wardlaw's early Cochrane MAs on stroke thrombolytics rather indiscriminately produced point estimates comparing rtPA, streptokinase, urokinase, etc., given at different times, because all were "thrombolytics". You end up not knowing what on earth the summary estimate is about - a characteristic of some nebulous "drug" called "thrombolytic"?

In the ER, you do not prescribe "thrombolytic 0.9 mg/kg IV, given as a 10% bolus followed by infusion over 1 hour". You prescribe alteplase, streptokinase, etc. Meta-analysis of drug classes is generally weird.

I think the role of MA in the future should increasingly be prospective individual patient data (IPD) analyses of large trials, each of which was prospectively designed to study the role of treatment X in different populations and subgroups.