Sunday, March 07, 2010

Just a few reasons why meta-analysis may sometimes not advance our knowledge or understanding

Dr. David Rind, in his blog "Evidence in Medicine," tackles an issue that has bothered me for some time. See here. That issue, put as a question, is: are meta-analyses (MAs) really helpful or not? I have questioned whether MAs should sit as high as they do on the generally accepted hierarchy of evidence based medicine. See here for my overly lengthy comments on why MAs should not share the pinnacle position on the evidence based medicine hierarchy with randomized clinical trials and why the hierarchy scheme itself is flawed.

In his commentary he lists some of the reasons why they may be neither helpful nor productive of new insights. He is not saying MAs are never helpful (and neither am I), and in fact he takes the position that a recent MA regarding statins and diabetes does provide useful information.

Why may MAs sometimes not be very useful? Here are some of the reasons Rind lists, along with a few others:

From Dr. Rind's commentary

1. Frequently... meta-analyses are either driven by the single large RCT that everyone would have cited anyway or, worse, a number of small, poorly-performed RCTs are combined with a moderate-sized, well-performed RCT and alter the results away from what was likely the best estimate of reality: the results of the well-performed RCT.

2. Sculpting the numbers from several small, poorly done RCTs is no guarantee of discovering the clinical truth and may well give credence to some bad ideas.
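To see the arithmetic behind points 1 and 2, here is a rough numerical sketch of standard fixed-effect (inverse-variance) pooling. The numbers are purely illustrative assumptions, not from any cited trial: one precise, well-performed RCT finds essentially no effect, while three small, biased trials report inflated effects, and the pooled estimate is dragged well away from the best single estimate.

```python
# Illustrative sketch of fixed-effect (inverse-variance) meta-analytic pooling.
# All effect sizes and variances below are made-up numbers for demonstration.

def pooled_estimate(effects, variances):
    # Each trial is weighted by the inverse of its variance: w_i = 1 / var_i.
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# One large, well-performed RCT: effect 0.02 with small variance (precise).
# Three small, poorly done RCTs: effect 0.30 each with large variance.
effects = [0.02, 0.30, 0.30, 0.30]
variances = [0.01, 0.04, 0.04, 0.04]

best_single = effects[0]                         # 0.02, the well-done trial
pooled = pooled_estimate(effects, variances)
print(round(pooled, 3))                          # prints 0.14
```

Even though each small trial individually carries only a quarter of the large trial's weight, together they pull the pooled effect from 0.02 up to 0.14 (a sevenfold shift), which is the mechanism Rind describes.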

3. MAs authored by folks with little or no subject-matter expertise cannot or do not put the issues in a context born of actual experience.

4. MAs that lump apples and peaches together. I am reminded of one MA that looked at all thrombolytics. You don't write an order for "a thrombolytic"; you write one for a specific agent.

5. Remembering that a meta-analysis is really an observational study in which the observed entities are trials, there is a real risk that the investigators might pre-screen the trials and, in post hoc fashion, devise inclusion or exclusion criteria that stack the deck in favor of a conclusion they already "knew was right".
