A tip of the blogging hat to DB's Medical Rants for this reference.
This article by Ian Shrier and others from McGill investigated the subjectivity of meta-analyses and concluded the following:
The interpretation of the results of systematic reviews with meta-analyses includes a subjective component that can lead to discordant conclusions that are independent of the methodology used to obtain or analyse the data. And things get even more mushy when the statistical experts differ as to what methodology should be used.
This paragraph from their discussion says it well:
Our results suggest that a systematic review with a meta-analysis must be viewed with the perspective that it represents one study conducted by specific investigators with a specific methodology. At each step of the methodology (defining the general criteria, search strategy, inclusion/exclusion criteria, data abstraction, and analysis), subjective decisions are required that could affect the validity of the study; the relative importance of each will likely depend on the topic of inquiry and the data acquired. Our study demonstrates that disagreements in the conclusions of systematic reviews with meta-analyses can also be due to subjective interpretations of the results and not only of the methodology. The inclusion-exclusion criteria often are determinative of the outcome. Meta-analyses can be thought of as observational studies in which the subjects are trials.
Of course meta-analyses involve subjective judgment calls and the various types of personal bias that the investigators bring to the table. How could it possibly be otherwise?
This gives me still another opportunity to reference the classic editorial in the Annals of Internal Medicine by Steve Goodman of Johns Hopkins, which I discussed at some length here. To sum it up, I can do no better than to quote Goodman:
Judgment determines what evidence is admissible and how strongly to weigh different forms of admissible evidence. When there is consensus on these judgments and the data are strong, an illusion is created that the evidence is speaking for itself and that the methods are objective. But this episode [the mammogram controversy mentioned above] should raise awareness that judgment cannot be excised from the process of evidence synthesis and that the variation of this judgment among experts generates uncertainty just as real as the probabilistic uncertainty of statistical calculations.
I never tire of repeating my rant that meta-analyses should never have been placed at the top of the evidence based medicine evidence hierarchy. And for that matter, biological plausibility should never have been relegated to the lower rungs of the ladder.
2 comments:
I never tire of repeating my rant that meta-analyses should never have been placed at the top of the evidence based medicine evidence hierarchy. And for that matter, biological plausibility should never have been relegated to the lower rungs of the ladder.
Couldn't agree more, with both points.
The importance of meta-analyses has been way overstated and overplayed.
Statistics is an important part of science, and meta-analysis has its place, but when it is used like this as the gold standard, particularly in fields where there is still serious disagreement over basics (such as patient selection and trial outcome assessment criteria), it does us all a disservice.
Better stop there, or I'll never shut up about it.
Ummm . . . how do you know there is something called "biological plausibility" without statistical support?
I suppose you mean that certain propositions are consistent with other propositions for which you have statistical support.
That's not empirical science. And, it shouldn't be medicine.