It was only a matter of time before someone published an article using a different meta-analytic technique from that used in the now famous rosiglitazone study by Nissen and failed to confirm Nissen's findings. George Diamond is the senior author of such an article in the Oct. 16, 2007 issue of the Annals of Internal Medicine in which, using techniques different from Nissen's, neither an increased nor a decreased risk from the use of rosiglitazone in diabetic patients was demonstrated. (A subscription is required for full text.)
No one should be surprised. The tools of the meta-analysis trade are arcane, and the average, or even well above average, physician reading a meta-analysis either has to accept the findings at face value or ignore the thing entirely, because he basically does not understand what was done and is in no position to meaningfully critique the techniques. If the issue is important and/or major economic forces have an interest, there will soon be what we have here, namely dueling statisticians. (I am not implying that the authors of the Annals article were motivated by those forces, and I would be surprised if they were.)
Is the technique used by Nissen correct, or is the method used by Diamond, or is that even a meaningful question? It may be the case that combining disparate, incomplete sets of data, often without patient-level data, can never answer certain questions, such as the one posed by the rosiglitazone analyses. It may well be that a randomized clinical trial is the only way to possibly generate a meaningful answer, which, at least in regard to the "rosi" question, is what Diamond et al. suggested.
I have ranted on and on about meta-analyses (MAs) before and have borrowed heavily from the powerfully instructive writings of Dr. Steve Goodman. Medical students should have the following sentence grafted into their frontal lobes: the outcome of a meta-analysis is a function of the studies that one decides to include and the summary statistic used, and various experts differ in regard to which statistic to use and how to decide which studies to include.
They are basically observational studies in which the "subjects" are studies or trials, and the "truth obtaining" value of observational studies is well recognized to be several notches lower than that of the randomized trial. Statistically combining two or three randomized trials does not always magically generate a higher degree of truth (i.e., correspondence to reality) than do the individual trials, although sometimes it might. The trick is figuring out when it does, and it is a trick I have not learned. When are we dealing with apples and oranges, and when are we merely seeing apples with minor and insignificant variations in color and consistency?
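To make the apples-and-oranges point concrete, here is a minimal sketch, using made-up effect estimates (not the rosiglitazone data), of how a fixed-effect pool and a DerSimonian-Laird random-effects pool of the very same studies can land in different places once the studies disagree with one another. Every number below is hypothetical and chosen only for illustration.

```python
# Made-up log odds ratios and within-study variances for three
# hypothetical trials. When the trials disagree (heterogeneity),
# the fixed-effect and random-effects summaries diverge.
import math

y = [0.5, -0.3, 0.8]    # hypothetical per-study log odds ratios
v = [0.04, 0.04, 0.25]  # hypothetical within-study variances

def fixed_effect(y, v):
    # inverse-variance weighted average
    w = [1 / vi for vi in v]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

def dersimonian_laird(y, v):
    w = [1 / vi for vi in v]
    ybar = fixed_effect(y, v)
    # Cochran's Q statistic for heterogeneity
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    # method-of-moments estimate of between-study variance
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
    w_star = [1 / (vi + tau2) for vi in v]
    return sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)

print(round(math.exp(fixed_effect(y, v)), 2))       # pooled OR, fixed (~1.16)
print(round(math.exp(dersimonian_laird(y, v)), 2))  # pooled OR, random (~1.29)
```

Same three trials, two defensible pooled answers; neither model tells you whether the trials should have been combined in the first place.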
There is an editorial in the same issue of the Annals, by Mulrow et al., that says in part:
The analyses by GlaxoSmithKline, Nissen and Wolski and Diamond and colleagues and the FDA teach us that summarizing data about scarce adverse events is difficult. Summary estimates, confidence bounds and statistical significance can vary depending on analysis techniques.
This means that well-meaning, honest investigators can reach completely opposite conclusions depending on how they decide to analyze the data, and there can be honest disagreement about which technique to use.
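The "scarce adverse events" problem the editorial names can be illustrated with a toy calculation. The sketch below (my own illustration, with invented 2x2 tables, not the actual trial data) pools the same sparse event counts two standard ways: the Peto one-step method, which Nissen and Wolski used, and the Mantel-Haenszel method. The two summary odds ratios come out noticeably different.

```python
# Two hypothetical trials with rare events, each as
# (events_rx, n_rx, events_ctl, n_ctl). Invented numbers.
import math

trials = [(3, 100, 1, 100),
          (2, 200, 0, 100)]

def mantel_haenszel_or(trials):
    # MH pooled OR = sum(a*d/N) / sum(b*c/N) over the 2x2 tables
    num = den = 0.0
    for a, n1, c, n2 in trials:
        b, d, N = n1 - a, n2 - c, n1 + n2
        num += a * d / N
        den += b * c / N
    return num / den

def peto_or(trials):
    # Peto pooled log OR = sum(O - E) / sum(V)
    o_minus_e = v = 0.0
    for a, n1, c, n2 in trials:
        b, d, N = n1 - a, n2 - c, n1 + n2
        e = (a + c) * n1 / N  # events expected in the rx arm
        o_minus_e += a - e
        v += (a + c) * (b + d) * n1 * n2 / (N**2 * (N - 1))
    return math.exp(o_minus_e / v)

print(round(mantel_haenszel_or(trials), 2))  # ~4.44
print(round(peto_or(trials), 2))             # ~3.21
```

Same two tables, two honest answers, and with events this scarce the confidence bounds would be wide enough that "significant" versus "not significant" can flip with the method, which is essentially what happened between the Nissen and Diamond analyses.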
But in regard to rosi, I believe we cannot get the cat back in the bag. With what has been published and magnified in the news and on the web, to prescribe rosi to a new type 2 diabetic would be to pin a large target on your back with a sign that says "sue me, please," even if we really are not sure whether rosi increases the risk of cardiovascular events, and we may never "really know." Sometimes issues are just dropped and we move on to something else.