Wednesday, January 17, 2007

Observational data are important, but how do we analyze them?

A recent JAMA article should make it clear that the statistical techniques on which a study's result may turn are definitely not your father's t tests, p values, and simple regression equations anymore.

Here is a summary of the JAMA article by Stukel et al.

My simplistic "understanding" of all of this follows.

Randomized clinical trials (RCTs) are the best way to measure treatment effects because they reduce or ideally eliminate selection bias, making the treatment and control groups equal in regard to all features. If done properly, the study should be protected from known and unknown confounders, eliminating the need for statistical manipulation.
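
To see why randomization does that statistical work for us, here is a toy simulation (my own sketch in Python, not anything from the article; the "severity" variable and the assignment model are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A hypothetical prognostic confounder, e.g. baseline severity.
severity = rng.normal(size=n)

# Randomized assignment: a coin flip, independent of severity.
rct_arm = rng.integers(0, 2, size=n)

# "Physician's choice": sicker patients are less likely to get the treatment.
p_treat = 1 / (1 + np.exp(severity))  # probability falls as severity rises
obs_arm = rng.binomial(1, p_treat)

for name, arm in [("RCT", rct_arm), ("Observational", obs_arm)]:
    diff = severity[arm == 1].mean() - severity[arm == 0].mean()
    print(f"{name}: mean severity difference between arms = {diff:+.3f}")
```

The randomized arms come out balanced on severity even though nothing was adjusted for, and the same would hold for any confounder we never thought to measure. The "physician's choice" arms do not balance, and that imbalance is exactly the selection bias the rest of this post worries about.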

There is an important difference between the outcome of an RCT, wherein patients self-select to be randomized and are often excluded by criteria such as age, sex, or other medical conditions, and the outcome that occurs when the treatment is applied to patients in the more real, non-RCT world of medical practice. This is often called the efficacy (RCT results) versus effectiveness (real-world results) gap. Observational studies relate to the second.

Observational studies, though plagued by selection bias, can be valuable: they provide data when there are no relevant RCTs, and they are capable of finding deleterious treatment effects over the longer run and in larger numbers of patients. RCTs are not the be-all and end-all in regard to side effects; recent examples are the cardiac outcomes of selective NSAIDs and the long-term occlusion of drug-eluting stents.

So we need observational data, but statisticians have to deal with selection bias. Stukel and co-workers compared various statistical methods for addressing this selection bias, using Medicare data on the use and outcomes of cardiac catheterization following myocardial infarction.

They compared something called "propensity score methods" with "instrumental variable methods." As explained in an accompanying editorial by the Drs. D'Agostino, Stukel's group maintained that the instrumental variable methods were better because they produced an answer closer to that of RCTs and because they eliminated bias due to unobserved variables.
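
To make the distinction concrete, here is a toy Python sketch of both approaches (again my own illustration with made-up variables and coefficients, not the authors' data or methods; the binary instrument is meant to stand in for something like regional procedure rates, and I use simple inverse-probability weighting and a Wald estimator rather than whatever the paper actually did):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical data: x is an OBSERVED covariate, u is an UNMEASURED one.
x = rng.normal(size=n)
u = rng.normal(size=n)

# Instrument z (e.g. "high regional cath rate"): influences who gets
# treated but has no direct effect on the outcome.
z = rng.integers(0, 2, size=n)

# Treatment depends on the instrument and on BOTH confounders.
t = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * z + 0.8 * x + 0.8 * u - 0.4))))

# Outcome: the true treatment effect is exactly 1.0.
y = 1.0 * t + 1.0 * x + 1.0 * u + rng.normal(size=n)

# 1) Naive comparison: biased by both x and u.
naive = y[t == 1].mean() - y[t == 0].mean()

# 2) Propensity score via inverse-probability weighting, using x only,
#    since u is by definition not in the data set.
e = LogisticRegression().fit(x[:, None], t).predict_proba(x[:, None])[:, 1]
w1, w0 = t / e, (1 - t) / (1 - e)
ipw = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()

# 3) Instrumental variable (Wald) estimator.
iv = (y[z == 1].mean() - y[z == 0].mean()) / (t[z == 1].mean() - t[z == 0].mean())

print(f"true 1.00 | naive {naive:.2f} | propensity {ipw:.2f} | IV {iv:.2f}")
```

In this idealized setup the propensity estimate stays biased because the unmeasured confounder u is, by construction, unavailable to it, while the instrument, being randomly assigned and affecting the outcome only through treatment, recovers the truth. Real instruments rarely satisfy those assumptions so cleanly, which, as I read it, is part of what the editorial is arguing about.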

The editorialists seem to cast doubt on that conclusion with statistical arguments that quickly escalate beyond my level of understanding, but they did share my concern with the authors' comment that "instrumental variable analyses...are more suited to answer policy questions than to provide insight into a specific clinical question for a specific patient." The D'Agostinos say "treatment effects should deal with effects relevant to patients." Of course, what else could it all be about? How can policy decisions not affect patient management decisions?

If you juxtapose the article with the editorial, you quickly see that experts in the field of statistics differ in major ways about the best way(s) to analyze observational data so as to mitigate the potentially misleading effect of selection bias, but both groups agree that the choice of analytic method can have major effects on the conclusion as to what the data are thought to demonstrate. So how you analyze the data matters, but statisticians differ as to how to do it. The devil seems to be in the details of the analysis, and the intricacies of those details seem increasingly to be beyond the reach of many practicing physicians.

I seem to stumble across more and more studies whose conclusions vary with the choice of statistical technique, and the discussions regarding the choice of techniques get more and more abstruse.
