A blog commentary by Dr. David Rind discusses the issue of composite end points in clinical trials, and in particular the CREST trial, which compared carotid endarterectomy with carotid stenting. See here.
The composite end point in CREST was periprocedural stroke, myocardial infarction, or death, or ipsilateral stroke occurring within four years after the procedure. Since both procedures are really done to decrease the risk of stroke in a patient with carotid stenosis, why not just compare the rate of stroke occurring in the two treatment groups over a several-year period following the procedure? That would appear to be the key outcome of interest. Well, the more invasive endarterectomy might be more likely to cause operative or postoperative problems than the catheter-based treatment, so some measure of that needs to be included in the accounting.
Basically, composite outcomes are used because the difference between two competing therapies is thought to be so small that a very large number of patients would be needed to give a trial sufficient power to detect a difference between the two treatments. Combining several end points raises the overall event rate, which shrinks the required sample size, as the rough sketch below illustrates. This has been particularly evident in the treatment of acute myocardial infarction: as treatments have continued to decrease the mortality of acute MI, the incremental benefit of each new therapy becomes smaller.
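To see the arithmetic, here is a minimal sketch (mine, not from the post) using the standard normal-approximation sample size formula for comparing two proportions. The event rates are hypothetical, chosen only to show the scale of the effect.

```python
# Rough sketch of why composites shrink trials: approximate sample size
# per arm to detect a difference between two event rates, at 80% power
# and two-sided alpha = 0.05. All event rates below are hypothetical.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for p1 vs p2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return z**2 * variance / (p1 - p2)**2

# Death alone: 5% vs 4% (a 20% relative reduction) -> a huge trial
print(round(n_per_arm(0.05, 0.04)))   # ~6700 patients per arm

# Composite of death/MI/stroke: 15% vs 12% (the same 20% relative
# reduction, but three times as many events) -> roughly a third the size
print(round(n_per_arm(0.15, 0.12)))   # ~2000 patients per arm
```

The relative risk reduction is identical in both cases; only the background event rate changes, and with it the required enrollment.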
So what could be wrong with the composite approach?
CREST illustrates what could be wrong. Here the stenting group had fewer myocardial infarctions but more strokes. So the trade-off appears to be more strokes with stents and more MIs with surgery. This could be interpreted to mean that the two techniques are essentially equivalent, differing only in their adverse effects, but are the two adverse effects equivalent? Most folks would say no, since surviving a stroke can be much more devastating and life altering than surviving a heart attack.
Rind put it this way:
Composites can quickly get you into trouble, though, if you combine events of very different importance to patients. Sometimes this appears to have been done with the intention of obscuring the real outcome of a trial or to make a therapy look far better than it really is.
A recent commentary in JAMA also discussed the composite outcome issue and warned readers to beware of a "bait and switch" type of phenomenon. See here. The following is the authors' final paragraph.
Readers of randomized trial reports must understand both the reasons for and pitfalls of choosing to combine clinical outcomes. Examination of the relative importance, frequency, and consistency of effect size across the components of a composite outcome are important steps in the interpretation of information derived from trials. But it is equally important to be aware of a potential bait and switch strategy. In some cases, readers and authors of reports of randomized trials may wish to weight each of the outcomes by an importance factor, similar to the way quality of life is measured.10 In other cases, they may wish to point out that even though a randomized trial was designed to detect a difference in the composite outcome (because the vast majority of the effect is on one component, typically the least severe), the trial has mainly showed the effect on surrogate outcomes and not definitive ones.
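The weighting idea in that paragraph is easy to make concrete. Below is a minimal sketch with entirely hypothetical event counts and importance weights (none of these numbers come from CREST or the JAMA piece): each component of the composite is multiplied by a patient-importance weight before the arms are compared, so a stroke counts for more than a periprocedural MI.

```python
# Minimal sketch of an importance-weighted composite. Event counts and
# weights are hypothetical, for illustration only; a real analysis would
# derive weights from patient preference or utility data.

# Hypothetical events per 1000 patients in each arm
events = {
    "stroke": {"stenting": 41, "surgery": 23},
    "mi":     {"stenting": 11, "surgery": 23},
}

# Hypothetical importance weights: stroke judged far worse than MI
weights = {"stroke": 1.0, "mi": 0.4}

for arm in ("stenting", "surgery"):
    raw = sum(events[e][arm] for e in events)
    weighted = sum(weights[e] * events[e][arm] for e in events)
    print(f"{arm}: raw composite = {raw}, weighted composite = {weighted:.1f}")

# Raw composites look similar (52 vs 46), but weighting by what matters
# to patients separates the arms (45.4 vs 32.2).
```

The point is not the particular numbers but the reversal of the impression: an unweighted composite can make two treatments look nearly interchangeable while an importance-weighted one does not.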