Thursday, December 19, 2013

So why don't we really know about breast cancer screening? Is it really turtles all the way down?

The more I read, and the more I think about what I read, and the more things seem to change but really stay the same, the more I develop a renewed sense of just how damn hard it is to figure out what to do, or what advice to give, in regard to cancer screening as well as other so-called preventive measures.

The verbal scuffles following the USPSTF recommendations on mammograms settled little. The best commentary I have read on this matter is from the amazingly prolific Dr. Roy Poses and is found in this paragraph from his recent blog post:

One would think that a big point of discussion about breast cancer screening would be why after eight trials enrolling a total of about 350,000 patients reported over 20 years we still cannot answer the big clinical questions. A related point for discussion in the US is why only one, and the earliest trial was conducted here. If we here in the US think breast cancer screening is such a major concern (and we should think so), why have we been unable to mount a single important trial of it since the HIP trial conducted more than 30 years ago?

The big clinical questions of which he spoke were:
does screening mammography improve survival?
does it improve quality of life?
do the benefits outweigh the risks and harms?

Yeah, so how come we don't know after all those studies and trials, analyses and meta-analyses?

Part of the evidence that the USPSTF panel used in formulating its recommendations came from a recent meta-analysis. I always cringe a bit when a meta-analysis seems to play an important role in a decision. I am reminded of a commentary by Dr. Steve Goodman regarding what seemed to be dueling meta-analyses on this very topic of breast cancer screening.

This is what I said about that before with slight editorial reconfiguration:

[An] important Annals of Internal Medicine article and a related editorial by Dr. Steve Goodman of Johns Hopkins made it clear to me that meta-analyses (MAs) are basically themselves observational studies in which the studies are the subjects. He discussed two major MAs on the value of mammograms, one of which concluded they were effective and valuable while the other concluded the opposite. The major difference between the two was their choice of studies to include and to exclude. Both sets of authors maintained that their exclusion criteria were valid, yet the criteria were quite different and led to opposite conclusions. Quoting Goodman:

... this controversy shows that the justification for why studies are included or excluded from the evidence base can rest on competing claims of methodological authority that look little different from the traditional claims of medical authority that proponents of evidence-based medicine have criticized.
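Goodman's point can be made concrete with a toy calculation. The sketch below is purely illustrative: all trial names, effect sizes, and standard errors are invented, and it uses ordinary fixed-effect inverse-variance pooling of log relative risks. Two analysts applying different, individually defensible exclusion criteria to the same body of trials reach opposite pooled conclusions.

```python
import math

# Hypothetical trial-level results: name -> (log relative risk, standard error).
# A log RR below 0 favors screening; above 0 suggests no benefit.
trials = {
    "Trial A": (-0.30, 0.10),
    "Trial B": (-0.25, 0.12),
    "Trial C": (0.05, 0.08),
    "Trial D": (0.10, 0.09),
    "Trial E": (-0.05, 0.15),
}

def pooled_log_rr(names):
    """Fixed-effect inverse-variance pooled log relative risk."""
    weights = {n: 1.0 / trials[n][1] ** 2 for n in names}
    total = sum(weights.values())
    return sum(weights[n] * trials[n][0] for n in names) / total

# Analyst 1 excludes Trials C and D on methodological grounds.
rr1 = math.exp(pooled_log_rr(["Trial A", "Trial B", "Trial E"]))
# Analyst 2 excludes Trials A and B instead, for different reasons.
rr2 = math.exp(pooled_log_rr(["Trial C", "Trial D", "Trial E"]))

print(f"Analyst 1 pooled RR: {rr1:.2f}")  # ~0.79: screening looks beneficial
print(f"Analyst 2 pooled RR: {rr2:.2f}")  # ~1.06: screening looks futile
```

Nothing in the arithmetic is wrong for either analyst; the opposite answers come entirely from the judgment call about which studies count as admissible evidence, which is exactly Goodman's point.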

I have no doubt that the panelists did their work according to generally accepted evidence-based medicine "rules of the game". But along the way, both within and apart from the meta-analyses used in their calculus, there are many gaps in which subjectivity and, yes, even personal bias come upon the scene. When human beings approach problems, gather evidence, and analyze it, it is not evidence turtles all the way down; judgment turtles crawl in. I quote Goodman again:

"Controversies like this one about mammography are likely to appear more frequently as we move toward reassessing the evidence base after each new study appears (15). Such reassessments will guarantee that we are often in a gray zone between moderate and strong evidence, where scientific judgment can make a critical difference. We must learn to navigate within this gray zone better. Judgment determines what evidence is admissible and how strongly to weigh different forms of admissible evidence. When there is consensus on these judgments and the data are strong, an illusion is created that the evidence is speaking for itself and that the methods are objective. But this episode should raise awareness that judgment cannot be excised from the process of evidence synthesis and that the variation of this judgment among experts generates uncertainty just as real as the probabilistic uncertainty of statistical calculations."

I am even more cynical: there may not be any way to navigate this gray zone better. There are many more trade-offs than there are solutions.
