
Tuesday, February 14, 2006

How low can a relative risk be and still mean anything?

A death blow, or near death blow, to the use of Vitamin E in the prevention of whatever it was supposed to prevent was dealt by an article claiming that Vitamin E increased the risk of death. The relative risk reported in that meta-analysis, by Miller et al., was 1.01.

Should a relative risk (RR) that tiny convince anyone of anything? How large must a RR be to have clout or significant evidentiary value?

That question was posed by the EBM folks at McMaster to a venerable guru of epidemiology, Sir Richard Doll (see p. 162 of the book Evidence-Based Medicine, Sackett DL et al., 2nd ed., Churchill Livingstone, 2000). His reply was cautious. He is quoted as saying, "It's almost impossible to set a level of risk which is so high that the findings in a well-conducted epidemiological study would necessarily exclude confounding." He went on to say that if the RR were 20, that would be almost sufficient to indicate causality.

Sackett and coworkers went on to indicate that a RR of greater than 3 was "convincing."

A relative risk of greater than 2 is being used by courts to reach the threshold of "more likely than not," which is the current level of proof in most tort cases. (I do not know what RR would be needed to reach the level of "clear and convincing evidence.") Weak associations, i.e., RRs barely above 1, are more likely to be explained by undetected biases.
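To see how fragile a relative risk near 1 can be, here is a minimal sketch of the arithmetic: computing a RR and its approximate 95% confidence interval from a 2x2 table. The numbers below are hypothetical round figures chosen for illustration, not the data from the Miller meta-analysis.

```python
import math

def relative_risk(exposed_events, exposed_total, control_events, control_total):
    """Relative risk with an approximate 95% CI (log-normal method)."""
    p_exposed = exposed_events / exposed_total
    p_control = control_events / control_total
    rr = p_exposed / p_control
    # Standard error of log(RR) from the usual large-sample formula
    se = math.sqrt(
        1 / exposed_events - 1 / exposed_total
        + 1 / control_events - 1 / control_total
    )
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Hypothetical: 505 deaths among 10,000 treated vs. 500 among 10,000 controls
rr, lower, upper = relative_risk(505, 10_000, 500, 10_000)
print(f"RR = {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```

With these made-up numbers the point estimate is 1.01, but the confidence interval straddles 1, so even before worrying about confounding, the data are compatible with no effect at all, or a modest effect in either direction.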

We are bombarded by articles that report relative risks between 1 and 2, and some of these, such as the Vitamin E meta-analysis, seem to become tipping points in the discussion about a given medical intervention. Before medical students get too carried away by articles such as the Vit E meta-analysis, they should take a moment or two to read the letters to the editor that meta-analyses almost always seem to generate. (When I read these, it is not that I understand the often obscure arguments raised, but I do see that there is great disagreement among the experts about how to analyze and interpret the data.) When they see how often, and how vehemently, the statistical experts disagree, they will be less influenced by the latest meta-analysis headline of the day.
