First, a great quote from Michael Thun, VP of Epidemiology and Surveillance Research at the American Cancer Society:
With epidemiology you can tell a little thing from a big thing. What's very hard to do is to tell a little thing from nothing at all.
Gary Taubes, in his widely cited article "Epidemiology Faces Its Limits" (Science, Vol. 269, p. 164, July 1995), followed that quote with this comment:
...journals today are full of studies suggesting that a little risk is not nothing at all.
There is no basic law of science or statistics or epidemiology or metaphysics that defines how large a relative risk or an odds ratio has to be before physicians and patients need to be concerned. Here we are talking about the interpretation, or weighing, of the evidence that accrues in the quest for an evidence base for medicine.
Committees that author guidelines typically outline for the reader what their evidentiary hierarchy will be, usually with randomized clinical trials at the top. But what are the rules for judging individual studies, particularly the observational ones, e.g. case-control studies and cohort studies? How big should a relative risk (RR), or an odds ratio (OR) for a case-control study, be before they consider that study worthy of adding to the pile of evidence worth considering?
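For readers who have not worked through the arithmetic, here is a minimal sketch of how an RR and an OR are each computed from a 2x2 table. The counts are hypothetical, chosen purely for illustration:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Relative risk for a cohort study: the ratio of the incidence of
    disease in the exposed group to that in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

def odds_ratio(a, b, c, d):
    """Odds ratio for a case-control study, from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# Hypothetical cohort: 30 of 1000 exposed vs. 10 of 1000 unexposed
# subjects develop the disease.
rr = relative_risk(30, 1000, 10, 1000)   # -> 3.0

# Hypothetical case-control table with the same underlying counts.
orr = odds_ratio(30, 970, 10, 990)       # approximately 3.06
```

When the disease is rare, as in this example, the odds ratio closely approximates the relative risk, which is why ORs from case-control studies are often read as if they were RRs.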
To get a sense of what the professionals do in that regard, we could survey experts and see what rules of thumb they use to decide when an observational study is worth worrying about or suitable for publication. What we learn is that it may not be just the size of the RR but the overall context.
Robert Temple of the FDA is quoted by Taubes as saying:
My basic rule is if the relative risk isn't at least 3 or 4, forget it.
However, Dr. John Bailar, of McGill, believes there is no magic dividing line.
If it's a 1.5 relative risk and it's only one study and even a very good one, you scratch your chin and say maybe.
It is not the size of the RR alone (though we have to agree that at some point low is too low, say a relative risk of 1.03); the results of other studies addressing the same issue, and concerns about scientific plausibility in general and biological plausibility in particular, have to be factored in. Even though the size of the RR or OR is not necessarily determinative, it is easy to cite a number of experts in the field who favor the notion that an RR less than 2 should be, if not dismissed, at least looked at with a very skeptical eye.
While the size of the relative risk is not the end of the analysis, it is the case that small RRs are more likely to be generated by undetected systematic error(s) than are large ones. An RR of 1.2 should be much more suspect than an RR of 3.2.
Observational studies can be considered coarse-grained instruments, with bias and confounding the basis for the coarseness: hidden variables that can lead to an association that is not real. While calculating a confidence interval takes random variation in the data into account, bias and confounding lie outside its reach.
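To make that last point concrete, here is a sketch of the standard 95% confidence interval for a relative risk, computed on the log scale. The data are hypothetical; the point is that this interval quantifies random variation only, and says nothing about bias or confounding:

```python
import math

def rr_confidence_interval(a, n1, c, n2, z=1.96):
    """Approximate 95% confidence interval for a relative risk,
    computed on the log scale (a standard large-sample method).
    a/n1: cases/total in the exposed group; c/n2: in the unexposed group.
    The interval reflects random variation only; systematic errors
    such as bias and confounding are not captured by it."""
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# Hypothetical study: 30/1000 exposed vs. 25/1000 unexposed cases.
rr, lower, upper = rr_confidence_interval(30, 1000, 25, 1000)
# rr is 1.2, and the interval spans 1.0
```

Note that in this made-up example the interval for an RR of 1.2 comfortably includes 1.0: even before worrying about bias, such a result is statistically compatible with no effect at all, which is exactly the "little thing versus nothing" problem Thun describes.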
Sophisticated (and, to many medical readers, mysterious) statistical methods, such as various types of mathematical modeling, may serve to eliminate or minimize some of the systematic errors, but in the final analysis the reader or researcher still does not know to what extent biases remain uncontrolled. I have yet to read the discussion section of an observational study in which the authors did not believe they had "controlled" for sources of bias and confounding, even when that study contradicted an earlier one whose authors also believed their methods likely excluded bias.
I believe that the best an ordinary medical reader (one whose wall is not decorated with an advanced degree in epidemiology or statistics) can do is:
1. Be very skeptical of relative risks less than 2, and particularly less than 1.5.
2. Look in the article's discussion section for citations of confirming or contradictory studies.
3. Consider whether the findings fit some concept of reasonable biological plausibility.