A few years ago I posted a piece about the effects of antioxidants on GI tract cancers. There was a reported small increase in relative risk (1.06) if one used a fixed effect statistical model, but no increase if one used a random effects model.
Here the truth seemed to turn on the choice of statistical method. How robust can the truth be when it depends on choices made by statisticians, especially when, as in the case quoted above, the experts themselves disagree on that choice? For a critical reader to decide for himself in a rational way, it seems he would have to be fairly conversant with the vagaries of regression analysis and analysis of variance, which are the areas where these two competing statistical models live.
I have read that the fixed effect model is in general the one more commonly used, and it is the one with lower standard errors and hence more power. I have also read that, in regard to meta-analysis, the fixed effect model is appropriate when the data appear homogeneous and the random effects model when the data are heterogeneous. Then we run up against tests for heterogeneity, and no one should be surprised if statisticians differ as to the correct way to do that. It seems to me that if I were writing a paper and wanted it published, and since positive studies are more likely to see the light of day in print, I would want to use the more powerful fixed effect model.
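To make the distinction concrete, here is a minimal sketch of the two pooling methods, using made-up log relative risks and standard errors for five hypothetical studies (the numbers are illustrative only, not the data from the antioxidant meta-analysis). The fixed effect estimate uses inverse-variance weights; the random effects version shown is the common DerSimonian-Laird approach, which adds an estimated between-study variance (derived from Cochran's Q heterogeneity statistic) to each study's variance, widening the standard error.

```python
import math

# Hypothetical data: log relative risks and their standard errors
# from five studies (made-up numbers for illustration only).
log_rr = [0.10, -0.02, 0.15, 0.05, 0.20]
se = [0.08, 0.10, 0.12, 0.09, 0.15]

def fixed_effect(effects, ses):
    """Inverse-variance (fixed effect) pooled estimate and its SE."""
    w = [1 / s**2 for s in ses]
    est = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return est, math.sqrt(1 / sum(w))

def random_effects(effects, ses):
    """DerSimonian-Laird random effects pooled estimate and its SE."""
    w = [1 / s**2 for s in ses]
    fe, _ = fixed_effect(effects, ses)
    # Cochran's Q statistic measures heterogeneity among the studies.
    q = sum(wi * (e - fe) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # estimated between-study variance
    # Each study's weight now also absorbs the between-study variance.
    w_star = [1 / (s**2 + tau2) for s in ses]
    est = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return est, math.sqrt(1 / sum(w_star))

fe_est, fe_se = fixed_effect(log_rr, se)
re_est, re_se = random_effects(log_rr, se)
# The random effects SE is never smaller than the fixed effect SE,
# which is why the fixed effect model has more power.
```

Running this on any data set shows the point made above: because the random effects weights include the extra between-study variance term, its pooled standard error is at least as large as the fixed effect one, so the fixed effect analysis is the one more likely to reach statistical significance.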
It seems the deeper you plunge into the analysis of the data, the less certain you may become about the robustness of the conclusion. But truth does not really turn on the choice of statistical technique for one meta-analysis. The results of any one study, even if that study is a meta-analysis, should not decide the issue. Prior context and biological plausibility must be taken into account.