The term "evidence based paralysis" came to my attention in the Letters to the Editor section of the Aug 14/28, 2006 Archives of Internal Medicine. Drs. David Ziemer and Lawrence S. Phillips of Emory used it in their reply to a comment on an article concerning whether tight control of type 2 diabetic patients is "evidence based". They also used the term "RCTomyopia" to refer to an "unwillingness to take action without incontrovertible proof from controlled trials".
RCTs and meta-analyses are considered the most reliable tools in the hierarchy of EBM. (For some time I have questioned whether meta-analyses belong in that position.) Because of this, some make the mistake of jumping to the position that if there are no RCTs, there is no evidence based reason for action.
Of course, RCTs are good, but they are not good for everything. They cannot answer all the questions physicians need or want to have answered.
They are great for determining the efficacy of discrete interventions in carefully defined, relatively homogeneous conditions. They are clearly less good for determining harm, for determining how to diagnose conditions, and for determining prognosis. In regard to harm, RCTs can recognize relatively common adverse effects that occur fairly soon after a medication is started, but they are less useful in detecting less common and/or delayed side effects.
However, when the questions to be answered arise in and from more complex patient populations, in which the patient characteristics and interventions are more complex and heterogeneous, it becomes much more difficult to separate out causality from bias, confounding, and even random variation. In fact, sometimes it is more than difficult: even after the trial has been done and analyzed, we still do not know the answer we were searching for. A case in point is the recent RCT of back surgery versus conservative management for herniated disc published in JAMA. In this instance, there was so much crossover between the two groups as randomized that the "intent to treat" analysis was not considered valid, and the "as treated" analysis suffers from the very real risk of selection bias, which is why we have randomization in the first place. One is left with the disturbing thought that it may not be possible to settle this clinical issue with a randomized trial as long as we deal with patients who are free to do what they think is best for them.
It is important to know the limitations of RCTs and to recognize that the very nature of certain clinical problems may be too complex for RCTs to be of use. Further, RCTs can never be done for all the problems they are suited to analyze. There are too many questions and often too little money and interest, and with new drugs, procedures, and testing methods always evolving, or at least changing, older RCTs lose relevance while the newer treatments may be years away from RCT results.
In treating patients with serious illnesses, often we have to act, not just sit there. We have to go with the evidence we have, not the evidence we would like to have.
Ziemer and Phillips had this to say in their letter to the editor:
We believe that responsible physicians and patients should make decisions based on the best available evidence-including cell and animal studies, observational studies and controlled trials if available-and the strengths and weaknesses of the findings with each approach should be given due consideration.
When discussing this general topic, I cannot resist throwing in the parachute-gravitational challenge comment that I quoted last year.