In this month's Archives of Internal Medicine, an article describes what has so far not been accomplished by the VA's highly touted computerized interventions in medication delivery.
The authors (Nebeker JR et al, "High Rates of Adverse Drug Events in a Highly Computerized Hospital," Arch Intern Med, vol. 165, May 23, 2005, pp. 1111-1116) cite literature indicating that specific computerized interventions can reduce medication errors and, it is hoped, adverse drug events (ADEs). The VA system has earned praise for its efforts to reduce medication errors through computerized physician order entry (CPOE), bar-coded medication delivery, the EMR, automated drug-interaction checking, and allergy tracking and alerting. These authors deserve praise for looking at how well the emperor is clothed. They studied ADEs over a 20-week period at the VA Hospital in Salt Lake City.
The authors address the question "Do CPOE and related systems reduce ADEs?" Since they did not compare against a control period (i.e., the ADE rate before these various systems were introduced), they cannot determine conclusively from their data whether the systems bring about a reduction in the ADE rate.
Interestingly, they report higher rates than other studies do (the authors note that their incidence density of 70 ADEs per 1000 patient-days is 5 to 19 times higher than generally reported previously). The authors doubt their rate is really higher; rather, they believe they found more cases because of their diligence in case finding, their use of clinical pharmacists to identify cases, and their legible and accessible medical records. Understandably, they reject the notion that their systems led to an increase in ADEs.
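For readers unfamiliar with the incidence-density measure the authors use, a toy calculation may help. The event and patient-day counts below are hypothetical, chosen only to reproduce the reported 70-per-1000-patient-days figure; they are not the study's actual numbers.

```python
# Incidence density = number of events / total person-time at risk,
# conventionally scaled to a round denominator (here, 1000 patient-days).
def incidence_density(events, patient_days, per=1000):
    return events / patient_days * per

# Hypothetical figures: 483 ADEs over 6900 patient-days of observation
# yields the 70-per-1000-patient-days rate the authors report.
rate = incidence_density(483, 6900)
print(round(rate))  # 70
```

The point of the scaling is comparability: two hospitals with different census sizes and observation windows can be compared on the same per-1000-patient-day footing.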
On a positive note, they found that their systems had apparently all but eliminated ADEs due to errors in transcription and in medication administration (the step addressed by bar coding). The three most common events were constipation, low potassium, and hypotension.
The authors describe the ADE rate as high and the ADEs as serious, and they recommend improvements in computerized interventions. For example, they suggest that CPOE could propose an order for supplemental potassium, and for monitoring of potassium and creatinine, when loop diuretics are ordered. (I cannot help but wonder why house officers or their attendings don't already do this. Isn't that what you do on rounds? I am more concerned that house officers don't already know some pretty basic material than I am that we have not yet perfected a computer system to relieve doctors of the burden of knowing what they are doing.) Generally, they recommend more computerized decision support within such systems, and they warn purchasers of generic or off-the-shelf CPOE and bar-code systems against expecting dramatic reductions in ADE rates. The entire article is worth reading for its detail, both regarding the data found and the methods used to look for errors and adverse reactions, and as a cautionary tale for those who think adverse drug events will be banished by buying a CPOE system and holding a few training sessions.
Monday, May 30, 2005
Big Pharma's bad news shared by at least one generic manufacturer
Able Laboratories, a manufacturer of more than 40 generic medications, has stopped production and recalled its products. Its history of problems with the FDA is outlined here.
Its CEO has quit and the company's president has taken over. Apparently the company has had problems since 1992: it has been under surveillance by the FDA, had previously been fined, and some of its officials had been forced to resign earlier. Manufacturing problems and quality control issues have been on the table for quite a while. This is the kind of thing that must make patients feel angry and betrayed, to the extent they were coerced by their insurance plan's pharmacy management company into accepting generic drugs.
Interesting reading is a 1989 FDA report on several generic drug manufacturers' problems, which documents some egregious behavior by several companies (including bribing an FDA chemist) but finds no violations for Able Labs, their problems apparently surfacing a while later.
Friday, May 27, 2005
Statins and colon cancer: still another case-control study will not answer the question
Here is a case-control study that I really would like to believe, but I am skeptical. In the May 26, 2005 issue of the NEJM, a 47% reduction in colon cancer is reported by JN Poynter et al from Israel. I have been taking Pravachol for 5+ years and I have a strong family history of colon cancer, so I would really like it if my pill were multi-tasking, prevention-wise. The related editorial gives us some context. There have been other case-control studies that (no one should be surprised) have given somewhat conflicting results. (The case-control studies regarding the effect of hormone replacement therapy on coronary artery events were also somewhat conflicting, and we know how that worked out.)
We are also told that secondary analyses of three large meta-analyses of prospective statin trials designed to detect reductions in coronary events failed to show any colon cancer risk reduction. The editorial then comments that those studies might not have been adequately powered to detect a difference in cancer risk. A Canadian case-control study showed a 28% reduction in colon cancer risk (surprisingly, in only 2.7 years of follow-up, causing one to wonder if some patients already had early colon cancer when the drugs were started), while another study showed an increase in colon as well as prostate and bladder cancer.
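The power point can be made concrete with a rough two-proportion calculation. All numbers below are hypothetical (they are not from the trials in question): even a study large enough to detect a coronary-event difference can be badly underpowered for a rare outcome like colon cancer.

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se0 = sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)            # SE under H0
    se1 = sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)  # SE under H1
    z = (abs(p1 - p2) - z_a * se0) / se1
    return NormalDist().cdf(z)

# Hypothetical: 10,000 patients per arm, colon cancer risk of 0.5%
# vs 0.35% (a 30% relative reduction). Power comes out well under
# the conventional 80%, so a null result is uninformative.
print(round(power_two_proportions(0.005, 0.0035, 10_000), 2))
```

Under these made-up assumptions the power is roughly a third, which is the editorial's point: "no effect seen" in trials designed around coronary endpoints says little about cancer risk.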
This exemplifies the sort of thing you see all the time when you look at a set of articles that use coarse-grained instruments (such as case-control studies and meta-analyses) to determine associations and causal links. That is what you have here: some studies show decreased risk, some increased risk, and some no difference. The authors of the article say in their introduction that "to clarify the association" they evaluated data in a case-control study. Given the context of multiple conflicting case-control studies, how reasonable is it to expect that another case-control study will ever clarify anything? Case-control studies and multiple logistic regression analysis, according to Harvard epidemiologist Kenneth J. Rothman, are two of the most important developments in epidemiology in the last fifty years. Case-control studies are said to be efficacious and efficient, and he characterizes the case-control study as the central tool of modern epidemiology. How many times have we heard the phrase "hypothesis generating" in regard to case-control studies? Perhaps someone more skilled in epidemiology than I can explain the utility of doing another case-control study in a situation in which several earlier ones have produced contradictory findings about a particular association. It reminds me of what one of our endocrinology attendings was fond of saying: "If one endocrine test is equivocal, all the other tests you do to clarify it will also be equivocal." The editorialist opines that we won't really know more until an RCT is done, but that comment was just as cogent before this publication appeared.
Thursday, May 26, 2005
NEJM headline news: CABG better than stents (at least it was in the past)
The May 26 issue of the NEJM carries the article "Long-Term Outcomes of Coronary-Artery Bypass Grafting versus Stent Implantation," which concludes in part: "For patients with two or more diseased coronary arteries, CABG is associated with higher adjusted rates of long-term survival than stenting."
So is the issue settled? Of course not. This is a very large database (37,212 patients), so one might expect robust conclusions to be drawn and clinical practice to be illuminated. Well, maybe not.
The procedures took place between Jan 1, 1997 and Dec 31, 2000, and since then stent procedures, surgical techniques, and supportive medical therapy have all changed. One highly publicized change is the introduction of drug-eluting stents, which have markedly decreased stent-related restenosis. Post-stent patients are now routinely treated (or should be) with multiple drugs that, in the aggregate, are likely to decrease disease progression. Statins, ACE inhibitors, ASA, Plavix, and beta blockers are widely used under the rubric of risk factor reduction for post-surgical as well as post-stent patients.
If drug-eluting stents had not been developed, if medication regimens had not evolved, and if things were frozen in time as of Dec 31, 2000, we would have the answer, namely: do CABG for patients with two-or-more-vessel disease. But medicine is not static, and to that degree, outcome research is always history. Sometimes, by the time we figure out whether what we are doing is any good, we are doing something else.
Wednesday, May 25, 2005
More bad news for Crestor, possible good news for Vytorin
A recent article in Circulation indicates that rosuvastatin leads the other statins in adverse effects. Although the absolute rate of AERs (adverse event reports) is low in all of the side effect categories (muscle, liver, kidney), Crestor has the highest rate in each. Since Crestor lowers LDL to a greater degree than the others, we would like to know the side effect rates at various doses of the drugs, but those data are not provided.
This should be good news for Vytorin, which seems to lower LDL better than large doses of the statins and seems safe, though we do not have a long track record to prove its safety. In addition, we don't know that lowering LDL with Vytorin works as well (in the sense of decreasing coronary artery events) as large doses of the statins. It could be that one or more non-LDL-lowering effects of larger statin doses do good things to the vessel wall, or something else, to decrease coronary events above and beyond the LDL effect.
With the data currently available, even though the absolute risk from Crestor seems very small, I personally would see almost no reason to use Crestor at this time (except perhaps in someone in whom nothing else works to adequately lower LDL).
If you can't hit the magic LDL target with Zocor, Lipitor, or Pravachol, you can always add Zetia, or change to Vytorin if the patient was taking Zocor.
On June 9, 2004, the FDA issued an advisory, and on 3/3/05 an alert, calling attention to possible muscle damage and the need for lower doses in Asian patients, but it indicated it had no data showing a higher risk than with the other statins. Stay tuned for what the FDA will do now with the new data.
Tuesday, May 24, 2005
If IM residents are not learning to be hospitalists already, what are the 3 years about anyway?
A recent article in AMA News (soon to be a publication to which only AMA members are privy) describes what some perceive to be a need for a "hospitalist track" in IM residency training programs. It was not that long ago that the most pressing weakness of IM programs was thought to be too much emphasis on hospitalized patients and not enough on outpatient care. Enter the requirement for continuity clinics. Basically, as medicine residents we learned how to take care of sick people in hospitals. This included working with surgeons, neurologists, psychiatrists, ob-gyns, orthopedists, and urologists. The CCU, ICU, and surgical ICU were places in which we became comfortable.
The Society of Hospital Medicine suggested several areas that need to be "beefed up": working with nurses, pharmacists, and administrators (how one could work in the hospital as an internist and not work with those folks is unclear); learning about hospital systems and infrastructure (OK, I didn't have that training, and I'll admit I don't know what that means); end-of-life care and care outside the hospital; palliative care; and, of course, "quality" improvement.
I think sometimes you just have to comment on the emperor's clothes. Here's the thing: internists are already trained as hospitalists.
Sunday, May 22, 2005
SPNs, malpractice, and Stephen Gould's essay "The Median Isn't the Message"
Stephen Jay Gould's family is suing the Dana-Farber Cancer Institute and Brigham and Women's Hospital for allegedly missing a solitary pulmonary nodule on a chest x-ray, a nodule that was apparently an ultimately fatal cancer.
Missing an SPN haunts radiologists and pulmonary docs. I have been abnormally sensitive to that issue ever since my oral exam in pulmonary disease. (Yes, at one time the pulmonary exam was oral, and the internal medicine exam had both an oral and a written component.) The examiner had a stack of chest x-rays, which he put on the view box one at a time, asking what I saw. The first 15 or so films were normal, and the 16th had a faint, poorly outlined SPN behind a rib on an underpenetrated film, which I missed.
Apparently, because my performance otherwise was so incredible, I passed anyway, but I was scarred for life. Years later, a seasoned chest doc told me the best way not to miss an SPN is, after you have viewed the film in the usual way, to view it upside down. (This is easier if you invert the film, as opposed to standing on your head, although that might work as well.) When you view a film this way, small shadows that don't belong really do seem to jump out of the film into your visual consciousness. Anyway, it works for me.
Gould died in 2002, 20 years after the diagnosis of an abdominal mesothelioma and, according to news articles, not from the mesothelioma. When he learned of the diagnosis, he researched the prognosis, learned the median survival was said to be 8 months, and wrote the well-known essay "The Median Isn't the Message." It is worth reading. He talks about means, medians, and the like being abstractions, while variation is the hard reality. He relates that his oncologist hesitated to give him a time frame for survival. I remember my very avuncular chief of pathology in medical school telling the class to be careful about telling someone how long they have to live; they may end up pissing on your grave. A recent article from the NCI gives information on current treatment and outcomes of abdominal mesothelioma.
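Gould's statistical point is easy to demonstrate with a toy simulation (the distribution and its parameters below are entirely made up, chosen only so the median comes out near 8 months): in a right-skewed survival distribution, the median sits well to the left of the mean, and half of patients, by definition, outlive the median, some by a great deal.

```python
import random
from statistics import mean, median

random.seed(42)
# Simulate right-skewed survival times (months) with a lognormal:
# many short survivals plus a long right tail of outliers like Gould.
survival = [random.lognormvariate(mu=2.1, sigma=1.0) for _ in range(10_000)]

print(f"median:  {median(survival):6.1f} months")
print(f"mean:    {mean(survival):6.1f} months")
print(f"longest: {max(survival):6.1f} months")
# The median understates what is possible: half of the simulated
# patients live longer than the median, some many times longer.
```

For this distribution the median lands around 8 months while the mean is roughly 13, precisely the gap between the summary statistic Gould was quoted and the variation he actually faced.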
Thursday, May 19, 2005
Thoughts on what clinical experience brings to the table
An interesting editorial appeared in ACP Journal Club (May/June 2005, vol. 142, no. 3, p. A-8) entitled "Does clinical experience make up for failure to keep up to date?" by GR Norman and KW Eva, both PhDs from McMaster University. (This publication requires a subscription.)
They suggest the following thesis: physicians in practice tend not to keep up, but this seems to have little impact on patient outcomes. (Interestingly, they quote articles claiming that board certification and subspecialization are both associated with an absolute mortality difference; Norcini JJ, et al, Med Educ 2002;36:853-859.) As I have blogged before, I am not convinced how prevalent the tendency not to "keep up" is, or how valid the data demonstrating it are, but the authors present some interesting ideas, at least some of which are backed up with data, and they make several comments worth repeating.
They say experience lets docs make decisions rapidly. It is as if a vast storehouse of clinical cases is stored, a pattern-recognition process is triggered, and a diagnosis is reached without conscious reflection. The authors also make the point that adherence to practice guidelines may be optimal, in some sense, at the population level, but when an experienced physician considers a given case, he may deliberately deviate from the guideline to take care of the individual patient's needs more appropriately. Less experienced docs tend more to adhere to the prescribed practice approach. So if we equate "quality" with adherence to guidelines, are we really getting it right, or is that just the easy way to claim we are evaluating physicians' care?
It is probably too simple to say that young docs go by guidelines more and older docs have more experience-based context into which to put things, but there may be a trend in that direction.
Older docs can improve the degree to which they are current on guidelines (even if they decide whether those are applicable on a case-by-case basis), but the only way young docs get the experience is to get to be old docs.
Wednesday, May 18, 2005
Retired Doc's suggestion for medical curriculum, part 10: Beware of misleading bias in drug trials
Dr. Richard Smith, an editor of the BMJ for 25 years, discusses some of the ways that drug companies get the results they want published in mainline journals. Kevin, MD also has a recent post referencing Dr. Smith's insights.
Writing in PLoS Medicine (Smith R (2005) PLoS Med 2(5): e138, "Medical Journals Are an Extension of the Marketing Arm of Pharmaceutical Companies"), he gives handy hints for methods drug companies can use to get the desired results from clinical trials. For example: trial your drug against too low a dose of the competitor drug, conduct a trial against a treatment known to be inferior, or use multiple endpoints in the trial and select for publication those that give favorable results. Note he is not talking about making up data, or about trials not being conducted in a technically well-done manner, but rather about bias introduced in subtle ways. He also describes the practices of publishing positive results more than once and of combining the results from various centers in multiple combinations. There are a number of ways that physicians can be, and apparently have been, misled about the virtues of various medications.
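The multiple-endpoint trick Smith describes is worth seeing numerically. The simulation below is a sketch of the general statistical phenomenon, not of any particular trial, and all the numbers (endpoints, arm sizes) are invented: if a drug does nothing at all but a trial measures enough independent endpoints, some "significant" result to publish turns up in a large fraction of trials.

```python
import random
from statistics import NormalDist, mean

random.seed(0)
Z_CRIT = NormalDist().inv_cdf(0.975)  # two-sided 5% threshold

def trial_has_positive_endpoint(n_endpoints, n_per_arm=200):
    """Simulate one null trial (drug truly equals placebo) measuring
    several independent endpoints; return True if ANY endpoint
    crosses the conventional significance threshold."""
    for _ in range(n_endpoints):
        drug = [random.gauss(0, 1) for _ in range(n_per_arm)]
        placebo = [random.gauss(0, 1) for _ in range(n_per_arm)]
        z = (mean(drug) - mean(placebo)) / (2 / n_per_arm) ** 0.5
        if abs(z) > Z_CRIT:
            return True
    return False

trials = 1000
hits = sum(trial_has_positive_endpoint(10) for _ in range(trials))
print(f"{hits / trials:.0%} of null trials had a 'publishable' endpoint")
# With 10 endpoints, roughly 1 - 0.95**10, or about 40%, of trials of
# a useless drug still yield at least one significant-looking result.
```

Selecting the winning endpoint after the fact, rather than pre-specifying one primary endpoint, is exactly the subtle bias Smith is warning about.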
As medical students learn the catechism of evidence-based medicine, they need to learn to be critical of trials even in the best of journals. There is good reason to believe that not only have readers of journals been taken in at times, but so have the editors. Smith says editors are beginning to catch on to at least some of the manipulative techniques that have been used and may take appropriate defensive action. He admits it took him over 20 years to realize what was happening.
Medical students are appropriately taught that RCTs are the highest standard of proof in the ongoing search for medical truth. However, at least some of the RCTs on which some standards of care are built may have been spun by the drug companies to give favorable press to this or that particular medication, presenting something far less than the whole truth. He quotes data indicating that at least two-thirds of trials published in JAMA, the Annals of Internal Medicine, the NEJM, and the Lancet were funded by the drug industry, so these practices could be rather widespread. Smith's article should be mandatory reading in the EBM course. It might serve as a partial antidote to the overexuberant enthusiasm for EBM precepts that some impressionable students might develop. Look at the nature of the evidence.
So what can the overwhelmed medical student do? Right now, the best I can suggest is to use the table of techniques in Smith's article, realizing that it is not likely to be exhaustive. The EBM gurus from McMaster might add a section to the "How to find current best evidence" chapter of their book, Evidence-Based Medicine by Sackett et al, on how to detect bias in drug trial publications.
Smith suggests that journals stop publishing trials and that the protocols and results instead be made available on the web. This, he says, is radical and not likely to happen. But with the issue more on the table now, editors will be more savvy and skeptical, as, hopefully, will the readers.
Between ghostwriting, the manipulative arsenal of big pharma described by Smith, the alleged marketing and other corporate techniques related to the COX-2 drugs, and the recent "it's safe, wait, no it's not" antics of the FDA (which are in part allegedly related to the aforementioned manipulations), it should be no wonder that patients may have less faith in what their doctors are prescribing, and so should the docs.
Writing in PLoS Medicine (Smith, R (2005) PLoS Med 2 (5) : e 138)"Medical Journals are an extension of the marketing arm of pharmaceutical companies.", he gives handy hints for methods that drug companies can use to get the results desired from clinical trials.For example, trial your drugs against too low a dose of the competitor drug or conduct a trial against a treatment known to be inferior, or use multiple end points in the trial and select for publication those that give favorable result. Note he is not talking about making up data or that the trials are not conducted in a technically well done manner, but rather that bias is introduced in subtle ways. He also describes the practice of publishing the positive results more than once and to combine the results from various centers in multiple combinations. There are a number of ways that physicians can be- and apparently have been-mislead about the virtues of various medications.
As medical students learn the catechism of evidence based medicine they need to learn to be critical of trials even in the best of journals.There is good reason to believe that not only have readers of journals been taken in at times but so have the editors.Smith says editors are beginning to catch on to at least some of the manipulative techniques that have been used and may take appropriate defensive action. He admits it took him over 20 years to realize what was happening.
Medical students are appropriately taught that RCTs are the highest standard of proof in the on going search for medical truth. However,at least some the RCTs on which some standards of care are built may have been spun by the drug companies to give favorable press to this or that particular medication and present some thing far less that the whole truth. He quotes data indicating that at least 2/3 of trials published in JAMA, the Annals of Internal Medicine,NEJM and Lancet were funded by the drug industry. So it could be that these practices could be rather wide spread. Smith's article should be mandatory reading in the EBM course. It might serve as a partial antidote to the over exuberant enthusiasm for EBM precepts that some impressionable students might develop. Look at the nature of the evidence.
So what can the overwhelmed medical student do? Right now, the best I can suggest is to use the table of techniques in Smith's article but realize this is not likely to be exhaustive. The EMB gurus from McMaster might add a section in the " How to find current best evidence" chapter in their book, " Evidence Based Medicine " by Sackett et al on" How to detect bias in a drug trial publications".
Smith suggests that journals stop publishing trials altogether and that the protocols and results instead be made available on the web. This, he says, is radical and not likely to happen. But with the issue more on the table now, editors will be more savvy and skeptical, as, hopefully, will the readers.
Between ghostwriting, the manipulative arsenal of big pharma described by Smith, the alleged marketing and other corporate techniques related to the COX-2 drugs, and the recent "it's safe, wait, no it's not" antics of the FDA (which in part are allegedly related to the aforementioned manipulations), it should be no wonder that patients may have less faith in what the doctors are prescribing, and so should the docs.
Tuesday, May 17, 2005
Shout Out to AAFP for stating the obvious about one pay for performance plan
Unlike the ACCP, the AAFP seems to be calling it like it is in regard to one recently imposed pay for performance program, namely the United Health Care initiative. St. John's Mercy Health Care in St. Louis calls the UHC plan "ill conceived and poorly planned". The local and state medical societies have also criticized the plan. The most recent issue of the ACP's Internist Observer, on the other hand, has a rather lengthy, platitude-filled article describing how the ACP will lead the way in helping to craft programs that will reward physicians for quality care. If the ACP needed an object lesson in an insurance company talking about quality while acting to save money, this major problem in Missouri might be it. The Observer went to press some time after the dispute in St. Louis was well under way, so it seems it ignored it rather than overlooked it. Dr. Tony commented on the PFP issue recently, pointing out that saving money, not generating quality, is what it is all about. Also, hcrenewal called attention to the incredible annual compensation of the CEO of UHC's parent company ($124.8 million, counting options exercised). It looks like one can make money while, at least, talking about improving quality.
Monday, May 16, 2005
New Criteria for Metabolic Syndrome
The IDF has published new criteria for the metabolic syndrome. Obesity, as defined by waist circumference, is required, plus two or more criteria out of a possible four. The waist sizes are smaller than in the NCEP criteria. For Europids (a term I had not heard before) the cutoff is 94 cm for men and 80 cm for women. Different criteria are used for South Asians and Chinese, and different ones for Japanese as well. The blood pressure and blood test criteria are the same for all races: BP at or above 130/85, HDL below 40 mg/dL for men and 50 for women, fasting blood sugar at or above 100 mg/dL, and triglycerides at or above 150 mg/dL.
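For concreteness, the rule as summarized here can be sketched in a few lines of Python. This is an illustrative sketch using the numbers in this post, not a clinical tool; the published IDF definition also counts drug treatment for lipid, blood pressure, or glucose abnormalities toward the criteria, which is omitted here.

```python
# Illustrative sketch of the IDF rule as summarized in this post; not a
# clinical tool. Treatment-based qualifiers in the full definition are
# omitted. Units: cm, mg/dL, mm Hg.

WAIST_CUTOFFS_CM = {
    "europid": (94, 80),  # (men, women); other ethnic groups differ
}

def meets_idf_criteria(waist_cm, sex, tg, hdl, sbp, dbp, fbs,
                       ethnicity="europid"):
    men_cut, women_cut = WAIST_CUTOFFS_CM[ethnicity]
    if waist_cm < (men_cut if sex == "M" else women_cut):
        return False  # central obesity is the mandatory criterion
    other_criteria = [
        tg >= 150,                          # triglycerides
        hdl < (40 if sex == "M" else 50),   # low HDL cholesterol
        sbp >= 130 or dbp >= 85,            # blood pressure
        fbs >= 100,                         # fasting glucose
    ]
    return sum(other_criteria) >= 2         # need any two of the four

# Example: an obese man with high TGs and low HDL meets the definition.
print(meets_idf_criteria(96, "M", tg=160, hdl=38, sbp=120, dbp=80, fbs=90))
```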
Friday, May 13, 2005
Retired Doc's opinion on "Big Pharma" at all time low and falling
The first medical memory I have of what at the time were known as "ethical drug companies" was actually part of the rites of passage from medical student to doctor: the doctor's black bag that Lilly gave to graduating medical students.
For years, I considered the detail men who brought samples and doughnuts to the office as basically benign, and I took their brief promotional comments about their drugs' superiority over competitors with healthy skepticism.
But now, having read the alleged details of Merck's actions regarding the VIGOR study and its marketing of Vioxx, I cannot think of Merck, and by extrapolation (believing there may be no difference among large pharma entities) other large drug companies, with anything but suspicion and distrust.
The gist of the "Kaiser Health Report" linked above describes marketing techniques that seem, at best, misleading, which in turn led physicians to prescribe Vioxx, something a number of doctors now wish they had never done. If this were merely one company misleading physicians and the public about the side effects of a drug, that would be bad enough. But there appears to be more.
There is reason to believe that in some instances drug companies have manipulated study results so that the published versions (in mainline journals) do not represent all of the data, in such a way that opposite conclusions might have been reached if more of the data set had been presented to the journal. Case in point: the 2000 JAMA article on celecoxib (Pfizer's Celebrex), which published six-month data showing a better GI side effect profile than older NSAIDs while the twelve-month data, which painted a different picture, were withheld.
You have to worry that this is the tip of an iceberg, and wonder how many of the drug industry-financed and -designed RCTs, if carefully analyzed (with data even journal editors do not have access to), may not really demonstrate what the articles concluded. And it is the RCTs that are the gold standard of information fed into various consensus and guideline writing committees.
A number of professional organizations have warned physicians about undue influence from free meals, trips, etc. If the manipulative antics of two of the drug industry giants are anything but two isolated aberrations, the influence we need to be concerned about is much more serious than a free dinner at a toney establishment. We need to worry about how much of the "evidence" in "evidence based medicine" is suspect, and how many well intentioned, honest, hardworking academic physicians may have been conned as well. A recent Health Care Renewal posting links to a detail-filled Wall Street Journal article dealing in part with discrepancies between the data obtained and the data reported in a number of articles, at least some of which were drug company designed and managed. There is also the issue of "ghostwriting," which has been well blogged about, in which drug companies hire "medical information" companies to write articles promoting a given product or group of products and then enlist academic docs to add a bit of prestige veneer to the publication.
I don't know when Lilly stopped giving out the black bags, but I don't think many med schools would favor resumption of the practice. About forty years ago, when I began to carry around my black bag as an intern, the descriptor "ethical" attached to "drug company" would have seemed quite reasonable if anyone had given it any thought, which we didn't. But now... By the way, news articles still use that term.
Monday, May 09, 2005
Retired Doc's suggestion for Medical Curriculum, Part 9: The Nazi Transformation from Healing to Killing
Media coverage of the 60th anniversary of the end of World War II has focused on various events, including the anniversary of the liberation of Auschwitz (Jan 27, 1945). A few blogs ago I wrote about the transformation of a lay person into a physician. The anniversaries highlighted this year made me think of another transformation: the transforming of more than a few German physicians into mass murderers. Some teaching time needs to be spent making medical students aware of the major role that doctors played in state sponsored killing of unbelievable magnitude and in genocide. Robert Jay Lifton's "The Nazi Doctors: Medical Killing and the Psychology of Genocide" would be a good reference source. It is available on Amazon.com.
(The AMA has also teamed with the US Holocaust Memorial Museum to sponsor a series of lectures.)
Students will learn of the major role doctors played at Auschwitz and of the progression from the compulsory sterilization movement (to rid the German nation of "inferior" (Jewish) blood), to the children's "euthanasia" program, to the T4 program, which focused on killing adult patients judged "not fit to live" and provided the preliminary "learning curve" for perfecting the mass killing techniques later used in the camps. Lifton explains how the T4 program involved "virtually the entire German psychiatric community and the related portions of the general medical community". It was not just a few psychopaths. The processes became bureaucratized, and participants could think of themselves as just "cogs" and maybe thereby lessen guilt.
It still seems inconceivable that there could have been this transformation of many physicians from healers to killers. Dr. Lifton discusses factors involved in the specificity of the Jewish genocide and provides material that might help illuminate broader questions of life and death and state control that are of enduring interest to physicians, including the sacrifice of the individual to the state. The malignant racism and anti-Semitism cannot be overemphasized, but medical students need to know there were also large killing programs managed by physicians that murdered non-Jewish German children and adults who were labelled a burden to society.
The philosophical underpinnings of the "nazification of the medical profession" are discussed by Lifton. The influential manual of Rudolf Ramm proposed that physicians be "physicians to the Volk". The physician was to be concerned with the health of the Volk, was to overcome the old individualistic principle of the right to one's own body and embrace the duty to be healthy, and owed his duty to the collectivity. The physician was the cultivator of the genes. National Socialism was thought of as "applied biology". Physicians were to be the technicians and engineers of the pseudoscience of race hygiene.
Some of the factors leading to the transformation from healing to killing emphasized by Lifton are: the acceptance of the notion that the state, not the individual human being, was the patient; the belief or rationalization that the physician was just a cog in a machine and not to blame; and the sense that physicians were powerless to stop the evil because the orders came from someone else and, ultimately, the Fuhrer.
How all the medical ideals were betrayed and physicians became key players in unprecedented horror, murder and torture should be worth a few hours of instruction and discussion among medical students.
Saturday, May 07, 2005
More thoughts on older doctor bashing
Dr. Philip R. Alper has a regular column in Internal Medicine World Report. In the April 2005 issue, he discussed the Annals of Internal Medicine article entitled "The Relationship between Clinical Experience and Quality of Health Care" (Ann Intern Med 2005;142:260-273). He was critical of the article and the trailing editorial, as was Medical Metamusings in an earlier blog, and as was Dr. Roy Poses (hcrenewal.blogspot.com) in his excellent analysis of the Annals article on his blog. Basically, the article, a systematic review, concluded that older doctors provide lower quality care.
The editorial in the Annals accepted the article at face value, even though, as Poses, Medical Metamusings, and Alper point out, the study had serious methodological problems, and used the article to claim that "quality improvement interventions" were needed. The editorialists, at least some of whom are members of the American Board of Internal Medicine, used the article to bolster their campaign for recertification, giving the perception that their friendly acceptance of the flawed article might be self-serving.
I bring this issue up again, though it has been well blogged about already, to quote a great insight from Dr. Alper's column.
"...most of guideline surveillance should be automated. It is the subtleties of diagnosis and treatment and the establishment of a therapeutic relationship with the patient that cannot be so readily automated. Everything is important, but the notion of placing global responsibility on the primary physician's shoulders makes sense only to those who seek to be their superiors."
Dr. Alper also says, in regard to recertification and other control measures, "The absence of proof that uniform central planning in medicine will achieve the goals desired by its proponents (and may even be counterproductive) adds to my discomfort."
Not only is there an absence of proof; there is much proof demonstrating how badly central planning in general works. But, at least for some of the people who advocate it, it is not about proof; it is about power and control.
Thursday, May 05, 2005
Cost effectiveness of X: how can a doctor possibly interpret what that means?
A typical cost-effectiveness article appears in the May 2, 2005 Annals of Internal Medicine. The summary states in part "alendronate ... is not cost effective."
When I was doing research on pulmonary function testing in diffuse lung disease, I could read an article, plow into the methods section, and learn what technique was used for certain tests, what prediction equation was used, etc., and could understand to a reasonable level of knowledge what it was all about.
Try that with the Annals article.
You will read that "We used a QALY value estimated with the EuroQol questionnaire and ... we derived the QALY value ... from direct prospective estimates of Kanis and colleagues", and "we constructed a Markov cost-utility model that contained 8 health states and compared 5 years of treatment with alendronate with no drug therapy...", and "We assumed relative risks for incident vertebral fractures of 0.54 and 0.82 for those with femoral neck T-scores of -2.0 to -2.4 and -1.5, respectively..." And on and on the methods section goes, through an elaborate mathematical exercise.
How could a hypothetical well informed general internist, the hypothetical ideal target of Annals articles, possibly analyze what was done procedurally? How could she know if the assumptions are reasonable or biased toward some particular outcome? How would he know if the Markov model was done correctly, or, more basically, how valid that approach or any of its particular details are in the first place?
You wonder who the targeted audience of this type of article is. Is it an internist who would, armed with this latest research, inform his patients that they should not take X because it has been "shown to be" not cost effective? Is it aimed at a benefits manager who would love some more quasi-justification to save money?
And even if you could understand, or give up and just accept, what they actually did, you then have to deal with the following statement: "Assuming a social willingness to pay $50,000 per QALY gained ... our results indicate that alendronate is not cost effective..."
I don't claim to be an expert in Markov chain nuances, but I do know that the authors, or anyone else, cannot "determine" societal willingness to pay, because that is not a thing you can determine. It is a meaningless abstraction.
Cost effectiveness articles should not be considered evidence based medicine. They are calculations based on stacks of assumptions, and they typically end by concluding that something is or is not cost effective based on whether the indicator of interest is above or below an arbitrary dollar value that other authors have decided will be the threshold. (The magic number, typically, is $50,000 per QALY, which is treated as a constant of nature rather than a contingent construct built on a stream of guesses.) I ranted about this general topic a few blogs ago but keep returning to it as articles appear and I see so little criticism of these pseudo-scientific papers.
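Behind all the Markov machinery, the final verdict boils down to one ratio compared against that threshold. A toy calculation (with invented numbers, not the Annals article's) makes the mechanics, and the arbitrariness, plain.

```python
# Toy numbers, not from the Annals article. The verdict in a
# cost-effectiveness analysis reduces to an incremental
# cost-effectiveness ratio (ICER) compared against a chosen
# willingness-to-pay threshold.

def icer(cost_rx, cost_no_rx, qaly_rx, qaly_no_rx):
    """Extra dollars spent per extra QALY gained by treating."""
    return (cost_rx - cost_no_rx) / (qaly_rx - qaly_no_rx)

ratio = icer(cost_rx=14_000, cost_no_rx=8_000,
             qaly_rx=10.10, qaly_no_rx=10.00)   # -> $60,000 per QALY
threshold = 50_000  # the conventional, and arbitrary, $/QALY cutoff
verdict = "cost effective" if ratio <= threshold else "not cost effective"
print(f"${ratio:,.0f} per QALY -> {verdict}")
```

Notice that shifting any input, the drug's price, the assumed QALY gain, or the threshold itself, can flip the verdict, which is exactly why the conclusion is only as good as the stack of assumptions beneath it.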
Monday, May 02, 2005
Retired Doc's Suggestion for Medical Curriculum, Part 8: Meta-Analysis - Know Its Limitations
The twin pillars of the epidemiologic-statistical foundations of the "best evidence" part of evidence based medicine (EBM) are the randomized controlled trial (RCT) and the quantitative systematic review, AKA meta-analysis (MA).
In an earlier blog, I suggested that a major insight of the RCT is that everyone does not react the same to a given treatment.
That is one of the basic facts of life in MAs as well, since a systematic review typically reviews RCTs.
Two others are:
The conclusion of a MA is dependent on the published (or unpublished) studies included in the pooled analysis and the outcome statistic chosen.
Just as clinical advice needs to be linked to cases to give it limbic valence, these abstractions can be made meaningful with a real life case of different MAs reaching conclusions that would have very different implications for clinical medicine.
Two Danish researchers (Olsen and Gøtzsche, Cochrane Database Syst Rev. 2001;CD001877) concluded that screening mammography was not effective. In the same time frame the USPSTF concluded the opposite; both groups based their conclusions on MAs.
Dr. Steven Goodman's explanation of that discrepancy, in an editorial in the Annals of Internal Medicine (Sept 2002, volume 137, issue 5, pages 363-365), should be part of the handouts given to medical students in their course on EBM. He explains that a MA is basically an observational design in which the subjects are published studies. The studies that are considered but eliminated from the analysis can make a big difference, and why studies are kept or not rests on "competing claims" of methodologic validity. In this regard the average, or for that matter super, doctor is challenged to know which expert is correct. The Danish epidemiologists excluded more trials from their analysis than did the USPSTF and reached a different conclusion. Another reason for the difference was that the US group used breast cancer mortality while the Danes chose all cause mortality as the summary statistic.
So, the choice of studies to include and the summary statistic chosen determine the outcome. This example alone should convince medical students that meta-analyses are not infallible. They are context dependent.
I became aware of Goodman's Annals of Internal Medicine editorial in the blog "Medical Metamusings", and I later learned of two other publications that would be good additions to the study list given medical students to illustrate the limitations of MAs.
LeLorier et al from Montreal published an article in NEJM in 1997 in which they described discrepancies between MAs and subsequent large RCTs. An editorial by John Bailar followed, entitled "The Promise and Problems of Meta-Analysis" (NEJM, volume 337:559-561, Aug 21, 1997).
A major point made by Bailar and LeLorier is that while a well done MA can be helpful in presenting disparate studies on a common scale, using odds ratios, a problem may arise when all of the data are summarized into one odds ratio that is supposed to capture the entire issue but actually may oversimplify a complex one and lead to erroneous conclusions. Simple answers to complex problems are welcomed, but life is so messy that they are often just wrong. Meta-analysis is not rocket science; its techniques and particulars are still being worked out to some degree, and reputable researchers may disagree over operational issues. There is still a lot to learn about the best way to do them.
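To make the pooling concrete, here is a minimal sketch, with invented trial data, of the standard inverse-variance fixed-effect method for combining odds ratios on the log scale. Note how dropping a single trial shifts the summary, which is exactly the study-selection point made above.

```python
# Minimal sketch with invented data: inverse-variance fixed-effect
# pooling of odds ratios on the log scale. Which studies are included
# changes the pooled answer.
import math

def pooled_odds_ratio(studies):
    """studies: list of (odds_ratio, standard_error_of_log_OR) tuples."""
    weights = [1 / se ** 2 for _, se in studies]        # precision weights
    log_ors = [math.log(or_) for or_, _ in studies]
    pooled_log = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
    return math.exp(pooled_log)

trials = [(0.80, 0.10), (0.95, 0.20), (1.10, 0.25)]     # invented (OR, SE)
print(round(pooled_odds_ratio(trials), 2))      # all three trials
print(round(pooled_odds_ratio(trials[:2]), 2))  # exclude one, answer moves
```

Real meta-analyses layer heterogeneity tests and random-effects models on top of this, but the basic sensitivity to inclusion decisions is already visible in the toy version.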