
Thursday, February 03, 2011

"High-value"health care achieves buzz word status-An ACP Committee defines "rationing"

In the 1 Feb 2011 issue of the Annals of Internal Medicine, in the Clinical Guideline section, ACP's Clinical Guideline Committee authored an article entitled "High-Value, Cost-Conscious Health Care: Concepts for Clinicians to Evaluate the Benefits, Harms, and Costs of Medical Interventions". See here for full text.

Dr. Douglas K. Owens, author of numerous cost-effectiveness studies, was the lead author.

The article begins by expressing the customary alarm about increasing health care costs and the need for cost control, an effort the authors believe should focus on the value of health care interventions.

Their operational definition of value is "an assessment of the benefit of an intervention relative to expenditures". Value is determined by balancing benefits and costs.

This is consistent with Harvard Business School professor M.E. Porter's definition, which is:
Value = outcome/cost.

Simple enough: we just figure out the benefits and the costs and ... but the devil is in the details, as always.

The Annals authors then make what they believe to be a critical distinction: the distinction between cost and value. A high-cost item may or may not provide high value, and a low-cost item may offer little benefit, making that intervention low value. So what we want is high-value health care.

(As best I can tell, the buzzwordification of "high-value health care" can be attributed at least in part to the efforts of Porter and Dr. Elizabeth Teisberg, although I don't wish to slight Dr. Don Berwick and physicians at the ACP. Whatever its origins and vectors of spread, medical authors and policy wonks talk about it now as if everyone knows what it is.)

The authors then redefine rationing (or, in the authors' words, "more appropriately" define it) to mean "restricting the use of effective, high-value care". So if an intervention that is "determined" to be low value is restricted, that would not, by the new definition, be considered rationing. This should provide comfort to those who worry about the rationing of health care: eliminating an intervention that is determined (by whom?) to be of low value is not rationing at all. One can see what power this puts in the hands of those determining what is high and low value.

The authors then discuss the importance of considering the downstream costs and benefits of an intervention. For example, one has to factor in the cost of maintaining an ICD, not just the initial cost of assessment and placement of the device.

If a treatment is both better and cheaper than an alternative there is no problem in deciding between the two. More complexity emerges when an alternative provides more benefits but also costs more.

In this situation, we are told, we need comparative effectiveness analysis, which is basically cost-benefit analysis (CBA) that compares the various alternative interventions. Conceding this point, at least for the sake of argument, one now asks who will make that analysis.

Owens et al provide the answer:

...we recommend assessing their value [competing interventions] to patients and society by using cost-effectiveness analysis. Such analyses require specialized expertise and training, are often expensive, and thus are typically performed by investigators.

Note that this type of assessment cannot be done by just anybody, only by those with specialized expertise, and note what they claim to provide: an assessment of value not only to patients but to society.

Some may find that level of hubris unsettling. The real money quote of the article is:

"The choice of a cost effectiveness threshold is itself a value judgment and depends on several factors, including who the decision maker is.

That is the heart of the matter. After all the gathering of various costs, the development of estimates of quality-adjusted life years (QALYs), the aggregation of costs and of estimated benefits, and the use of various analytic tools (e.g., cost-effectiveness ratios), someone or some committee has to make a value judgment: is the benefit worth the cost or not? In the end, it is a human value judgment, not the solving of some equation. Then the question is who will decide.
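The arithmetic behind such assessments can be sketched in a few lines. Everything below is a hypothetical illustration, not figures or code from the Annals article: the costs, the QALY estimates, and the commonly cited $100,000-per-QALY threshold are all made up. The point is that the incremental cost-effectiveness ratio (ICER) is simple division; the "worth it" verdict depends entirely on a threshold someone chose.

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER).
# All numbers, including the threshold, are hypothetical illustrations.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per QALY gained by the newer intervention."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical pair: the new intervention is more effective but costlier.
ratio = icer(cost_new=80_000, qaly_new=6.0, cost_old=50_000, qaly_old=5.5)
print(f"ICER: ${ratio:,.0f} per QALY gained")

# The arithmetic stops here. Whether that ratio counts as "high value"
# depends on a chosen threshold -- which is itself a value judgment.
threshold = 100_000  # dollars per QALY; hypothetical
print("high value" if ratio <= threshold else "low value")
```

Note that shifting the threshold flips the verdict without any change in the underlying data, which is exactly the "who decides" problem the post raises.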

In the same issue of the Annals of Internal Medicine there is an editorial by Michael Gusmano and Daniel Callahan of the Hastings Center offering cautionary counterpoints.

They emphasize Owens and co-authors' admission that effectiveness evidence is lacking and our ability to assess quality of life is inadequate. If that is so, even investigators with expertise and special training might be challenged. Gusmano and Callahan continue:

Perhaps the biggest problem with cost-utility analysis is that the expenditures on health care cannot be compared with other societal needs... the failure to consider opportunity costs may eliminate existing, but un-assessed, health care technologies and services that are a better value than the "cost effective" technology included in these assessments.


Anonymous said...

The distinction between Comparative Effectiveness Research (CER) and cost-effectiveness research is important. The latter compares both the benefits and the costs of various interventions using something called an incremental cost-effectiveness ratio. The ACP recommends using cost-effectiveness research as part of CER. My understanding is that the health care bill does not allow the government to do that.

james gaulte said...

Thank you, and yes, that is an important distinction between the two processes, both of which could be misleadingly abbreviated as CER or CEr. I probably did not make that distinction clear and/or emphasize it adequately. CER is mom and apple pie; the cost issue is complex and contentious.