
 Editorials

Last 50 Editorials

(Most recent listed first. Click on title to be directed to the manuscript.)

A Call for Change in Healthcare Governance (Editorial & Comments)
The Decline in Professional Organization Growth Has Accompanied the
   Decline of Physician Influence on Healthcare
Hospitals, Aviation and Business
Healthcare Labor Unions-Has the Time Come?
Who Should Control Healthcare? 
Book Review: One Hundred Prayers: God's answer to prayer in a COVID
   ICU
One Example of Healthcare Misinformation
Doctor and Nurse Replacement
Combating Physician Moral Injury Requires a Change in Healthcare
   Governance
How Much Should Healthcare CEO’s, Physicians and Nurses Be Paid?
Improving Quality in Healthcare 
Not All Dying Patients Are the Same
Medical School Faculty Have Been Propping Up Academic Medical
   Centers, But Now It's Squeezing Their Education and Research
   Bottom Lines
Deciding the Future of Healthcare Leadership: A Call for Undergraduate
   and Graduate Healthcare Administration Education
Time for a Change in Hospital Governance
Refunds If a Drug Doesn’t Work
Arizona Thoracic Society Supports Mandatory Vaccination of Healthcare
   Workers
Combating Morale Injury Caused by the COVID-19 Pandemic
The Best Laid Plans of Mice and Men
Clinical Care of COVID-19 Patients in a Front-line ICU
Why My Experience as a Patient Led Me to Join Osler’s Alliance
Correct Scoring of Hypopneas in Obstructive Sleep Apnea Reduces
   Cardiovascular Morbidity
Trump’s COVID-19 Case Exposes Inequalities in the Healthcare System
Lack of Natural Scientific Ability
What the COVID-19 Pandemic Should Teach Us
Improving Testing for COVID-19 for the Rural Southwestern American Indian
   Tribes
Does the BCG Vaccine Offer Any Protection Against Coronavirus Disease
   2019?
2020 International Year of the Nurse and Midwife and International Nurses’
   Day
Who Should be Leading Healthcare for the COVID-19 Pandemic?
Why Complexity Persists in Medicine
Fatiga de enfermeras, el sueño y la salud, y garantizar la seguridad del
   paciente y del público: Unir dos idiomas (Also in English)
CMS Rule Would Kick “Problematic” Doctors Out of Medicare/Medicaid
Not-For-Profit Price Gouging
Some Clinics Are More Equal than Others
Blue Shield of California Announces Help for Independent Doctors-A
   Warning
Medicare for All-Good Idea or Political Death?
What Will Happen with the Generic Drug Companies’ Lawsuit: Lessons from
   the Tobacco Settlement
The Implications of Increasing Physician Hospital Employment
More Medical Science and Less Advertising
The Need for Improved ICU Severity Scoring
A Labor Day Warning
Keep Your Politics Out of My Practice
The Highest Paid Clerk
The VA Mission Act: Funding to Fail?
What the Supreme Court Ruling on Binding Arbitration May Mean to
   Healthcare 
Kiss Up, Kick Down in Medicine 
What Does Shulkin’s Firing Mean for the VA? 
Guns, Suicide, COPD and Sleep
The Dangerous Airway: Reframing Airway Management in the Critically Ill 
Linking Performance Incentives to Ethical Practice 

 

For complete editorial listings click here.

The Southwest Journal of Pulmonary and Critical Care welcomes submission of editorials on journal content or on issues relevant to pulmonary, critical care, or sleep medicine. Authors are urged to contact the editor before submission.

---------------------------------------------------------------------------------------------

Monday
October 8, 2012

The Emperor Has No Clothes: The Accuracy of Hospital Performance Data  

Several studies were announced within the past month dealing with performance measurement. One was the Joint Commission on the Accreditation of Healthcare Organizations (Joint Commission, JCAHO) 2012 annual report on Quality and Safety (1). This includes the JCAHO’s “best” hospital list. Ten hospitals from Arizona and New Mexico made the 2012 list (Table 1).

Table 1. JCAHO list of “best” hospitals in Arizona and New Mexico for 2011 and 2012.

This compares to 2011 when only six hospitals from Arizona and New Mexico were listed. Notably underrepresented are the large urban and academic medical centers. A quick perusal of the entire list reveals that this is true for most of the US, despite larger and academic medical centers generally having better outcomes (2,3).

This raises the question of what criteria are used to measure quality. The JCAHO criteria are listed in Appendix 2 at the end of their report. The JCAHO criteria are not outcome-based but a series of surrogate markers. The Joint Commission calls their criteria “evidence-based,” and indeed some are, but some are not (2). Furthermore, many of the Joint Commission’s criteria are bundled; in other words, failure to comply with one criterion is the same as failing to comply with them all. They are also not weighted, i.e., each criterion is judged to be as important as any other. Pneumonia is an example where this might have an important effect on outcomes. Administering an appropriate antibiotic to a patient with pneumonia is clearly evidence-based. However, administering the 23-valent pneumococcal polysaccharide vaccine in adults is not effective (4-6). By the Joint Commission’s criteria, administering pneumococcal vaccine is just as important as choosing the right antibiotic, and failure to do either results in a judgment of noncompliance.
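The bundled-versus-weighted distinction can be sketched in a few lines of Python. The two criteria and their weights below are hypothetical, invented purely to illustrate the argument; they do not represent any actual JCAHO scoring scheme.

```python
# Hypothetical illustration: bundled ("all-or-nothing") vs. weighted
# compliance scoring for two pneumonia criteria. Weights are assumptions
# chosen for illustration, not drawn from any real scoring system.

CRITERIA_WEIGHTS = {
    "appropriate_antibiotic": 0.9,  # strongly evidence-based (assumed weight)
    "pneumococcal_vaccine": 0.1,    # weakly evidence-based (assumed weight)
}

def bundled_score(met: dict) -> float:
    """All-or-nothing: missing any single criterion counts as total failure."""
    return 1.0 if all(met.values()) else 0.0

def weighted_score(met: dict, weights: dict = CRITERIA_WEIGHTS) -> float:
    """Partial credit in proportion to the importance of each criterion met."""
    total = sum(weights.values())
    return sum(w for name, w in weights.items() if met[name]) / total

# A patient who received the right antibiotic but not the vaccine:
case = {"appropriate_antibiotic": True, "pneumococcal_vaccine": False}
print(bundled_score(case))   # 0.0 — judged fully noncompliant
print(weighted_score(case))  # 0.9 — most of the important care was delivered
```

Under bundling, this patient's care is scored identically to a patient who received neither intervention, which is the editorial's core objection.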

Previous studies have not shown that compliance with the JCAHO criteria improves outcomes (2,3). Examination of the US Health & Human Services Hospital Compare website is consistent with these results. None of the “best” hospitals in Arizona or New Mexico were better than the US average in readmissions, complications, or deaths (7).

A second announcement was the success of the Agency for Healthcare Research and Quality’s (AHRQ) program on central line-associated bloodstream infections (CLABSI) (8). According to the press release, the AHRQ program has prevented more than 2,000 CLABSIs, saving more than 500 lives and avoiding more than $34 million in health care costs. This is surprising since, with the possible exception of using chlorhexidine instead of betadine, the bundled criteria are not evidence-based and have not correlated with outcomes (9). Examination of the press release reveals that the reduction in mortality and the savings in healthcare costs were estimated from the hospital self-reported reduction in CLABSI.

A clue to the potential source of these discrepancies came from an article published in the Annals of Internal Medicine by Meddings and colleagues (10). These authors studied urinary tract infections that were self-reported by hospitals using claims data. According to Meddings, the data were “inaccurate” and “are not valid data sets for comparing hospital acquired catheter-associated urinary tract infection rates for the purpose of public reporting or imposing financial incentives or penalties”. The authors propose that nonpayment by Medicare for “reasonably preventable” hospital-acquired complications resulted in this discrepancy. There is no reason to assume that data reported for CLABSI or ventilator-associated pneumonia (VAP) are any more accurate.

These and other healthcare data seem to follow a trend of bundling weakly evidence-based, non-patient-centered surrogate markers with legitimate performance measures. Under threat of financial penalty, hospitals are required to improve these surrogate markers, and not surprisingly, they do. The organization mandating compliance then joyfully reports how it has improved healthcare, saving both lives and money. These reports are often accompanied by estimates, but not measurements, of patient-centered outcomes such as mortality, morbidity, length of stay, readmission, or cost. The result is that there is no real effect on healthcare other than an increase in costs. Furthermore, there would seem to be little incentive to question the validity of the data: the organization that mandates the program would be politically embarrassed by an ineffective program, and the hospital would be financially penalized for honest reporting.

Improvement begins with the establishment of guidelines that are truly evidence-based and have a reasonable expectation of improving patient-centered outcomes. Surrogate markers should be replaced by patient-centered outcomes such as mortality, morbidity, length of stay, readmission, and/or cost. The recent "pay-for-performance" ACA provision on hospital readmissions that went into effect October 1 is a step in the right direction. Guidelines should not be bundled but weighted according to their importance. Lastly, the validity of the data needs to be independently confirmed, and penalties for systematically reporting fraudulent data should be severe. This approach is much more likely to result in improved, evidence-based healthcare than the present self-serving and inaccurate programs that offer no benefit to patients.

Richard A. Robbins, MD*

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. Available at: http://www.jointcommission.org/assets/1/18/TJC_Annual_Report_2012.pdf (accessed 9/22/12).
  2. Robbins RA, Gerkin R, Singarajah CU. Relationship between the Veterans Healthcare Administration hospital performance measures and outcomes. Southwest J Pulm Crit Care 2011;3:92-133.
  3. Rosenthal GE, Harper DL, Quinn LM. Severity-adjusted mortality and length of stay in teaching and nonteaching hospitals. JAMA 1997;278:485-90.
  4. Fine MJ, Smith MA, Carson CA, Meffe F, Sankey SS, Weissfeld LA, Detsky AS, Kapoor WN. Efficacy of pneumococcal vaccination in adults. A meta-analysis of randomized controlled trials. Arch Intern Med 1994;154:2666-77.
  5. Dear K, Holden J, Andrews R, Tatham D. Vaccines for preventing pneumococcal infection in adults. Cochrane Database Syst Rev 2003:CD000422.
  6. Huss A, Scott P, Stuck AE, Trotter C, Egger M. Efficacy of pneumococcal vaccination in adults: a meta-analysis. CMAJ 2009;180:48-58.
  7. http://www.hospitalcompare.hhs.gov/ (accessed 9/22/12).
  8. http://www.ahrq.gov/news/press/pr2012/pspclabsipr.htm (accessed 9/22/12).
  9. Hurley J, Garciaorr R, Luedy H, Jivcu C, Wissa E, Jewell J, Whiting T, Gerkin R, Singarajah CU, Robbins RA. Correlation of compliance with central line associated blood stream infection guidelines and outcomes: a review of the evidence. Southwest J Pulm Crit Care 2012;4:163-73.
  10. Meddings JA, Reichert H, Rogers MA, Saint S, Stephansky J, McMahon LF. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med 2012;157:305-12.

*The views expressed are those of the author and do not necessarily represent the views of the Arizona or New Mexico Thoracic Societies.

Reference as: Robbins RA. The emperor has no clothes: the accuracy of hospital performance data. Southwest J Pulm Crit Care 2012;5:203-5.

Tuesday
September 25, 2012

Getting the Best Care at the Lowest Price 

“Computers make it easier to do a lot of things, but most of the things they make it easier to do don't need to be done.”- Andy Rooney

A recent report from the Institute of Medicine (IOM) claims that $750 billion, or about 30% of healthcare expenditures, is wasted each year (1). This attention-grabbing statistic is reminiscent of the oft-quoted figure of 44,000-98,000 deaths attributable to medical errors annually from the 2000 IOM report titled “To Err Is Human: Building a Safer Health System” (2). The IOM estimate of deaths was based on two studies that used the Harvard Medical Practice Study methodology (3-6). Nurses reviewed charts and, using preset criteria, referred cases to physicians who had undergone a short training course. The physicians judged whether an adverse event was due to a medical error and whether the error contributed to the patient’s death. The incidence of deaths from medical errors was double in New York compared to Utah and Colorado, resulting in the IOM’s high and low estimates. I remember reading the studies and thinking that both had problems: the physician reviewers were often outside the specialty area involved (e.g., nonsurgeons reviewing surgical cases); the criteria for error and whether it contributed to death were not clearly defined; and the results were inconsistent (were physicians from New York really twice as negligent as those from Utah and Colorado?). My impression was that no one would believe these flawed studies. I was very wrong. The IOM report helped spark an ongoing campaign for patient safety resulting in a number of interventions. Most were focused on physicians, some were expensive, and to date, it is unclear whether they have improved outcomes or wasted resources.

Now the IOM has concluded that an inefficient, extraordinarily complex, and slow-to-change US healthcare system wastes huge amounts of money (Table 1) (1).

Table 1. IOM estimates of wasted healthcare dollars.

Although the validity of the estimates is uncertain, most in healthcare would agree that a large portion of healthcare dollars is wasted. The report implies much of this inefficiency is due to clinicians because they are slow to change, inefficient, and unable to keep up with the explosion in healthcare knowledge. Because of these limitations, physicians often mismanage patients, resulting in the wasted dollars noted above. In the healthcare system envisioned by the IOM, electronic health records (EHRs) would bring the research contained in the more than 750,000 journal articles published each year to the point of care. Since it would be impossible for a clinician to read all 750,000 articles, these would be communicated to clinicians as guidelines.

Over the past decade, a remarkable number of laws, rules, regulations, and new ways of doing business have hit physicians (7). Each, when viewed alone, looks very reasonable, but, taken in aggregate, they are undermining the profession and medical care. Healthcare has become more expensive, and physicians have shouldered the blame despite losing much of their autonomy. The IOM recommendations on computers may be another cut in the death by a thousand cuts that independently thinking physicians are receiving.

Although I’m resentful of the IOM report’s implications, bringing computers and EHRs to the clinic is a good idea. However, as a retired VA physician I have repeatedly heard how the “magic” of the computer can solve problems. The VA long ago installed an electronic health record with a set of guidelines that anyone could follow. Certainly improved efficiency and reduced costs would shortly follow. Unfortunately, this does not appear to be the case. When the VA EHR was instituted, the number of physicians and nurses within the VA declined although the total number of employees increased (8). At least part of the increase was due to installation and maintenance of the EHR. At the same time, an ever-increasing number of guidelines were placed on the computer. Costs to ensure compliance and bonuses paid to administrators for compliance further escalated expenses. Furthermore, the guidelines consumed a marked amount of clinician time. According to one estimate, compliance with the source of many of the VA guidelines, the US Preventive Services Task Force, would require 4-7 hours of additional clinician time per day (9). Clearly, this was unsustainable, so further money was allocated to hire healthcare technicians to comply with many of the guidelines. Compliance improved, but efficiency, costs, morbidity, and mortality did not (10). Furthermore, an unexpected increase in healthcare expenditures occurred outside the VA as a consequence of EHRs. A recent report from the Office of Inspector General of Health and Human Services notes an increase in higher-level billing codes in Medicare patients (11). Experts say EHR technology resulted in the increase because of its super-charting capabilities (12). Therefore, it seems unlikely that EHRs as currently utilized will improve efficiency or lower costs.

Much to its credit, the IOM seems to recognize these limitations when it says, "Given such real-world impediments, initiatives that focus merely on incremental improvements and add to a clinician's daily workload are unlikely to succeed” (1). The report goes on to say that instead, the entire infrastructure and culture of healthcare must be reconfigured for significant change to occur. I would agree. Previous changes to improve healthcare have done nothing more than shift monies away from clinical care, which will not improve patient outcomes (13). This occurred at the VA and will occur again if left unchecked. A meaningful partnership between clinicians and payers, achieving and rewarding high-value care, is needed. To do this, physicians need considerable input into, and perhaps more importantly control of, any EHR. Second, physicians need to be rewarded for good care, which is centered on improved patient outcomes and not endless checklists that do little more than consume time. Failure to do so will result in inefficient and more costly care and not in the improvements promised by the IOM.

Richard A. Robbins, MD*

Editor, SWJPCC

References

  1. Smith M, Saunders R, Stuckhardt L, McGinnis JM. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press. 2012. Available at: http://www.iom.edu/Reports/2012/Best-Care-at-Lower-Cost-The-Path-to-Continuously-Learning-Health-Care-in-America.aspx (accessed 9/8/12). 
  2. Kohn LT, Corrigan JM, Donaldson MS.  To Err Is Human: Building A Safer Health System.  Washington, DC: National Academy Press. 2000. Available at: http://www.nap.edu/openbook.php?isbn=0309068371 (accessed 9/8/12). 
  3. Hiatt HH, Barnes BA, Brennan TA, et al. A study of medical injury and medical malpractice. N Engl J Med 1989;321:480-4.
  4. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, Newhouse JP, Weiler PC, Hiatt HH. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med 1991;324:370-6.
  5. Leape LL, Brennan TA, Laird N, Lawthers AG, Localio AR, Barnes BA, Hebert L, Newhouse JP, Weiler PC, Hiatt H. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med 1991;324:377-84.
  6. Thomas EJ, Studdert DM, Burstin HR, Orav EJ, Zeena T, Williams EJ, Howard KM, Weiler PC, Brennan TA. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care 2000;38:261-71.
  7. Kellner KR. Physician killed by ducks. Chest 2005;127:695-6.
  8. Robbins RA. Profiles in medical courage: of mice, maggots and Steve Klotz. Southwest J Pulm Crit Care 2012;4:71-7.
  9. Yarnall KS, Pollak KI, Østbye T, Krause KM, Michener JL. Primary care: is there enough time for prevention? Am J Public Health 2003;93:635-41.
  10. Robbins RA, Gerkin R, Singarajah CU. Relationship between the Veterans Healthcare Administration hospital performance measures and outcomes. Southwest J Pulm Crit Care 2011;3:92-133.
  11. Office of Inspector General. Coding trends of Medicare evaluation and management services. Available at: http://oig.hhs.gov/oei/reports/oei-04-10-00180.asp (accessed 9-8-12).
  12. Lowes R. Are Physicians Coding Too Many 99214s? Medscape Medical News. Available at: http://www.medscape.com/viewarticle/767732 (accessed 9-8-12).
  13. Robbins RA, Gerkin R, Singarajah CU. Correlation between patient outcomes and clinical costs in the VA healthcare system. Southwest J Pulm Crit Care 2012;4:94-100.

*The views expressed in this editorial are those of the author and do not necessarily represent the views of the Arizona or New Mexico Thoracic Societies.

Reference as: Robbins RA. Getting the best care at the lowest price. Southwest J Pulm Crit Care 2012;5:145-8.

Tuesday
July 17, 2012

A New Paradigm to Improve Patient Outcomes

A Tongue-in-Cheek Look at the Cost of Patient Satisfaction

A landmark article entitled “The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality” was recently published in the Archives of Internal Medicine by Fenton et al. (1). The authors conducted a prospective cohort study of adult respondents (n=51,946) to the 2000 through 2007 national Medical Expenditure Panel Survey. The results showed higher patient satisfaction was associated with higher admission rates to the hospital, higher overall health care expenditures, and increased mortality.

The higher costs are probably not surprising to many health care administrators. Programs to improve patient satisfaction such as advertising, valet parking, gourmet meals for patients and visitors, massages, never-ending patient and family satisfaction surveys, etc. are expensive and would be expected to increase costs. Some would argue that these costs are simply the price of competing for patients in the present health care environment. Although the outcomes are poorer, substituting patient satisfaction as a surrogate marker for quality of care is probably still valid as a business goal (2). Furthermore, administrators and some healthcare providers are paid bonuses based on patient satisfaction. These bonuses are necessary to maintain salaries at a level to attract the best and brightest.

Although it seems logical that most ill patients wish to live and get well as quickly and cheaply as possible, the Archives article demonstrates that this is a fallacy. Otherwise, higher patient satisfaction would clearly correlate with lower mortality, admission rates, and expenses. Since hospitals and other health care organizations are here to serve the public, some would argue that giving the patients what they want is more important than boring outcomes such as hospital admission rates, costs, and mortality.

The contention of this study – that dissatisfaction might improve patient survival – may have biological plausibility.  Irritation with the healthcare process might induce adrenal activation, with resulting increases in beneficial endogenous catecholamines and cortisol.  The resulting increase in global oxygen delivery might reduce organ failure.  Furthermore, the irritated patient is less likely to consent to unnecessary medical procedures and is therefore protected from ensuing complications.  An angry patient is likely to have less contact with healthcare providers who are colonized with potentially dangerous multi-drug resistant bacteria.

Specific bedside practices can be implemented in order to increase patient dissatisfaction, and thereby benefit mortality. Nurses can concentrate on techniques of sleep deprivation such as waking the patient to ask if they want a sleeping pill. Third-year medical students can be employed to start all IVs and perform all lumbar punctures. Attending physicians can do their part by being aloof and standoffish. For instance, a patient suffering an acute myocardial infarction might particularly benefit from hearing about the minor inconveniences the attending suffered aboard a recent South Pacific cruise ship – “I ordered red caviar, and they brought black!” During the medical interview, non-pregnant women should always be asked “when is the baby due?” Repeatedly confusing the patient’s name, or calling them by multiple erroneous names on purpose, can heighten their sense of insecurity. Simply making quotation signs with your fingers whenever referring to yourself as the patient’s “doctor” can be quite off-putting.

Simple props can be useful. Wads of high-denomination cash, conspicuously bulging from all pockets of the attending’s white coat, can promote a sense of moral outrage. A clothespin conspicuously placed on your nose upon entering the patient’s room can be quite effective. Simply placing your stethoscope in ice water for a few minutes before applying it to the patient’s bare chest can make a difference.

Other more innovative techniques might arise.  Charging the patient in cash for each individual medical intervention might be quite useful, emphasizing the magnitude of overcharging.  This would be made apparent to the patient who for instance might be asked to pay $40 cash on the barrelhead for a single aspirin pill.

Often the little things make a big difference – dropping a pile of aluminum food trays on the floor at 4 AM, clamping the Foley tube, purposely ignoring requests for a bedpan, or making the patient NPO for extended periods for no apparent reason can be quite effective. 

However, we fear that health care professionals may have difficulty overcoming their training to be responsive to patients. Therefore, we suggest a different strategy for national health care planners seeking to reduce costs and improve patient mortality: what we term the designated institutional offender (DIO). A DIO program, in which an employee is hired to offend patients, would likely be quite cost-effective. The DIO would not need expensive equipment or other resources. The DIO role is best suited for someone with minimal education and a provocative attitude. Only the most deficient and densest (as opposed to the best and brightest) should be hired.

Clearly, an authoritative group must be formed to establish guidelines and bundles for both the DIO and healthcare providers. We suggest formation of the Institute of Healthcare Irritation, or IHI. They could certify DIOs to ensure that the 7 habits of highly offensive people are used (3). IHI could also establish clinical practice bundles like the rudeness bundle, the physical discomfort bundle, the moral outrage bundle, etc.

We suggest the following as an example to measure compliance with the physical discomfort bundle. The patient must be documented to be experiencing:

  • Hunger
  • Thirst
  • Too cold (or too hot)
  • Sleep deprivation
  • Drug-related constipation
  • Inability to evacuate their bladder

Patient satisfaction with even a single component indicates failure of bundle compliance. Of course a cadre of personnel will need to be hired to ensure compliance with the bundles.

Based on the evidence from the Archives article, there was a 9.1% cost differential between the highest and the lowest satisfaction quartile. Shifting patients to lower satisfaction quartiles could result in huge cost savings. If the DIO and IHI strategies to offend are particularly effective, many patients will not return for health care at all, resulting in further savings. Targeting those who are the largest consumers of care could result in even larger savings.

The DIO and IHI would also save lives. Those patients in the highest satisfaction quartile had a 26% higher mortality rate than the lowest quartile. If patients who have poor self-related health and > 3 chronic diseases are excluded, the mortality rate is 44% higher in the highest satisfaction quartile.

Administrators could now be paid bonuses for not only compliance with the IHI bundles, but also lower patient satisfaction scores, since they can argue that lower satisfaction is actually good for patients. Furthermore, the administrators should receive higher compensation since the DIO and the personnel hired to ensure compliance with the IHI guidelines would be additional employees in their administrative chain of command and administrative salaries are often based on the number of employees they supervise.   

Richard A. Robbins, MD

Robert A. Raschke, MD

References

  1. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med 2012;172:405-11.
  2. Browne K, Roseman D, Shaller D, Edgman-Levitan S. Analysis & commentary. Measuring patient experience as a strategy for improving primary care. Health Aff (Millwood) 2010;29:921-5.
  3. Bing S. The seven habits of highly offensive people. Fortune Magazine. Available at: http://money.cnn.com/magazines/fortune/fortune_archive/1995/11/27/208025/index.htm (accessed 7-7-12).

Reference as: Robbins RA, Raschke RA. A new paradigm to improve patient outcomes: a tongue-in-cheek look at the cost of patient satisfaction. Southwest J Pulm Crit Care 2012;5:33-5.

Saturday
June 9, 2012

A Little Knowledge is a Dangerous Thing 

An article entitled “A Comprehensive Care Management Program to Prevent Chronic Obstructive Pulmonary Disease Hospitalizations: A Randomized, Controlled Trial” from the VA cooperative studies program was recently published in the Annals of Internal Medicine (1). This article describes the BREATH trial mentioned in a previous editorial (2). BREATH was a randomized, controlled, multi-center trial performed at 20 VA medical centers comparing an educational comprehensive care management program to guideline-based usual care for patients with chronic obstructive pulmonary disease (COPD). The intervention included COPD education during 4 individual and 1 group sessions, an action plan for identification and treatment of exacerbations, and scheduled proactive telephone calls for case management. After enrolling 426 (44%) of the planned total of 960 patients, the trial was stopped because there were 28 deaths from all causes in the intervention group versus 10 in the usual care group (hazard ratio, 3.00; 95% CI, 1.46 to 6.17; p = 0.002). Deaths due to COPD accounted for the largest difference (10 deaths in the intervention group versus 3 in usual care; hazard ratio, 3.60; 95% CI, 0.99 to 13.08). This trial led us to perform a meta-analysis of educational interventions in COPD (3). In this meta-analysis of 2476 subjects, we found no difference in mortality between intervention and usual care groups and found that the recent Annals study was heterogeneous compared with the other studies.

Should the recent VA study have been stopped early? Several reports demonstrate that studies stopped early usually overestimate treatment effects (4-7). Some have even suggested that stopping trials early is unethical (7). A number of articles suggest that trials should only be stopped if predetermined statistical parameters are exceeded, with the p value for stopping set at a very low level (4-7).  There was no planned interim analysis for any outcome in the recent VA trial. The rationale for stopping a study for an adverse effect when there is no a priori reasonable link between the intervention and the adverse effect is missing in this instance.  It seems unlikely that education would actually lead to increased deaths in COPD patients.  Any effect should logically have impacted the COPD related mortality, yet there was no significant increase for COPD related deaths in the intervention group. An accompanying editorial by Stuart Pocock makes most of these points and suggests that chance was the most likely cause of the excess deaths (8).

The VA Coop Trials coordinating center told the investigators that the reason for stopping the trial was that there were “significant adverse events” in the intervention group. Inquiries regarding what those adverse events were went unanswered. This would seem to be a breakdown in VA research oversight. The information provided to both investigators and research subjects was incomplete and would seem to be a violation of the informed consent, which states the subject would be notified of any new information that significantly altered their risk.

Lastly, investigators were repeatedly warned by the VA coordinating center that “all communications with the media should occur through your facility Public Affairs office”. It seems very unlikely that personnel in any public affairs office have sufficient research training to answer any medical, statistical or ethical inquiries into the conduct of this study.

In our meta-analysis we have shown that self-management education is associated with a reduction in hospital admissions with no indication for detrimental effects in other outcome parameters. This would seem sufficient to justify a recommendation of self-management education in COPD. However, due to variability in interventions, study populations, follow-up time, and outcome measures, data are still insufficient to formulate clear recommendations regarding the form and content of self-management education programs in COPD.

Richard A. Robbins, M.D.*

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. Fan VS, Gaziano JM, Lew R, et al. A comprehensive care management program to prevent chronic obstructive pulmonary disease hospitalizations: a randomized, controlled trial. Ann Intern Med 2012;156:673-683.
  2. Robbins RA. COPD, COOP and BREATH at the VA. Southwest J Pulm Crit Care 2011;2:27-28.
  3. Hurley J, Gerkin R, Fahy B, Robbins RA. Meta-analysis of self-management education for patients with chronic obstructive pulmonary disease. Southwest J Pulm Crit Care 2012;4:?-?.
  4. Pocock SJ, Hughes MD. Practical problems in interim analyses, with particular regard to estimation. Control Clin Trials 1989;10:209S-221S.
  5. Montori VM, Devereaux PJ, Adhikari NK, et al. Randomized trials stopped early for benefit: a systematic review. JAMA 2005;294:2203-9.
  6. Bassler D, Briel M, Montori VM, et al. Stopping randomized trials early for benefit and estimation of treatment effects: systematic review and meta-regression analysis. JAMA 2010;303:1180-7.
  7. Mueller PS, Montori VM, Bassler D, Koenig BA, Guyatt GH. Ethical issues in stopping randomized trials early because of apparent benefit. Ann Intern Med. 2007;146:878-81.
  8. Pocock SJ. Ethical dilemmas and malfunctions in clinical trials research. Ann Intern Med 2012;156:746-747.

*Dr. Robbins was an investigator and one of the co-authors of the Annals of Internal Medicine manuscript (reference #1).

Reference as: Robbins RA. A little knowledge is a dangerous thing. Southwest J Pulm Crit Care 2012;4:203-4.

Saturday, May 5, 2012

VA Administrators Gaming the System 

On 4-23-12 the Department of Veterans Affairs (VA) Office of Inspector General (OIG) issued a report on the accuracy of Veterans Health Administration (VHA) wait times for mental health services. The report found that “VHA does not have a reliable and accurate method of determining whether they are providing patients timely access to mental health care services. VHA did not provide first-time patients with timely mental health evaluations and existing patients often waited more than 14 days past their desired date of care for their treatment appointment. As a result, performance measures used to report patient’s access to mental health care do not depict the true picture of a patient’s waiting time to see a mental health provider.” (1). The OIG made several recommendations, and the VA administration quickly concurred with them. Only four days earlier, the VA had announced plans to hire 1,900 new mental health staff (2).

This sounded familiar, and a quick internet search revealed that about a year ago the United States Court of Appeals for the Ninth Circuit had issued a scathing ruling saying that the VA had failed to provide adequate mental health services to Veterans (3). A quick review of the Office of Inspector General’s website revealed multiple instances of similar findings dating back to at least 2002 (4-7). In each instance, unreliable data regarding wait times were cited, VA administration agreed, and no or inadequate action was taken.

Inadequate Numbers of Providers

One of the problems is that the VA employs inadequate numbers of physicians and nurses to care for its patients. In his “Prescription for Change”, Dr. Ken Kizer, then VA Undersecretary for Health, made bold changes to the VA system in the mid-1990s (8). Kizer cut the number of hospitals but also the number of clinicians while the number of patients increased (9). The result was a marked drop in the number of physicians and nurses per VA enrollee (Figure 1).

Figure 1. Nurses (squares) and physicians (diamonds) per 1000 VA enrollees for selected years (10,11).

These data are consistent with a 2011 VA survey that asked VA mental health professionals whether their medical center had adequate mental health staff to meet current Veteran demands for care; 71 percent responded no. According to the OIG, VHA’s greatest challenge has been hiring psychiatrists (1). Three of the four sites visited by the OIG had vacant psychiatry positions, and one site was trying to replace three psychiatrists who had left in the past year. This is despite psychiatry being one of the lowest-paid medical specialties (12). The VA already has about 1,500 vacancies in mental health specialties. This prompted Sen. Patty Murray, Chairman of the Senate Committee on Veterans' Affairs, to ask about the new positions, "How are you going to ensure that 1,600 positions ... don't become 1,600 vacancies?" (13).

Administrative Bonuses

A second problem, not identified by the OIG, is administrative bonuses. Since 1996, wait times have been one of the performance measures on which hospital administrators’ bonuses are based. According to the OIG, these numbers are unreliable and frequently “gamed” (1,4-7), including instances of VA supervisors directing staff to enter incorrect data that shortened reported wait times (4-7).

At a hearing before the Senate Committee on Veterans' Affairs, Linda Halliday from the VA OIG said, "They need a culture change. They need to hold facility directors accountable for integrity of the data." (13). The VA "greatly distorted" the waiting time for appointments, Halliday said, enabling the department to claim that 95 percent of first-time patients received an evaluation within 14 days when, in reality, fewer than half were seen in that time. Nicholas Tolentino, a former mental health administrative officer at the VA Medical Center in Manchester, N.H., told the committee that managers pressed the staff to see as many Veterans as possible while providing the most minimal services possible. "Ultimately, I could not continue to work at a facility where the well-being of our patients seemed secondary to making the numbers look good," he said.

Although the falsification of wait times has been known for years, there has been inadequate action to correct the practice, according to the VA OIG. Sen. Murray said the findings show a "rampant gaming of the system" (13). This should not be surprising. Clerical personnel who file the data have their evaluations, and in many cases their pay, determined by supervisors who benefit financially from reports of shorter wait times, and there appears to be no penalty for filing falsified data. If penalties did exist, it seems likely that the clerks or clinicians would be the ones to shoulder the blame.

The Current System is Ineffective

A repeated pattern of the OIG being called in to examine wait times, finding them false, and making recommendations, with the VA concurring and nothing being done, has been going on for years (1,3-7). Based on this experience, the VA will likely be unable to hire the numbers of clinicians needed, and wait times will continue to be unacceptably long but will be “gamed” to “make the numbers look good”. Pressure will be placed on the remaining clinicians to do more with less. Some will become frustrated and leave the VA. The administrators will continue to receive bonuses for inaccurately short wait times. If past events hold true, in 2-5 years another VA OIG report will be requested, and it will restate that the VA falsified the wait times. This will be followed by a brief outcry, but nothing will be done.

The VA OIG apparently has no real power, and the VA administrators have no real oversight. The VA OIG continues to recommend additional administrative oversight, which smacks of putting the fox in charge of the hen house. Furthermore, the ever-increasing number of administrators likely drains the clinical resources necessary to care for patients. Decreased clinical expenses have been shown to increase standardized mortality rates; in other words, hiring more administrators at the expense of clinicians likely contributes to excess deaths (14). Although this might seem obvious, when the decrease in physicians and nurses at the VA began in the mid-1990s, there was little questioning of the claim that the reduction was an “improvement” in care.

Traditional outcome measures such as mortality and morbidity are slow to change and difficult to measure. To demonstrate an “improvement” in care, outcome measures were therefore replaced with process measures, which assess how frequently an intervention is performed. The problem appears to be that poor process measures were chosen. These included many ineffective measures, such as vaccination of adult patients with the 23-valent pneumococcal polysaccharide vaccine and discharge instructions that include advice to quit smoking at hospital discharge (15). Many were based on opinion or poorly done trials and, when closely examined, were not associated with better outcomes. Most of the “improvement” appeared to occur in the performance of these ineffective measures. However, the measures appeared to be quite popular with the administrators who were paid bonuses for their performance.

Root Causes of the Problems

The root causes go back to Kizer’s Prescription for Change. The VA decreased the number of clinicians, especially specialists, while increasing the numbers of administrators and patients. The result is what we observe now. Specialists such as psychiatrists are in short supply. They were often replaced by a cadre of physician extenders more intent on satisfying a checklist of ineffective process measures than on providing real help to the patient. Waiting times lengthened, and the administrative solution was to cover up the problem by lying about the data.

VA medical centers are now usually run by administrators with no real medical experience. From the director down through the administrative chain of command, many are insufficiently medically trained to supervise a medical center. These administrators cannot be expected to make good administrative decisions, especially when clinicians have no meaningful input (10).

The present system is not transparent. My colleagues and I had to file a FOIA request to obtain the data on the numbers of physicians and nurses presented above. Even when data are known, their integrity may be called into question, as illustrated by the wait time data.

The falsification of the wait times illustrates the lack of effective oversight. VA administration appears to be the problem, and hiring more administrators who report to the same administrators, as the VA OIG suggests, will not solve it (3-7). What is needed is a system in which problems such as the alteration of wait times can be identified at the local level and quickly corrected.

Solutions to the Problems

The first and most important solution is to provide meaningful oversight at the local level by someone knowledgeable in healthcare. Currently, no system is in place to hold administrators accountable. Despite concurring with the multitude of VA OIG recommendations, the VA central office and the Veterans Integrated Service Networks have not been effective at correcting the problem of falsified data; in fact, their bonuses also depend on the data looking good. Locally, there exists a system of patient advocates and compliance officers, but they report to the same administrators they should be overseeing. The present system is not working. Therefore, I propose a new solution: the physician ombudsman. The ombudsman would answer to the VA OIG’s office, and the various compliance officers, patient advocates, etc. should be reassigned to work for the ombudsman rather than for the very people they should be scrutinizing.

The physician ombudsman should remain a part-time clinician, at a minimum of 20% time. Continued clinical work is important for maintaining local clinical knowledge and identifying falsified clinical data. One of the faults of the present VA OIG system is that when it investigates a complaint, it seems to have difficulty identifying the source of the problem (16). Local knowledge would likely help, and clinical experience would be invaluable. For example, it would be hard to claim that waiting times are short when the clinician ombudsman has difficulty referring a patient to a specialist at the VA or even booking a new or returning patient into his or her own clinic.

The overseeing ombudsman needs real oversight power; otherwise we have a repeat of the present system, in which problems are identified but nothing is done. Administrators should be privileged much as clinicians are, undergoing credentialing and review by the physician ombudsman’s office. Furthermore, the physician ombudsman should have the capacity to suspend administrative privileges and rescind decisions that are potentially dangerous. Cutting nurse staffing to dangerous levels in order to balance a budget might be one example of a situation in which an ombudsman could rescind the action.

The paying of administrative bonuses for clinical work done by clinicians should stop. Administrators do not have the medical training necessary to supervise clinicians and, furthermore, do nothing to improve efficiency or clinically benefit Veterans (14). The present system only encourages further expansion of an already bloated administration (17): administrators hire more administrators to reduce their workload, and because they now supervise more people, they argue for an increase in pay. If a bonus must be paid, why not pay for something over which administrators have real control, such as administrative efficiency (18)? Perhaps this would stop the spiraling administrative costs that have been occurring in healthcare (17).

These suggestions are only some of the steps that could be taken to end the chronic falsification of data by administrators with a financial conflict of interest. The present system appears ineffective and is unlikely to change in the absence of action from outside the VA. Otherwise, the cycle of the OIG being called in to examine wait times, noting that they are gamed, and nothing being done will simply repeat.

Richard A. Robbins, M.D.*

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. http://www.va.gov/oig/pubs/VAOIG-12-00900-168.pdf  (accessed 4-26-12).
  2. http://www.va.gov/opa/pressrel/pressrelease.cfm?id=2302 (accessed 4-26-12).
  3. http://www.ca9.uscourts.gov/datastore/opinions/2011/07/12/08-16728.pdf (accessed 4-26-12).
  4. http://www.va.gov/oig/52/reports/2003/VAOIG-02-02129-95.pdf (accessed 4-26-12).
  5. http://www.va.gov/oig/54/reports/VAOIG-05-03028-145.pdf (accessed 4-26-12).
  6. http://www.va.gov/oig/54/reports/VAOIG-05-03028-145.pdf (accessed 4-26-12).
  7. http://www.va.gov/oig/52/reports/2007/VAOIG-07-00616-199.pdf (accessed 4-26-12).
  8. www.va.gov/HEALTHPOLICYPLANNING/rxweb.pdf (accessed 4-26-12).
  9. http://veterans.house.gov/107th-congress-hearing-archives (accessed 3-18-12).
  10. Robbins RA. Profiles in medical courage: of mice, maggots and Steve Klotz. Southwest J Pulm Crit Care 2012;4:71-7.
  11. Robbins RA. Unpublished observations obtained from the Department of Veterans Affairs by FOIA request.
  12. http://www.medscape.com/features/slideshow/compensation/2012/psychiatry (accessed 4-26-12).
  13. http://seattletimes.nwsource.com/html/localnews/2018071724_mentalhealth26.html (accessed 4-26-12).
  14. Robbins RA, Gerkin R, Singarajah CU. Correlation between patient outcomes and clinical costs in the VA healthcare system. Southwest J Pulm Crit Care 2012;4:94-100.
  15. Robbins RA, Klotz SA. Quality of care in U.S. hospitals. N Engl J Med 2005;353:1860-1 [letter].
  16. Robbins RA. Mismanagement at the VA: where's the problem? Southwest J Pulm Crit Care 2011;3:151-3.
  17. Woolhandler S, Campbell T, Himmelstein DU. Health care administration in the United States and Canada: micromanagement, macro costs. Int J Health Serv 2004;34:65-78.
  18. Gao J, Moran E, Almenoff PL, Render ML, Campbell J, Jha AK. Variations in efficiency and the relationship to quality of care in the Veterans health system. Health Aff (Millwood) 2011;30:655-63.

*The author is a former VA physician who retired July 2, 2011 after 31 years.

The opinions expressed in this editorial are the opinions of the author and not necessarily the opinions of the Southwest Journal of Pulmonary and Critical Care or the Arizona Thoracic Society.

Reference as: Robbins RA. VA administrators gaming the system. Southwest J Pulm Crit Care 2012;4:149-54.