
 Editorials

Last 50 Editorials

(Most recent listed first. Click on title to be directed to the manuscript.)

A Call for Change in Healthcare Governance (Editorial & Comments)
The Decline in Professional Organization Growth Has Accompanied the Decline of Physician Influence on Healthcare
Hospitals, Aviation and Business
Healthcare Labor Unions-Has the Time Come?
Who Should Control Healthcare? 
Book Review: One Hundred Prayers: God's answer to prayer in a COVID ICU
One Example of Healthcare Misinformation
Doctor and Nurse Replacement
Combating Physician Moral Injury Requires a Change in Healthcare Governance
How Much Should Healthcare CEOs, Physicians and Nurses Be Paid?
Improving Quality in Healthcare 
Not All Dying Patients Are the Same
Medical School Faculty Have Been Propping Up Academic Medical Centers, But Now It's Squeezing Their Education and Research Bottom Lines
Deciding the Future of Healthcare Leadership: A Call for Undergraduate and Graduate Healthcare Administration Education
Time for a Change in Hospital Governance
Refunds If a Drug Doesn’t Work
Arizona Thoracic Society Supports Mandatory Vaccination of Healthcare Workers
Combating Morale Injury Caused by the COVID-19 Pandemic
The Best Laid Plans of Mice and Men
Clinical Care of COVID-19 Patients in a Front-line ICU
Why My Experience as a Patient Led Me to Join Osler’s Alliance
Correct Scoring of Hypopneas in Obstructive Sleep Apnea Reduces Cardiovascular Morbidity
Trump’s COVID-19 Case Exposes Inequalities in the Healthcare System
Lack of Natural Scientific Ability
What the COVID-19 Pandemic Should Teach Us
Improving Testing for COVID-19 for the Rural Southwestern American Indian Tribes
Does the BCG Vaccine Offer Any Protection Against Coronavirus Disease 2019?
2020 International Year of the Nurse and Midwife and International Nurses' Day
Who Should be Leading Healthcare for the COVID-19 Pandemic?
Why Complexity Persists in Medicine
Fatiga de enfermeras, el sueño y la salud, y garantizar la seguridad del paciente y del publico: Unir dos idiomas (Also in English)
CMS Rule Would Kick “Problematic” Doctors Out of Medicare/Medicaid
Not-For-Profit Price Gouging
Some Clinics Are More Equal than Others
Blue Shield of California Announces Help for Independent Doctors-A Warning
Medicare for All-Good Idea or Political Death?
What Will Happen with the Generic Drug Companies' Lawsuit: Lessons from the Tobacco Settlement
The Implications of Increasing Physician Hospital Employment
More Medical Science and Less Advertising
The Need for Improved ICU Severity Scoring
A Labor Day Warning
Keep Your Politics Out of My Practice
The Highest Paid Clerk
The VA Mission Act: Funding to Fail?
What the Supreme Court Ruling on Binding Arbitration May Mean to Healthcare
Kiss Up, Kick Down in Medicine 
What Does Shulkin’s Firing Mean for the VA? 
Guns, Suicide, COPD and Sleep
The Dangerous Airway: Reframing Airway Management in the Critically Ill 
Linking Performance Incentives to Ethical Practice 

 

For complete editorial listings click here.

The Southwest Journal of Pulmonary and Critical Care welcomes submission of editorials on journal content or on issues relevant to pulmonary, critical care, or sleep medicine. Authors are urged to contact the editor before submission.

---------------------------------------------------------------------------------------------

Entries in Joint Commission (8)

Monday, October 8, 2012

The Emperor Has No Clothes: The Accuracy of Hospital Performance Data  

Several studies dealing with performance measurement were announced within the past month. One was the Joint Commission on Accreditation of Healthcare Organizations (Joint Commission, JCAHO) 2012 annual report on Quality and Safety (1). This report includes the JCAHO’s “best” hospital list. Ten hospitals from Arizona and New Mexico made the 2012 list (Table 1).

Table 1. JCAHO list of “best” hospitals in Arizona and New Mexico for 2011 and 2012.

This compares to 2011 when only six hospitals from Arizona and New Mexico were listed. Notably underrepresented are the large urban and academic medical centers. A quick perusal of the entire list reveals that this is true for most of the US, despite larger and academic medical centers generally having better outcomes (2,3).

This raises the question of what criteria are used to measure quality. The JCAHO criteria are listed in Appendix 2 at the end of their report. The JCAHO criteria are not outcome-based but rather a series of surrogate markers. The Joint Commission calls its criteria “evidence-based,” and indeed some are, but some are not (2). Furthermore, many of the Joint Commission’s criteria are bundled; in other words, failure to comply with one criterion is the same as failing to comply with them all. They are also not weighted, i.e., each criterion is judged to be as important as every other. Pneumonia is an example where this might have an important effect on outcomes. Administering an appropriate antibiotic to a patient with pneumonia is clearly evidence-based. However, administering the 23-valent pneumococcal vaccine to adults is not effective (4-6). By the Joint Commission’s criteria, administering pneumococcal vaccine is just as important as choosing the right antibiotic, and failure to do either results in a judgment of noncompliance.
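
To make the difference concrete, the short Python sketch below contrasts an all-or-nothing bundled score with a weighted score. The criterion names and weights are invented for illustration only; the Joint Commission does not publish weights of this kind.

# Illustrative sketch only: criterion names and weights are hypothetical,
# not taken from any Joint Commission specification.
pneumonia_criteria = {
    "appropriate_antibiotic": 1.0,        # strongly evidence-based
    "blood_culture_before_antibiotic": 0.5,
    "pneumococcal_vaccine": 0.1,          # weakly supported in adults (refs 4-6)
}

def bundled_score(met):
    """All-or-nothing: missing any single criterion counts as total failure."""
    return 1 if all(met.values()) else 0

def weighted_score(met, weights):
    """Partial credit in proportion to each criterion's assumed importance."""
    return sum(weights[c] for c, ok in met.items() if ok) / sum(weights.values())

# A patient who received the correct antibiotic but no pneumococcal vaccine:
case = {"appropriate_antibiotic": True,
        "blood_culture_before_antibiotic": True,
        "pneumococcal_vaccine": False}

print(bundled_score(case))                                  # 0 -> "noncompliant"
print(round(weighted_score(case, pneumonia_criteria), 2))   # 0.94

Under bundling, the missed vaccination (a weakly supported measure) drags the case to zero just as surely as a wrong antibiotic would; a weighted score preserves the distinction.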

Previous studies have not shown that compliance with the JCAHO criteria improves outcomes (2,3). Examination of the US Health & Human Services Hospital Compare website is consistent with these results. None of the “best” hospitals in Arizona or New Mexico were better than the US average in readmissions, complications, or deaths (7).

A second announcement was the success of the Agency for Healthcare Research and Quality’s (AHRQ) program on central line-associated bloodstream infections (CLABSI) (8). According to the press release, the AHRQ program has prevented more than 2,000 CLABSIs, saving more than 500 lives and avoiding more than $34 million in healthcare costs. This is surprising since, with the possible exception of using chlorhexidine instead of betadine, the bundled criteria are not evidence-based and have not correlated with outcomes (9). Examination of the press release reveals that the reduction in mortality and the savings in healthcare costs were estimated from the hospital self-reported reduction in CLABSIs.
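
To see why such headline numbers rest entirely on the self-reported infection counts, consider how this kind of estimate is typically constructed: a per-infection attributable mortality and cost are simply multiplied by the reported reduction. The Python sketch below uses assumed per-case figures chosen only to reproduce the press release's arithmetic; they are not AHRQ's published inputs.

# Hypothetical per-case figures, chosen only so the arithmetic matches the
# press release; AHRQ's actual assumptions are not reproduced here.
reported_clabsis_prevented = 2000      # from hospital self-report
deaths_per_clabsi = 0.25               # assumed attributable mortality
cost_per_clabsi = 17_000               # assumed attributable cost in dollars

lives_saved = reported_clabsis_prevented * deaths_per_clabsi
costs_avoided = reported_clabsis_prevented * cost_per_clabsi

print(lives_saved)     # 500.0     -> "more than 500 lives"
print(costs_avoided)   # 34000000  -> "more than $34 million"

Nothing in this calculation is independently measured; if the self-reported CLABSI reduction is inflated, the lives and dollars are inflated by exactly the same factor.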

A clue to the potential source of these discrepancies came from an article published in the Annals of Internal Medicine by Meddings and colleagues (10). These authors studied urinary tract infections that were self-reported by hospitals using claims data. According to Meddings, the data were “inaccurate” and “are not valid data sets for comparing hospital acquired catheter-associated urinary tract infection rates for the purpose of public reporting or imposing financial incentives or penalties”. The authors propose that the nonpayment by Medicare for “reasonably preventable” hospital-acquired complications resulted in this discrepancy. There is no reason to assume that the data reported for CLABSI or ventilator-associated pneumonia (VAP) are any more accurate.

These and other healthcare data seem to follow a trend of bundling weakly evidence-based, non-patient-centered surrogate markers with legitimate performance measures. Under threat of financial penalty, hospitals are required to improve these surrogate markers, and not surprisingly, they do. The organization mandating compliance then reports how it has improved healthcare, saving both lives and money. These reports are often accompanied by estimates, but not measurements, of patient-centered outcomes such as mortality, morbidity, length of stay, readmission, or cost. The result is that there is no real effect on healthcare other than an increase in costs. Furthermore, there would seem to be little incentive to question the validity of the data: the organization that mandates the program would be politically embarrassed by an ineffective program, and the hospital would be financially penalized for honest reporting.

Improvement begins with the establishment of guidelines that are truly evidence-based and have a reasonable expectation of improving patient-centered outcomes. Surrogate markers should be replaced by patient-centered outcomes such as mortality, morbidity, length of stay, readmission, and/or cost. The recent "pay-for-performance" ACA provision on hospital readmissions that went into effect October 1 is a step in the right direction. The guidelines should not be bundled but rather weighted according to their importance. Lastly, the validity of the data needs to be independently confirmed, and penalties for systematically reporting fraudulent data should be severe. This approach is much more likely to result in improved, evidence-based healthcare than the present self-serving and inaccurate programs, which offer no benefit to patients.

Richard A. Robbins, MD*

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. Available at: http://www.jointcommission.org/assets/1/18/TJC_Annual_Report_2012.pdf (accessed 9/22/12).
  2. Robbins RA, Gerkin R, Singarajah CU. Relationship between the Veterans Healthcare Administration hospital performance measures and outcomes. Southwest J Pulm Crit Care 2011;3:92-133.
  3. Rosenthal GE, Harper DL, Quinn LM. Severity-adjusted mortality and length of stay in teaching and nonteaching hospitals. JAMA 1997;278:485-90.
  4. Fine MJ, Smith MA, Carson CA, Meffe F, Sankey SS, Weissfeld LA, Detsky AS, Kapoor WN. Efficacy of pneumococcal vaccination in adults. A meta-analysis of randomized controlled trials. Arch Int Med 1994;154:2666-77.
  5. Dear K, Holden J, Andrews R, Tatham D. Vaccines for preventing pneumococcal infection in adults. Cochrane Database Sys Rev 2003:CD000422.
  6. Huss A, Scott P, Stuck AE, Trotter C, Egger M. Efficacy of pneumococcal vaccination in adults: a meta-analysis. CMAJ 2009;180:48-58.
  7. http://www.hospitalcompare.hhs.gov/ (accessed 9/22/12).
  8. http://www.ahrq.gov/news/press/pr2012/pspclabsipr.htm (accessed 9/22/12).
  9. Hurley J, Garciaorr R, Luedy H, Jivcu C, Wissa E, Jewell J, Whiting T, Gerkin R, Singarajah CU, Robbins RA. Correlation of compliance with central line associated blood stream infection guidelines and outcomes: a review of the evidence. Southwest J Pulm Crit Care 2012;4:163-73.
  10. Meddings JA, Reichert H, Rogers MA, Saint S, Stephansky J, McMahon LF. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med 2012;157:305-12.

*The views expressed are those of the author and do not necessarily represent the views of the Arizona or New Mexico Thoracic Societies.

Reference as: Robbins RA. The emperor has no clothes: the accuracy of hospital performance data. Southwest J Pulm Crit Care 2012;5:203-5. PDF

Tuesday, November 1, 2011

Why Is It So Difficult to Get Rid of Bad Guidelines? 

Reference as: Robbins RA. Why is it so difficult to get rid of bad guidelines? Southwest J Pulm Crit Care 2011;3:141-3. (Click here for a PDF version of the editorial)

My colleagues and I recently published a manuscript in the Southwest Journal of Pulmonary and Critical Care examining compliance with the Joint Commission on Accreditation of Healthcare Organizations (Joint Commission, JCAHO) guidelines (1). Compliance with the Joint Commission’s acute myocardial infarction, congestive heart failure, pneumonia, and surgical process of care measures had no correlation with traditional outcome measures, including mortality rates, morbidity rates, length of stay, and readmission rates. In other words, increased compliance with the guidelines was ineffectual at improving patient-centered outcomes. Most would agree that ineffectual outcomes are bad. The data were obtained from the Veterans Healthcare Administration Quality and Safety Report and included 485,774 acute medical/surgical discharges in 2009 (2). These data are similar to the Joint Commission’s own data published in 2005, which showed no correlation between guideline compliance and hospital mortality, and to a number of other publications that have failed to show a correlation between the Joint Commission’s guidelines and patient-centered outcomes (3-8). As we pointed out in 2005, the lack of correlation is not surprising since several of the guidelines are not evidence-based and improvement in performance has usually been because of increased compliance with these non-evidence-based guidelines (1,9).
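
For readers unfamiliar with how such a null result is obtained, the analysis amounts to correlating hospital-level compliance rates with hospital-level outcomes. The Python sketch below shows the general approach; the numbers are invented for illustration and are not the values analyzed in reference 1.

# Toy data, one value per hospital; figures are invented for illustration,
# not taken from the VA Quality and Safety Report.
from scipy.stats import spearmanr

compliance = [0.88, 0.92, 0.95, 0.97, 0.99]        # e.g., pneumonia measure compliance
mortality = [0.031, 0.034, 0.029, 0.035, 0.030]    # risk-adjusted mortality

rho, p_value = spearmanr(compliance, mortality)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
# A rho near zero with a non-significant p-value is the pattern described here:
# higher compliance with the measures, no detectable improvement in outcomes.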

The above raises the question: if some of the guidelines are not evidence-based and do not seem to have any benefit for patients, why do they persist? We believe that many of the guidelines were formulated to be easy and cheap to measure and implement, and perhaps more importantly, easy to demonstrate an improvement in compliance. In other words, the guidelines are initiated more to create the perception of an improvement in healthcare than an actual improvement. For example, in the pneumonia guidelines, one of the performance measures that has markedly improved is administration of pneumococcal vaccine. Pneumococcal vaccine is easy and cheap to administer once every 5 years to adult patients, despite the evidence that it is ineffective (10). In contrast, it is probably not cheap and certainly not easy to improve pneumonia mortality rates, morbidity rates, length of stay, and readmission rates.

To understand why these ineffectual guidelines persist, one needs to understand who benefits from guideline implementation and compliance. First, organizations that formulate the guidelines, such as the Joint Commission, benefit. Implementing a program that the Joint Commission can claim shows an improvement in healthcare is self-serving, but implementing a program that provides no benefit would be politically devastating. At a time when some hospitals are opting out of Joint Commission certification, and when the Joint Commission is under pressure from competing regulatory organizations, the Joint Commission needs to show that its programs produce positive results.

Second, programs to ensure compliance with the guidelines directly employ an increasingly large number of personnel within a hospital. At the last VA hospital where I was employed, 26 full-time personnel worked in quality assurance. Since compliance with guidelines to a large extent accounts for their employment, the quality assurance nurses would seem to have little incentive to question whether these guidelines really result in improved healthcare. Rather, their job is to ensure guideline compliance from both hospital employees and nonemployees who practice within the hospital.

Lastly, the administrators within a hospital have several incentives to preserve the guideline status quo. Administrators are often paid bonuses for ensuring guideline compliance. In addition to this direct financial incentive, administrators can often lobby for increases in pay since, with the increased number of personnel employed to ensure guideline compliance, the administrators now supervise more employees, an important factor in determining their salary. Furthermore, success in improving compliance allows administrators to advertise both themselves and their hospital as “outstanding”.

In addition, guidelines allow administrative personnel to direct patient care and indirectly control clinical personnel. Many clinical personnel feel uneasy when confronted with "evidence-based" protocols and guidelines when they are clearly not “evidence-based”. Such discomfort is likely to be more intense when the goals are not simply to recommend a particular approach but to judge failure to comply as evidence of substandard or unsafe care. Reporting a physician or a nurse for substandard care to a licensing board or on a performance evaluation may have devastating consequences.

There appears to be a discrepancy between an “outstanding” hospital as determined by the Joint Commission guidelines and as determined by other organizations. Many hospitals that were recognized as top hospitals by US News & World Report, HealthGrades Top 50 Hospitals, or Thomson Reuters Top Cardiovascular Hospitals were not included in the Joint Commission list. Absent are the Mayo Clinic, the Cleveland Clinic, Johns Hopkins University, Stanford University Medical Center, and Massachusetts General. Academic medical centers, for the most part, were noticeably absent. There were no hospitals listed in New York City, none in Baltimore, and only one in Chicago. Small community hospitals were overrepresented and large academic medical centers were underrepresented in the report. However, consistent with previous reports, we found that larger, predominantly urban, academic hospitals had better all-cause mortality, surgical mortality, and surgical morbidity compared with small, rural hospitals (1).

Despite the above, I support both guidelines and performance measures, but only if they clearly result in improved patient-centered outcomes. Formulating guidelines where the only measure of success is compliance with the guideline should be discouraged. We find it particularly disturbing that we can easily find a hospital’s compliance with a Joint Commission guideline but have difficulty finding the hospital’s standardized mortality rates, morbidity rates, length of stay, and readmission rates, measures that are meaningful to most patients. The Joint Commission needs to develop better measures to determine hospital performance. Until then, the “quality” measures need to be viewed for what they are: meaningless measures that do not serve patients but rather those who benefit from their implementation and compliance.

Richard A. Robbins, M.D.

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. Robbins RA, Gerkin R, Singarajah CU. Relationship between the veterans healthcare administration hospital performance measures and outcomes. Southwest J Pulm Crit Care 2011;3:92-133.
  2. Available at: http://www.va.gov/health/docs/HospitalReportCard2010.pdf (accessed 9-28-11).
  3. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med. 2005;353:255-64.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA 2006;296:2694-702.
  5. Peterson ED, Roe MT, Mulgund J, DeLong ER, Lytle BL, Brindis RG, Smith SC Jr, Pollack CV Jr, Newby LK, Harrington RA, Gibler WB, Ohman EM. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA 2006;295:1912-20.
  6. Fonarow GC, Yancy CW, Heywood JT; ADHERE Scientific Advisory Committee, Study Group, and Investigators. Adherence to heart failure quality-of-care indicators in US hospitals: analysis of the ADHERE Registry. Arch Int Med 2005;165:1469-77.
  7. Wachter RM, Flanders SA, Fee C, Pronovost PJ. Public reporting of antibiotic timing in patients with pneumonia: lessons from a flawed performance measure. Ann Intern Med 2008;149:29-32.
  8. Stulberg JJ, Delaney CP, Neuhauser DV, Aron DC, Fu P, Koroukian SM.  Adherence to surgical care improvement project measures and the association with postoperative infections. JAMA. 2010;303:2479-85.
  9. Robbins RA, Klotz SA. Quality of care in U.S. hospitals. N Engl J Med. 2005;353:1860-1.
  10. Padrnos L, Bui T, Pattee JJ, Whitmore EJ, Iqbal M, Lee S, Singarajah CU, Robbins RA. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care 2011;3:40-8.

The opinions expressed in this editorial are the opinions of the author and not necessarily the opinions of the Southwest Journal of Pulmonary and Critical Care or the Arizona Thoracic Society.

Tuesday, June 28, 2011

The Pain of the Timeout

Reference as: Robbins RA. The pain of the timeout. Southwest J Pulm Crit Care 2011;2:102-5. (Click here for a PDF version)

An article in the Washington Post entitled “The Pain of Wrong Site Surgery” (1) caught my eye earlier this month. In 2004 the Joint Commission on Accreditation of Healthcare Organizations (Joint Commission or JCAHO), prompted by media reports of wrong-site surgery, mandated the “universal protocol” or surgical timeout. These rules require preoperative verification of the correct patient and correct site, marking of the surgical site, and a timeout to confirm everything just before the procedure starts. In announcing the rules, Dr. Dennis O’Leary, then president of the Joint Commission, stated “This is not quite ‘Dick and Jane,’ but it’s pretty close,” and that the rules were “very simple stuff” to prevent events such as wrong-site or wrong-patient surgery, which are so egregious and avoidable that they should be “never events” because they should never happen. In the following years, additional components have been added to the timeout, and it has been extended to cover most procedures in the hospital.

However, the article goes on to state that “some researchers and patient safety experts say the problem of wrong-site surgery has not improved and may be getting worse, although spotty reporting makes conclusions difficult”. In 2009, 93 cases were reported to the Joint Commission, compared with 49 in 2004. Furthermore, the article states that data from Minnesota and Pennsylvania, two states that require reporting, have not shown a decrease over the past few years.

The reason for the increasing incidence of wrong-site or wrong-patient operations is not totally clear. Dr. Mark Chassin, who replaced O’Leary as president of the Joint Commission in 2008, said he thinks such errors are growing in part because of increased time pressures. Preventing wrong-site surgery also “turns out to be more complicated to eradicate than anybody thought,” he said, because it involves changing the culture of hospitals and getting doctors (who typically prize their autonomy, resist checklists, and underestimate their propensity for error) to follow standardized procedures and work in teams. Dr. Peter Pronovost, medical director of the Johns Hopkins Center for Innovation in Quality Patient Care, echoed those sentiments by suggesting that doctors only pay lip service to the rules. Studies of wrong-site errors have consistently revealed a failure by physicians to participate in a timeout. Dr. Ken Kizer, former Undersecretary at the Department of Veterans Affairs and President of the National Quality Forum, advocates reporting doctors to a federal agency so that wrong-site or wrong-patient cases can be investigated and the results publicly reported.

Several points made in the article need to be clarified:

  1. The reason it is unclear whether the present Joint Commission mandates actually prevent wrong-site or wrong-patient surgery is that no data were systematically collected prior to implementation of the timeout to ensure that it works, and no data have been collected since implementation. As with most bureaucracies, the Joint Commission’s emphasis has been on ensuring compliance rather than on studying the effectiveness of the intervention.
  2. Although no one condones wrong-site or wrong-patient surgery, it is fortunately relatively rare. Stahel et al. (2) reported 132 wrong-site and wrong-patient cases over a six-and-a-half-year period among over 5,000 physicians. They found only one death, which was attributed to a wrong-sided chest tube placement for respiratory failure (2). This is questionable because a wrong-sided chest tube does not necessarily result in a patient’s death (3). Another 43 patients had significant harm from their wrong-site or wrong-patient procedure and are listed below (Table 1).
  3. Based on the above, these wrong-site or wrong-patient operations appear to occur mostly in the operating room. The surgeon often enters the operating room after the patient is under general anesthesia, prepped and draped. Unless the surgeon saw the patient in the operating room prior to anesthesia and marked the operative site, it would not be possible for the surgeon to know that the correct site and patient are present. The article does not state how many of the reported operations had a timeout or whether the surgeon marked the operative site, but it implies that few did. The first author of the manuscript, Philip Stahel, an orthopedic surgeon from the University of Colorado, explained the results by stating that “many doctors resent the rules, even though orthopedists have a 25 percent chance of making a wrong-site error during their career….” Dr. John R. Clarke, a professor of surgery at Drexel University College of Medicine and clinical director of the Pennsylvania Patient Safety Authority, agreed, stating, “There’s a big difference between hospitals that take care of patients and those that take care of doctors…The staff needs to believe the hospital will back them against even the biggest surgeon.”
  4. Dr. Peter Pronovost extends this sentiment by stating “Health care has far too little accountability for results. . . . All the pressures are on the side of production; that’s how you get paid.” He adds that increased pressure to turn over operating rooms quickly has trumped patient safety, increasing the chance of error.

I would offer some suggestions:

  1. Focus should be on the operating room since this is where most of the wrong-site or wrong-patient procedures occur. I’m frustrated by the unnecessary timeouts that occur during bronchoscopy. For example, when the patient is known to me, enters the bronchoscopy suite awake and alert, and the biopsies are done under direct vision or under fluoroscopic or CT guidance, there is no real chance of wrong-site or wrong-patient surgery. Such procedures do not need a timeout. The Joint Commission needs to recognize this and stop its “one size fits all” approach.
  2. What is needed is data. Right now it is unclear whether a timeout makes any difference. A scientifically valid study of the timeout procedure is needed, not observational studies designed only to generate political statistics showing that a timeout works. The Joint Commission and other regulatory healthcare organizations need to break the habit of mandating interventions based on little or no evidence.
  3. The Joint Commission mandates have apparently had little impact on reducing wrong-site or wrong-patient operations, so further mandates would seem to offer little hope. If, as Dr. Chassin believes, time is the issue, adding more items to a checklist is unlikely to improve the problem and will probably make it worse.
  4. If time is the culprit in the operating room, then simplifying the process as much as possible might be useful. I have been told of one operating room in Phoenix where a timeout is so extensive that it can take up to 30 minutes. Marking of the site by the surgeon should be mandatory, and a simplified, standardized checklist, read and confirmed by the nurse, anesthesiologist, and/or surgeon, should both streamline the timeout and enhance data collection.
  5. I would agree with both Pronovost and Kizer that accountability needs to be present. However, Kizer’s idea of a federal repository may be ineffectual at improving outcomes. Witness the National Practitioner Data Bank, which has done nothing to improve healthcare and blames only physicians for lapses in care. It would seem that many of the physicians quoted above do the same, i.e., blame only the doctors. Dr. Chassin suggests a team approach to medicine, i.e., an operating room team. I agree, but it seems inconsistent to invoke a team approach and hold only the physicians accountable. Instead, I would suggest a mandatory reporting system with a free, transparent, and searchable database available to everyone. This database should report not only the surgeon(s) but everyone else in the operating room. Hospitals also need to be identified so that they cannot deflect their accountability by blaming surgeons while emphasizing operating room turnaround over patient safety. This means not only the hospital but also the CEO or administrator needs to accept some responsibility. The CEO or administrator controls the finances and often touts their “accountability”; it is time to put some teeth to that claim. Such a transparent database will not only allow patients to check on surgeons but also on hospitals, nurses, and anesthesiologists. Furthermore, it will allow healthcare providers to check on each other as well as on substandard hospitals and their administrators. A rough sketch of what one record in such a registry might contain follows this list.
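
As an illustration only, here is a minimal Python sketch of one record in such a registry; every field name is hypothetical, and no such federal schema currently exists.

# Hypothetical record structure for a public wrong-site/wrong-patient event
# registry; the field names are invented for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class WrongSiteEvent:
    event_date: str                  # e.g., "2011-06-28"
    hospital: str                    # the facility is identified, not just the surgeon
    hospital_administrator: str      # administrative accountability
    procedure: str
    error_type: str                  # "wrong site", "wrong patient", or "wrong procedure"
    timeout_performed: bool
    site_marked_by_surgeon: bool
    operating_room_staff: List[str] = field(default_factory=list)  # surgeon, nurse, anesthesiologist, etc.
    harm: str = "unknown"            # e.g., "none", "significant", "death"

# A public, searchable registry is then simply queries over these records,
# e.g., all events at a given hospital or involving a given surgeon.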

Richard A. Robbins, M.D.

Editor, Southwest Journal of Pulmonary and Critical Care

References

  1. Boodman SG. The pain of wrong site surgery. Washington Post. June 20, 2011. Available at: http://www.washingtonpost.com/national/the-pain-of-wrong-site-surgery/2011/06/07/AGK3uLdH_story.html (accessed 6-21-11).
  2. Stahel PF, Sabel AL, Victoroff MS, Varnell J, Lembitz A, Boyle DJ, Clarke TJ, Smith WR, Mehler PS. Wrong-site and wrong-patient procedures in the universal protocol era: analysis of a prospective database of physician self-reported occurrences. Arch Surg 2010;145:978-84.
  3. Singarajah C, Park K. A case of mislabeled identity. Southwest J Pulm Crit Care 2010;1:22-27.

The opinions expressed in this editorial are the opinions of the author and not necessarily the opinions of the Southwest Journal of Pulmonary and Critical Care or the Arizona Thoracic Society.
