
General Medicine

(Most recent listed first. Click on title to be directed to the manuscript.)

Infectious Diseases Telemedicine to the Arizona Department of Corrections
   During SARS-CoV-2 Pandemic. A Short Report.
The Potential Dangers of Quality Assurance, Physician Credentialing and
   Solutions for Their Improvement (Review)
Results of the SWJPCC Healthcare Survey
Who Are the Medically Poor and Who Will Care for Them?
Tacrolimus-Associated Diabetic Ketoacidosis: A Case Report and Literature 
   Review
Nursing Magnet Hospitals Have Better CMS Hospital Compare Ratings
Publish or Perish: Tools for Survival
Is Quality of Healthcare Improving in the US?
Survey Shows Support for the Hospital Executive Compensation Act
The Disruptive Administrator: Tread with Care
A Qualitative Systematic Review of the Professionalization of the 
   Vice Chair for Education
Nurse Practitioners' Substitution for Physicians
National Health Expenditures: The Past, Present, Future and Solutions
Credibility and (Dis)Use of Feedback to Inform Teaching: A Qualitative
   Case Study of Physician-Faculty Perspectives
Special Article: Physician Burnout-The Experience of Three Physicians
Brief Review: Dangers of the Electronic Medical Record
Finding a Mentor: The Complete Examination of an Online Academic 
   Matchmaking Tool for Physician-Faculty
Make Your Own Mistakes
Professionalism: Capacity, Empathy, Humility and Overall Attitude
Professionalism: Secondary Goals 
Professionalism: Definition and Qualities
Professionalism: Introduction
The Unfulfilled Promise of the Quality Movement
A Comparison Between Hospital Rankings and Outcomes Data
Profiles in Medical Courage: John Snow and the Courage of
   Conviction
Comparisons between Medicare Mortality, Readmission and
   Complications
In Vitro Versus In Vivo Culture Sensitivities:
   An Unchecked Assumption?
Profiles in Medical Courage: Thomas Kummet and the Courage to
   Fight Bureaucracy
Profiles in Medical Courage: The Courage to Serve
   and Jamie Garcia
Profiles in Medical Courage: Women’s Rights and Sima Samar
Profiles in Medical Courage: Causation and Austin Bradford Hill
Profiles in Medical Courage: Evidence-Based
   Medicine and Archie Cochrane
Profiles in Medical Courage: The Courage to Experiment and
   Barry Marshall
Profiles in Medical Courage: Joseph Goldberger,
   the Sharecropper’s Plague, Science and Prejudice
Profiles in Medical Courage: Peter Wilmshurst,
   the Physician Fugitive
Correlation between Patient Outcomes and Clinical Costs
   in the VA Healthcare System
Profiles in Medical Courage: Of Mice, Maggots 
   and Steve Klotz
Profiles in Medical Courage: Michael Wilkins
   and the Willowbrook School
Relationship Between The Veterans Healthcare Administration
   Hospital Performance Measures And Outcomes 

Although the Southwest Journal of Pulmonary and Critical Care was started as a pulmonary/critical care/sleep journal, we have received and continue to receive submissions of general medical interest. For this reason, a new section entitled General Medicine was created on 3/14/12. Some articles were moved from the pulmonary section because they fit better into this category.

-------------------------------------------------------------------------------------

Entries in Centers for Medicare and Medicaid (2)

Saturday, November 4, 2017

Nursing Magnet Hospitals Have Better CMS Hospital Compare Ratings

Richard A. Robbins, MD

Phoenix Pulmonary and Critical Care Research and Education Foundation

Gilbert, AZ USA

Abstract

Background: Data on whether Nursing Magnet Hospitals (NMH) provide better care have been conflicting.

Methods: NMH in the Southwest USA (Arizona, California, Colorado, Hawaii, Nevada, and New Mexico) were compared to hospitals not designated as NMH using the Centers for Medicare and Medicaid Services (CMS) Hospital Compare star ratings.

Results: NMH had higher star ratings than non-NMH hospitals (3.34 ± 0.78 vs. 2.86 ± 0.83, p<0.001). The NMH were mostly large, urban, non-critical-access hospitals. Academic medical centers made up a disproportionately large portion of the NMH.

Conclusions: Although NMH had higher hospital ratings, the data may favor non-critical-access academic medical centers, which are known to have better outcomes.

Introduction

Magnet status is awarded by the American Nurses Credentialing Center (ANCC), a part of the American Nurses Association (ANA), to hospitals that meet a set of criteria designed to measure nursing quality. The Magnet designation program was based on a 1983 ANA survey of 163 hospitals, deriving its key principles from the hospitals with the best nursing performance. The prime intention was to help hospitals and healthcare facilities attract and retain top nursing talent.

There is no consensus whether Magnet status has an impact on nurse retention or on clinical outcomes. Kelly et al. (1) found that NMH provide better work environments and a more highly educated nursing workforce than non-NMH. In contrast, Trinkoff et al. (2) found no significant difference in working conditions between NMH and non-NMH. To further confuse the picture, Goode et al. (3) reported that NMH generally had poorer outcomes.

The Centers for Medicare and Medicaid Services (CMS) has developed star ratings in an attempt to measure quality of care (4). The ratings are based on five broad categories: 1. Outcomes; 2. Intermediate Outcomes; 3. Patient Experience; 4. Access; and 5. Process. Outcomes and intermediate outcomes are weighted three times as much as process measures, and patient experience and access measures are weighted 1.5 times as much as process measures. The ratings range from 1 to 5 stars, with more stars indicating higher quality.
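
To make the weighting concrete, the sketch below computes a weighted average of per-category scores using the 3x/1.5x/1x weights described above. It is illustrative only: the category keys and example scores are assumptions, and CMS's actual algorithm standardizes individual measures rather than averaging whole-category scores.

```python
# Minimal sketch of the category weighting described above (illustrative only).
CATEGORY_WEIGHTS = {
    "outcomes": 3.0,
    "intermediate_outcomes": 3.0,
    "patient_experience": 1.5,
    "access": 1.5,
    "process": 1.0,
}

def weighted_star_rating(scores):
    """Weighted average of per-category scores, each on a 1-5 scale."""
    total_weight = sum(CATEGORY_WEIGHTS[c] for c in scores)
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in scores.items()) / total_weight

# Hypothetical hospital: strong outcomes, weaker process measures.
example = {"outcomes": 4.0, "intermediate_outcomes": 3.5,
           "patient_experience": 3.0, "access": 3.0, "process": 2.5}
print(round(weighted_star_rating(example), 2))  # 3.4
```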

This study compares the CMS star ratings between NMH and non-NMH in the Southwest USA (Arizona, California, Colorado, Hawaii, Nevada and New Mexico). The results demonstrate that NMH have higher CMS star ratings. However, the NMH have characteristics which have been previously associated with higher quality of care using some measures.

Methods

Nursing Magnet Hospitals

NMH were identified from The American Nurses Credentialing Center website (5).

CMS Star Ratings

Star ratings were obtained from the CMS website (4).

Statistics

Hospitals were included only when both NMH status and a CMS star rating were available. Data were expressed as mean ± standard deviation. NMH and non-NMH were compared using Student’s t test. Significance was defined as p<0.05.
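
A minimal sketch of this comparison, assuming the star ratings for each group are available as simple lists (the values below are hypothetical; the real data came from the CMS website). scipy's ttest_ind implements the two-sample Student's t test used here.

```python
import statistics
from scipy import stats

# Hypothetical star ratings for each group.
nmh_stars = [4, 3, 4, 3, 2, 4, 3]
non_nmh_stars = [3, 2, 3, 2, 4, 2, 3, 2]

# Mean ± standard deviation, as reported in the Results.
print(f"NMH: {statistics.mean(nmh_stars):.2f} ± {statistics.stdev(nmh_stars):.2f}")

# Two-sample Student's t test; significant if p < 0.05.
t_stat, p_value = stats.ttest_ind(nmh_stars, non_nmh_stars)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```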

Results

Hospital Characteristics

There were 44 NMH and 415 non-NMH hospitals in the data (see Appendix). California had the most hospitals (287) and the most NMH (28). Arizona had 8 NMH, Colorado 7 and Hawaii 1. Nevada and New Mexico had none. All the NMH were acute care hospitals located in major metropolitan areas. Most were larger hospitals. None were designated critical access hospitals by CMS. Eleven of the NMH were the primary teaching hospitals for medical schools. Many of the others had affiliated teaching programs.

CMS Star Ratings

The CMS star ratings were higher for NMH than non-NMH (3.34 ± 0.78 vs. 2.86 ± 0.83, p<0.001, Figure 1).

Figure 1. CMS star ratings for Nurse Magnet Hospitals (NMH) and non-NMH (p<0.001).

Discussion

The present study shows that, for hospitals in the Southwest, NMH had higher CMS star ratings than non-NMH. This is consistent with better levels of care in NMH than non-NMH. However, the NMH were large, urban, non-critical-access medical centers and were disproportionately academic medical centers. Previous studies have shown that these hospitals have better outcomes (6,7).

There seems to be little consensus in the literature regarding patient outcomes in NMH. A 2010 study concluded that non-NMH actually had better patient outcomes than NMH (3). Similarly, studies published early in this decade suggested little difference in outcomes (1,2). In contrast, a more recent study suggested improvements in patient outcomes in NMH (8). The present study supports the concept that NMH status might be a marker for better patient outcomes.

Achieving NMH status is expensive. Hospitals pay about $2 million for initial NMH certification and nearly the same amount for re-certification every 4 years. It seems unlikely that small rural hospitals could afford to achieve and maintain NMH status regardless of their quality of care. Therefore, NMH would be expected to be larger, urban medical centers, which is what the present study found.

Although NMH status is not directly linked to reimbursement, a study by the Robert Wood Johnson Foundation suggests that achieving it increases hospital revenue (9). On average, NMH received an adjusted net increase in inpatient income of about $104 to $127 per discharge after earning Magnet status, amounting to about $1.2 million in revenue each year. The reason(s) for the improvement in hospital fiscal status are unclear.

Measuring quality of care is quite complex. The CMS star ratings attempt to summarize the quality of care using 5 broad categories: 1. Outcomes; 2. Intermediate Outcomes; 3. Patient Experience; 4. Access; and 5. Process. There are up to 32 measures in each category. Outcomes, patient experience and access seem relatively straightforward. An example of an intermediate outcome is control of blood pressure, because of its link to outcomes. Examples of process measures include colorectal cancer screening, annual flu shots and monitoring physical activity. To further complicate the CMS ratings, each category is weighted.

It is possible that the CMS star ratings might miss or underweight a key element in quality of care. For example, Needleman et al. (10) have emphasized that increased registered nurse staffing reduces hospital mortality. However, a 2011 study concluded that NMH had less total staff and a lower RN skill mix compared with non-NMH, contributing to poorer outcomes (3).

The present study supports the concept that achieving NMH status is associated with better care as defined by CMS. However, given the complexities of measuring quality of care it is unclear whether this represents a marker of better hospitals or if the process of achieving NMH leads to better care.

References

  1. Kelly LA, McHugh MD, Aiken LH. Nurse outcomes in Magnet® and non-Magnet hospitals. J Nurs Adm. 2012 Oct;42(10 Suppl):S44-9. [PubMed]
  2. Trinkoff AM, Johantgen M, Storr CL, Han K, Liang Y, Gurses AP, Hopkinson S. A comparison of working conditions among nurses in Magnet and non-Magnet hospitals. J Nurs Adm. 2010 Jul-Aug;40(7-8):309-15. [CrossRef] [PubMed]
  3. Goode CJ, Blegen MA, Park SH, Vaughn T, Spetz J. Comparison of patient outcomes in Magnet® and non-Magnet hospitals. J Nurs Adm. 2011 Dec;41(12):517-23. [CrossRef] [PubMed]
  4. Centers for Medicare and Medicaid. 2017 star ratings. Available at: https://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2016-Fact-sheets-items/2016-10-12.html (accessed 10/15/17).
  5. The American Nurses Credentialing Center. ANCC List of Magnet® Recognized Hospitals. Available at: http://www.clinicalmanagementconsultants.com/ancc-list-of-magnet-recognized-hospitals--cid-4457.html (accessed 10/15/17).
  6. Burke LG, Frakt AB, Khullar D, Orav EJ, Jha AK. Association Between Teaching Status and Mortality in US Hospitals. JAMA. 2017 May 23;317(20):2105-13. [CrossRef] [PubMed]
  7. Joynt KE, Harris Y, Orav EJ, Jha AK. Quality of care and patient outcomes in critical access rural hospitals. JAMA. 2011 Jul 6;306(1):45-52. [CrossRef] [PubMed]
  8. Friese CR, Xia R, Ghaferi A, Birkmeyer JD, Banerjee M. Hospitals in 'Magnet' program show better patient outcomes on mortality measures compared to non-'Magnet' hospitals. Health Aff (Millwood). 2015 Jun;34(6):986-92. [CrossRef] [PubMed]
  9. Jayawardhana J, Welton JM, Lindrooth RC. Is there a business case for magnet hospitals? Estimates of the cost and revenue implications of becoming a magnet. Med Care. 2014 May;52(5):400-6. [CrossRef] [PubMed]
  10. Needleman J, Buerhaus P, Pankratz VS, Leibson CL, Stevens SR, Harris M. Nurse staffing and inpatient hospital mortality. N Engl J Med. 2011 Mar 17;364(11):1037-45. [CrossRef] [PubMed]

Cite as: Robbins RA. Nursing magnet hospitals have better CMS hospital compare ratings. Southwest J Pulm Crit Care. 2017;15(5):209-13. doi: https://doi.org/10.13175/swjpcc128-17 PDF 

Monday, September 23, 2013

A Comparison Between Hospital Rankings and Outcomes Data

Richard A. Robbins, MD*

Richard D. Gerkin, MD  


*Phoenix Pulmonary and Critical Care Research and Education Foundation, Gilbert, AZ

Banner Good Samaritan Medical Center, Phoenix, AZ


Abstract

Hospital rankings have become common, but the agreement between the rankings and their correlation with patient-centered outcomes remain unknown. We examined the ratings of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), Leapfrog, and US News and World Report (USNews), and outcomes from Centers for Medicare and Medicaid Services (CMS) Hospital Compare, for agreement and correlation. There was some correlation among the three “best hospitals” ratings. There was also some correlation between “best hospitals” and CMS outcomes, but often in a negative direction. These data suggest that no one “best hospital” list identifies hospitals that consistently attain better outcomes.

Introduction

Hospital rankings are being published by a variety of organizations. These rankings are used by hospitals to market the quality of their services. Although all the rankings hope to identify “best” hospitals, they differ in methodology. Some emphasize surrogate markers; some emphasize safety, i.e., a lack of complications; some factor in the hospital’s reputation; some factor in patient-centered outcomes. However, most do not emphasize traditional outcome measures such as mortality, length of stay and readmission rates. None factors in cost or expenditures on patient care.

We examined three common hospital rankings and clinical outcomes. We reasoned that if the rankings are valid then better hospitals should be consistently on these best hospital lists. In addition, better hospitals should have better outcomes.

Methods

CMS

Outcomes data were obtained from the CMS Hospital Compare website from December 2012-January 2013 (1). The CMS website presents data on three diseases: myocardial infarction (MI), congestive heart failure (CHF) and pneumonia. We examined readmissions, complications and deaths for each of these diseases. We did not examine all process of care measures, since many have not been shown to correlate with improved outcomes, and patient satisfaction has been shown to correlate with higher hospital admission rates, higher overall health care expenditures, and increased mortality (2). In some instances the CMS website does not present actual data but only whether a hospital is higher than, lower than, or no different from the national average; these were scored as 2, 0, and 1, respectively.
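
A small sketch of this coding step, assuming the comparison categories arrive as strings; the category strings below are paraphrased assumptions, not taken verbatim from the CMS site.

```python
# Map CMS's comparison-to-national-average categories to the 2/1/0 scores
# described above.
COMPARISON_SCORES = {
    "higher than the national average": 2,
    "no different than the national average": 1,
    "lower than the national average": 0,
}

def score(comparison):
    return COMPARISON_SCORES[comparison.strip().lower()]

print(score("Higher than the National average"))  # 2
```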

Mortality is the 30-day estimate of deaths from any cause within 30 days of a hospital admission for patients hospitalized with one of the three primary diagnoses (MI, CHF, and pneumonia), regardless of whether the patient died while still in the hospital or after discharge. Similarly, the readmission rates are 30-day estimates of readmission for any cause to any acute care hospital within 30 days of discharge. The mortality and readmission rates were adjusted for patient characteristics, including age, gender, past medical history, and other diseases or conditions (comorbidities) present at hospital arrival that are known to increase the risk of dying or readmission.

The rates of a number of complications are also listed in the CMS data base (Table 1).

Table 1. Complications examined that are listed in CMS data base.

CMS calculates the rate for each serious complication by dividing the actual number of outcomes at each hospital by the number of eligible discharges for that measure at that hospital, multiplied by 1,000. The composite value reported on Hospital Compare is the weighted average of the component indicators. The measures of serious complications are risk adjusted to account for differences in hospital patients’ characteristics. In addition, the rates reported on Hospital Compare are “smoothed” to reflect the fact that measures for small hospitals are measured less accurately (i.e., are less reliable) than for larger hospitals.

As with serious complications, CMS calculates the hospital-acquired infection data from the claims hospitals submit to Medicare. The rate for each hospital-acquired infection measure is calculated by dividing the number of infections at each eligible hospital by the number of eligible Medicare discharges, multiplied by 1,000. The hospital-acquired infection rates were not risk adjusted.
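
A worked example of the two calculations described in the preceding paragraphs: a per-measure rate per 1,000 eligible discharges, and a weighted-average composite over component indicators. All counts and weights below are hypothetical.

```python
def rate_per_1000(events, eligible_discharges):
    # Outcomes divided by eligible discharges, multiplied by 1,000.
    return events / eligible_discharges * 1000

def composite(rates, weights):
    # Weighted average of component indicator rates, as for the
    # Hospital Compare serious-complication composite.
    return sum(r * w for r, w in zip(rates, weights)) / sum(weights)

# Hypothetical hospital: 12 serious complications in 8,000 eligible discharges.
print(rate_per_1000(12, 8000))                          # 1.5 per 1,000 discharges
print(round(composite([1.5, 0.8, 2.0], [2, 1, 1]), 2))  # 1.45
```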

JCAHO

The JCAHO list of Top Performers on Key Quality Measures™ was obtained from its 2012 report (3). The Top Performers are based on an aggregation of accountability measure data reported to JCAHO during the previous calendar year.

Leapfrog

Leapfrog’s Hospital Safety Scores were obtained from its website during December 2012-January 2013 (4). The score utilizes 26 national performance measures from the Leapfrog Hospital Survey, the Agency for Healthcare Research and Quality (AHRQ), the Centers for Disease Control and Prevention (CDC), and CMS to produce a single composite score that represents a hospital’s overall performance in keeping patients safe from preventable harm and medical errors. The measure set is divided into two domains: (1) Process/Structural Measures and (2) Outcome Measures. Many of the outcome measures are derived from the complications reported by CMS (Table 1). Each domain represents 50% of the Hospital Safety Score. The numerical safety score is then converted into one of five letter grades: “A” denotes the best hospital safety performance, followed in order by “B”, “C”, “D”, and “F”. For analysis, these letter grades were converted into numerical grades 1-5 corresponding to letter grades A-F.
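
A sketch of the grade conversion used for analysis. The paper does not state which end of the 1-5 scale corresponds to “A”; the literal reading (A maps to 1, F to 5) is assumed here, and the mapping can be flipped if higher-is-better scoring is intended.

```python
# Letter grades A-F mapped to numerical grades 1-5 for analysis.
# Direction of the mapping is an assumption (see lead-in above).
GRADE_TO_SCORE = {"A": 1, "B": 2, "C": 3, "D": 4, "F": 5}

def to_numeric(letter):
    return GRADE_TO_SCORE[letter.strip().upper()]

print([to_numeric(g) for g in ["A", "C", "F"]])  # [1, 3, 5]
```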

US News and World Report

US News and World Report’s (USNews) 2012-13 rankings listed 17 hospitals on its Honor Roll (5). The rankings are based largely on objective measures of hospital performance, such as patient survival rates, and structural resources, such as nurse staffing levels. Each hospital’s reputation, as determined by a survey of physician specialists, was also factored into the ranking methodology. The USNews top 50 cardiology and pulmonology hospitals were also examined.

Statistical Analysis

Categorical variables such as JCAHO and USNews best-hospital status were compared with other data using chi-squared analysis. Spearman rank correlation was used to determine the direction of the correlations (positive or negative). Significance was defined as p<0.05.
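
A minimal sketch of the two tests named above, using scipy and hypothetical 0/1 data: “best hospital” membership cross-tabulated against a binary better-outcome flag, with a rank correlation on the same pairs.

```python
from scipy import stats

# Hypothetical paired observations: best-hospital membership (0/1) and
# whether the hospital's outcome was better than the national average (0/1).
best_hospital = [1, 1, 1, 0, 0, 0, 1, 0]
better_outcome = [1, 0, 1, 0, 1, 0, 1, 0]

# Chi-squared test on the 2x2 contingency table.
table = [[0, 0], [0, 0]]
for b, o in zip(best_hospital, better_outcome):
    table[b][o] += 1
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Spearman rank correlation gives the direction (positive or negative).
rho, p_rho = stats.spearmanr(best_hospital, better_outcome)
print(f"chi2 p = {p_chi:.3f}; Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```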

Results

Comparisons of Hospital Rankings between Organizations

A large database of nearly 3000 hospitals was compiled for each of the hospital ratings (Appendix 1). The “best hospitals” as rated by the JCAHO, Leapfrog and USNews were compared for correlation between the organizations (Table 2).

Table 2. Correlation of “best hospitals” between different organizations

There was significant correlation between JCAHO and Leapfrog and between Leapfrog and USNews, but not between JCAHO and USNews.

JCAHO-Leapfrog Comparison

The Leapfrog grades were significantly better for JCAHO “Best Hospitals” compared to hospitals not listed as “Best Hospitals” (2.26 ± 0.95 vs. 1.85 ± 0.91, p<0.0001). However, there were multiple exceptions. For example, of the 358 JCAHO “Best Hospitals” with a Leapfrog grade, 84 were graded “C”, 11 were graded “D” and one was graded “F”.

JCAHO-USNews Comparison

Of the JCAHO “Top Hospitals” only one was listed on the USNews “Honor Roll”. Of the cardiology and pulmonary “Top 50” hospitals only one and two hospitals, respectively, were listed on the JCAHO “Top Hospitals” list.

Leapfrog-USNews Comparison

The Leapfrog grades of the USNews “Honor Roll” hospitals did not significantly differ from those of hospitals not listed on the “Honor Roll” (2.21 ± 0.02 vs. 1.81 ± 0.31, p>0.05). However, the USNews “Top 50 Cardiology” hospitals had better Leapfrog grades (2.21 ± 0.02 vs. 1.92 ± 0.14, p<0.05). Similarly, the USNews “Top 50 Pulmonary” hospitals had better Leapfrog grades (2.21 ± 0.02 vs. 1.91 ± 0.15, p<0.05).

“Best Hospital” Mortality, Readmission and Serious Complications

The data comparing the hospital rankings with CMS readmission rates, mortality rates and serious complications are shown for JCAHO, Leapfrog, and USNews in Appendix 2, Appendix 3, and Appendix 4, respectively. The results for “best hospitals” compared to hospitals not listed as best hospitals are shown in Table 3.

Table 3. Results of “best hospitals” compared to other hospitals for mortality and readmission rates for myocardial infarction (MI), congestive heart failure (CHF) and pneumonia.

Red:  Relationship is concordant (better rankings associated with better outcomes)

Blue:  Relationship is discordant (better rankings associated with worse outcomes)

Note that of 21 total p values for relationships, 12 are non-significant, 6 are concordant and significant, and 6 are discordant and significant.  All 4 of the significant readmission relationships are discordant. All 5 of the significant mortality relationships are concordant. This underscores the disjunction of mortality and readmission. All 3 of the relationships with serious complications are significant, but one of these is discordant. Of the 3 ranking systems, Leapfrog has the least correlation with CMS outcomes (5/7 non-significant).  USNews has the best correlation with CMS outcomes (6/7 significant).  However, 3 of these 6 are discordant.

The USNews “Top 50” hospitals for cardiology and pulmonology were also compared to hospitals not listed among the “Top 50” in those specialties. Similar to the “Honor Roll” hospitals, a significantly higher proportion of the cardiology “Top 50” had better mortality rates for MI and CHF, and a significantly higher proportion of the pulmonary “Top 50” had better mortality rates for pneumonia. Both the cardiology and pulmonary “Top 50” had better serious complication rates (p<0.05, both comparisons, data not shown).

Discussion

Lists of hospital rankings have become widespread, but whether these rankings identify better hospitals is unclear. We reasoned that if the rankings were meaningful, there should be widespread agreement between the hospital lists. We did find a level of agreement, but there were exceptions. Hospital rankings should also correlate with patient-centered outcomes such as mortality and readmission rates; overall, that correlation was low.

One probable cause of the differences in hospital rankings is the differing methodologies used in determining the rankings. For example, JCAHO uses an aggregation of accountability measures. Leapfrog emphasizes safety, or a lack of complications. USNews uses patient survival rates, structural resources such as nurse staffing levels, and the hospital’s reputation. However, the exact methodological data used to formulate the rankings are often vague, especially for the JCAHO and USNews rankings. Therefore, it should not be surprising that the hospital rankings differ.

Another probable cause for the differing rankings is the use of selected complications in place of patient-centered outcome measures. Complications are most meaningful when they negatively affect ultimate patient outcomes. Some complications, such as objects accidentally left in the body after surgery, air bubbles in the bloodstream or mismatched blood types, are undesirable but very infrequent. It is unlikely that a slight, though statistically significant, increase in these complications would alter more global measures such as mortality or readmission rates. The overall poor correlation of these complications with deaths and readmissions in the CMS database is consistent with this concept.

Some of the surrogate complication rates are clearly evidence-based, but some are clearly not. For example, many of the central line-associated infection and ventilator-associated pneumonia guidelines used are non-evidence-based (6,7). Furthermore, overreaction to correct some of the complications, such as “signs of uncontrolled blood sugar”, may be potentially harmful. This complication could be interpreted as calling for tight control of the blood sugar; unfortunately, when rigorously studied, patients with tight glucose control actually had an increase in mortality (8).

In some instances a complication was associated with improved outcomes. Although the reason for this discordant correlation is unknown, it is possible that the complication may occur as a result of better care. For example, catheterization of a central vein for rapid administration of fluids, drugs, blood products, etc. may result in better outcomes or quality but will increase the central line-associated bloodstream infection rate. In contrast, not inserting a catheter when appropriate might lead to worse outcomes or poorer quality but would improve the infection rate.

Many of the rankings are based, at least in part, on complication data self-reported by the hospitals to CMS. However, the accuracy of these data has been called into question (9,10). Meddings et al. (10) studied urinary tract infections self-reported by hospitals using claims data. According to Meddings (10), the data were “inaccurate” and were “not valid data sets for comparing hospital acquired catheter-associated urinary tract infection rates for the purpose of public reporting or imposing financial incentives or penalties”. The authors proposed that nonpayment by Medicare for “reasonably preventable” hospital-acquired complications resulted in this discrepancy. Inaccurate data may explain the lack of correlation between a complication and outcomes in the CMS database.

The sole source of mortality and readmission data in this study was CMS. These data are limited to Medicare and Medicaid patients but are probably representative of the general population in an acute care hospital. However, the CMS website also includes a dizzying array of measures. We did not analyze every measure, only those listed in Table 1. Whether other measures would correlate with mortality and readmission rates is unclear.

There are several limitations to our data. First and foremost, the CMS data are self-reported by hospitals, and their validity and accuracy have been called into question. Second, data are missing in multiple instances; for example, much of the data from Maryland was not present, and there were multiple instances when the data were “unavailable” or the “number of cases are too small”. Third, in some instances CMS did not report actual data but only higher, lower or no different from the national average; this loss of information may have led to inaccurate analyses. Fourth, much of the data are from surrogate markers, which is important since surrogate markers have not been shown to predict outcomes; this is also puzzling since patient-centered outcomes are available. Fifth, much of the outcomes data are derived from CMS, which largely excludes Veterans Administration, pediatric, mental health and some other specialty facilities.

It is unclear if any of the hospital rankings should be used by patients or healthcare providers when choosing a hospital. At present the rankings appear to rely too heavily on surrogate markers, many of which are weakly evidence-based. Furthermore, categorizing the data as average, below average or above average may lead to inaccurate interpretation, and the accuracy of the underlying data is unclear. Finally, the lack of data on length of stay and some major morbidities is a major weakness. We as physicians need to scrutinize these measurement systems and insist on greater methodological rigor and more relevant criteria. Until these shortcomings are overcome, we cannot recommend the use of hospital rankings by patients or providers.

References

  1. http://www.medicare.gov/hospitalcompare/ (accessed 6/12/13).
  2. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405-11. [CrossRef] [PubMed]
  3. http://www.jointcommission.org/annualreport.aspx (accessed 6/12/13).
  4. http://www.hospitalsafetyscore.org (accessed 6/12/13).
  5. http://health.usnews.com/best-hospitals (accessed 6/12/13).
  6. Padrnos L, Bui T, Pattee JJ, Whitmore EJ, Iqbal M, Lee S, Singarajah CU, Robbins RA. Analysis of overall level of evidence behind the Institute of Healthcare Improvement ventilator-associated pneumonia guidelines. Southwest J Pulm Crit Care. 2011;3:40-8.
  7. Hurley J, Garciaorr R, Luedy H, Jivcu C, Wissa E, Jewell J, Whiting T, Gerkin R, Singarajah CU, Robbins RA. Correlation of compliance with central line associated blood stream infection guidelines and outcomes: a review of the evidence. Southwest J Pulm Crit Care. 2012;4:163-73.
  8. NICE-SUGAR Study Investigators. Intensive versus conventional insulin therapy in critically ill patients. N Engl J Med. 2009;360:1283-97. [CrossRef] [PubMed]
  9. Robbins RA. The emperor has no clothes: the accuracy of hospital performance data. Southwest J Pulm Crit Care. 2012;5:203-5.
  10. Meddings JA, Reichert H, Rogers MA, Saint S, Stephansky J, McMahon LF. Effect of nonpayment for hospital-acquired, catheter-associated urinary tract infection: a statewide analysis. Ann Intern Med. 2012;157:305-12. [CrossRef] [PubMed]

Reference as: Robbins RA, Gerkin RD. A comparison between hospital rankings and outcomes data. Southwest J Pulm Crit Care. 2013;7(3):196-203. doi: http://dx.doi.org/10.13175/swjpcc076-13 PDF