USNWR College Rankings Reexamined


Thomas J. Webster
Rosette J. Mare

Abstract

This paper extends Webster's [2001] analysis of the accuracy of the weighting scheme used by U.S. News & World Report (USNWR) to rank colleges and universities according to "widely accepted indicators of national excellence," a scheme he found to be plagued by severe and pervasive multicollinearity. As in the Webster study, we employ principal component analysis to assess the relative contributions of the thirteen criteria used by USNWR in 2004 to rank "top schools" in the national university category. Although USNWR continues to assign the greatest weight to peer assessment, this study confirms Webster's finding that the average SAT/ACT score of enrolled students is the most significant ranking criterion. This paper also extends Webster's study by examining the reliability of the USNWR rankings, which have come under repeated criticism for their lack of consistency. When compared with simulations generated from an estimated principal component regression model, the 2004 USNWR rankings are found to be increasingly unreliable for lower-ranked institutions. The source of this inconsistency appears to be peer assessment, the only subjective criterion in the USNWR ranking methodology. This suggests that the rankings could be improved by reducing (or eliminating entirely) the weight assigned to peer assessment in the USNWR ranking methodology.
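
The abstract refers to principal component analysis and principal component regression applied to collinear ranking criteria. The sketch below is not the authors' code or data; it is a minimal illustration, using synthetic data, generic criterion names, and an arbitrary component-retention rule, of how such an approach can gauge each criterion's implied contribution to an overall score.

```python
# Minimal sketch (not the authors' model or data): PCA plus principal
# component regression on synthetic, deliberately collinear "criteria".
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical stand-ins for thirteen USNWR-style criteria, driven by a
# common latent factor to mimic the multicollinearity discussed above.
n_schools, n_criteria = 120, 13
latent_quality = rng.normal(size=(n_schools, 1))
X = latent_quality + 0.5 * rng.normal(size=(n_schools, n_criteria))
overall_score = latent_quality.ravel() + 0.3 * rng.normal(size=n_schools)

# Standardize the criteria and extract principal components.
X_std = StandardScaler().fit_transform(X)
pca = PCA()
component_scores = pca.fit_transform(X_std)
print("Variance explained:", np.round(pca.explained_variance_ratio_, 3))

# Principal component regression: regress the overall score on the leading
# components (here, enough components to explain ~90% of the variance).
k = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.90)) + 1
pcr = LinearRegression().fit(component_scores[:, :k], overall_score)

# Map the PCR coefficients back to the original criteria to read off each
# criterion's implied weight in the fitted score.
implied_weights = pca.components_[:k].T @ pcr.coef_
for i, w in enumerate(implied_weights, start=1):
    print(f"criterion_{i:02d}: implied weight {w:+.3f}")
```

Because the components are orthogonal, regressing on them sidesteps the multicollinearity among the raw criteria; the back-transformed weights then offer one way to compare the relative importance of criteria such as peer assessment versus test scores.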
