Every year when the popular American magazine U.S. News & World Report releases its "National Universities Rankings" online and in print, it sends shockwaves through the education world. High school students fixate on the order and delete poorly ranked colleges from their Common Application. College students hold their breath as they scroll down the web page, silently praying their school has not fallen in the rankings. Parents clench their fists as they open the magazine, wondering how the institutions draining their pocketbooks every semester stack up against the rest. Administrators are equally anxious to see their university’s name on the first page, knowing a small slide can cost millions of dollars in endowment funding and diminish the caliber of student the school attracts.
The U.S. News rankings are met with such anxiety because of their influence on students’ college decisions. Unfortunately, as this paper will argue, this influence is undeserved. The rankings employ a flawed methodology that does not accurately portray the quality of each institution. Adding insult to injury, as long as rankings affect student behavior, colleges will be inclined to react to them at the expense of the overall educational experience they provide. The rankings are further disconnected from reality because firms hiring college graduates do not seem to care much about the rank of the applicant’s alma mater. U.S. News’ “National Universities Rankings” are detrimental to the higher education landscape because they can mislead students and prompt misguided institutional actions. Fortunately, there are actions U.S. News can take to improve the rankings as a tool in the arduous college admissions process.
Interest in college rankings developed in the first place as a reaction to the skyrocketing cost of a college education. Since 1983, according to Professors Samuel M. Natale of the University of Oxford and Caroline Doran of St. Mary’s College of California, tuition as a percentage of colleges’ revenue has increased from 24% to 36.3%, representing an increased financial burden for students (Natale and Doran 191). As a result, “Because students and their families are paying [for] a larger proportion of students’ education, they are demanding information on retention rate, graduation rates, and job prospects—information that will indicate their return on investment” (192). All this information is encompassed in a college’s ranking, which could explain why forty percent of students considered college rankings in their matriculation decisions twenty years ago, and why that figure has since grown by over fifty percent (Bowman and Bastedo, “Getting on the Front Page” 415-416). Meanwhile, Economics Professors Amanda Griffith of Cornell University and Kevin N. Rask of Colgate University point out that the rankings sensation reflects a shift in mindset from the notion that simply attending college matters to the idea that where one attends college matters. They report, “While there is a broad literature evaluating students’ decisions on whether or not to attend college, there is a small but growing literature about the decision of where to attend college” (245).
Higher education scholars Nicholas A. Bowman and Michael N. Bastedo identify multiple ways students use rankings when deciding where to attend college. First, students may treat the rankings as authoritative testimony about the quality of various schools. Second, because a college’s ranking affects how it is portrayed in the media, students and parents might act on a vague conception of a college’s prestige that actually stems from rankings. Bowman and Bastedo write that “students and parents are likely to internalize the hierarchy presented in the rankings, perhaps even without their conscious awareness” (“Getting on the Front Page” 417). Furthermore, students care about rankings because they assume top-ranked institutions “help students obtain the best jobs, gain acceptance to top graduate schools, and join the professional class of society” (Bowman and Bastedo, “Getting on the Front Page” 418).
Since students use rankings to help them determine which schools to apply to and ultimately which one to attend, they are responsive to changes in ranking. Michael Luca and Jonathan Smith, from Harvard Business School and the College Board respectively, find that moving up one place in the rankings can increase the size of a college’s applicant pool by 0.96–2.07% (59). In a separate study, Bowman and Bastedo discover that “a one-unit increase in U.S. News ranking corresponds to a 0.4% decrease in acceptance rate, a 0.2% increase in yield, and a 2.8-point increase in average SAT score” (“Getting on the Front Page” 416). Moreover, Griffith and Rask studied the matriculation decisions of students accepted to Colgate University between 1995 and 2004, and learned that the college’s ranking was an important factor in the decisions. They found “full-pay applicants [students receiving no financial aid] are more likely to attend a school that is higher ranked by even a few places. Aided applicants [students receiving merit or need-based financial aid] are less responsive, but still systematically prefer higher-ranked schools” (254).
One reason full-pay and aided students alike rely so heavily on rankings, rather than on other perceived proxies of quality such as average SAT scores, is their salience. Luca and Smith define salience as “the simplicity of determining a given college’s ranking” (59). Essentially, the easier the information is to understand, and the less calculation or analysis needed to derive meaning from it, the more salient it is. Luca and Smith studied how salience affects the influence of college rankings by breaking down the components behind a college’s U.S. News ranking. They found that when rankings are listed numerically, rather than alphabetically, it is more intuitive for students to identify the top-ranked college, and students will rely on the rankings more (60). Luca and Smith conclude that “the impact of rankings depends not just on their informational content but also on their salience,” meaning that rankings’ salience can overshadow whether or not they actually provide valuable information (60).
According to many scholars, such as Luca and Smith, the U.S. News rankings do not in fact impart valuable information. Rather, they mislead readers. The crux of the rankings’ problems lies in their methodology. Although the exact formula changes every year, the numbers behind the 2015 edition are a representative example (Luca and Smith 59). In 2015, 22.5% of a college’s ranking was determined by its undergraduate academic reputation, another 22.5% by its freshman retention rate, 20% by the quality of its faculty, 12.5% by admissions selectivity, 10% by financial resources, 7.5% by the change in its graduation rate, and 5% by its alumni donation rate (“How U.S. News Calculated the 2015 Best Colleges Rankings”).
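Because each category carries a fixed percentage weight, a college’s overall score under this methodology amounts to a weighted sum of its component scores. The sketch below illustrates that arithmetic only; the weights come from the 2015 figures above, while the category names and 0–100 subscores are hypothetical stand-ins, not actual U.S. News data.

```python
# Illustration of a weighted composite score in the style of the 2015
# U.S. News formula. Weights are the percentages reported above;
# the subscores fed in below are invented for demonstration.

WEIGHTS_2015 = {
    "undergraduate_academic_reputation": 0.225,
    "freshman_retention_rate": 0.225,
    "quality_of_faculty": 0.20,
    "admissions_selectivity": 0.125,
    "financial_resources": 0.10,
    "change_in_graduation_rate": 0.075,
    "alumni_donation_rate": 0.05,
}

def composite_score(subscores):
    """Return the weighted sum of a college's component scores (0-100 scale)."""
    return sum(WEIGHTS_2015[category] * subscores[category]
               for category in WEIGHTS_2015)

# A hypothetical college scoring 80 out of 100 in every category:
uniform = {category: 80.0 for category in WEIGHTS_2015}
print(round(composite_score(uniform), 1))  # 80.0, since the weights sum to 1
```

Because the weights sum to one, a uniform set of subscores returns that same value, which makes it easy to see how heavily a change in a single high-weight category (such as peer-assessed reputation) can move the composite relative to a low-weight one like alumni giving.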
The ever-changing methodology is the first way the U.S. News rankings deceive students. As a result of annual adjustments, a change in a college’s ranking may be due less to a change in its quality than to a revised calculation. This also complicates comparing a college’s ranking over time. For instance, “In the 1990s, USNWR changed its ranking methodology six times… A striking example is the California Institute of Technology, whose rank rose from ninth in 1998 to first in 1999 entirely due to a change in methodology” (Luca and Smith 59). Due to the continually evolving methodology, the very concept of a college moving up in the rankings is misleading.
The U.S. News rankings also mislead students by relying on peer assessments of reputation. The magazine justifies this component by stating:
The U.S. News ranking formula gives significant weight to the opinions of those in a position to judge a school's undergraduate academic excellence. The academic peer assessment survey allows top academics – presidents, provosts and deans of admissions – to account for intangibles at peer institutions, such as faculty dedication to teaching. (“How U.S. News Calculated the 2015 Best Colleges Rankings”)
However, Frank J. Ascione, former dean of the University of Michigan College of Pharmacy, entirely disagrees with that assessment. He was one of the administrators with a voice in the peer assessment category, and he remembers, “Despite being in academia for 35 years, I knew comparatively little about many of the colleges and schools on the list” (Ascione 1). He also wondered whether his fellow administrators would be able to assess his college, asking, “They may know about the positive impact our faculty members and alumni have had on the profession of pharmacy… But would they know, for example, that we have instituted a new PharmD curriculum in the past 2 years that extensively uses active-learning techniques?” (Ascione 1). He argues that the U.S. News rankings are deceptive to students because the peer assessments—a substantial portion of a college’s ranking—are based on little more than guesswork.
Another way the U.S. News rankings are deceptive to students is that they seem to suggest the top-ranked college is the best choice for everyone. Leon Cremonini, Don Westerheijden, and Jürgen Enders, from the Center for Higher Education Policy Studies at the University of Twente in the Netherlands, believe higher education has become a “marketable commodity” as administrators try to attract the best and brightest students from all over the world (374). However, “unlike marketing or management studies, college choice literature pays little attention to the possible consequences of ‘culture’ or students’ information processing during their choice processes” (Cremonini, Westerheijden, and Enders 374). The authors contend that students are more likely to attend a college where they will find other members of their ethnic or religious community (374). Since every student comes from a unique background, and rankings ignore the role heritage and customs play in a student’s college decision, the U.S. News rankings misrepresent institutions of higher education as one-size-fits-all.
Cremonini, Westerheijden, and Enders summarize how the rankings ignore these significant yet complex cultural determinants by stating “what is measurable seems to be incorporated into the rankings, rather than what is valid” (379). Another example they provide to support this claim is that rankings are too input-driven. They argue that while it may be easy to quantify inputs, such as how intelligent or accomplished students are when they enter college, and hard to quantify outputs, such as how well-prepared graduates are for what lies ahead, that does not mean rankings should be driven by inputs. Instead, college rankings should reflect the success of alumni, since that is what most students care about in the end. They write that the current ranking methodology is far from perfect and “Even though, in an increasingly globalized world, HEIs [higher education institutions] are likely to compete to provide students with tangible results in the form of learning outcomes, studies on how to approach this problem are still in their infancy” (379).
Regardless of the rankings’ efficacy, as long as students predicate application and matriculation decisions on the U.S. News rankings, and colleges compete for top talent, Bastedo and Bowman believe colleges will take action to try to improve their ranking (“Modeling Institutional Effects” 164). While they argue this reaction is only natural because “processes of certification and evaluation are some of the most powerful institutional forces in organizational fields,” these actions are often at the expense of the students they purport to educate (“Modeling Institutional Effects” 164). Marc Meredith, of the Stanford University Graduate School of Business, suggests that since colleges are ranked in part on the SAT scores of their applicants, they are incentivized to accept applicants with higher SAT scores over more well-rounded candidates. He states, “One example of a questionable strategic admission decision is to only base acceptance decisions on qualities that are components in the rankings, like standardized tests, rather than focusing on the overall quality of the student” (445). This is problematic because “a quality like leadership—which Guinier and Strum (2001) [define] as an example of quality that is difficult to quantify, yet has been shown to be correlated with success in school— may be less likely to be considered in admissions decisions” (Meredith 445). Higher education might be undermining the next generation of leaders by passing over those with the most leadership potential for students with higher standardized test scores.
Another harmful behavior colleges undertake in response to the U.S. News rankings is instituting a policy whereby submitting SAT scores is optional. Michigan State University Economics Professors Michael Conlin and Stacy Dickert-Conlin and Syracuse University Graduate School Associate Dean Gabrielle Chapman state “colleges reward applicants in the admission process who submit their SAT 1 scores when their SAT 1 scores will raise the college’s average reported score and reward applicants who do not submit when their SAT 1 scores would lower the college’s average” (62). Obviously, the reward is admission. This policy is harmful because it turns the application process into a game. Students with impressive grades and extracurricular involvement must take their best guess as to whether it is worth it to submit their SAT scores, since colleges want to accept students with high SAT scores to boost their rankings even if it means passing over an overall better-qualified applicant. Additionally, “applicants from private high schools who are non-minorities are more likely to take advantage of the policy, all else equal” due to superior college preparation counseling (Conlin, Dickert-Conlin, and Chapman 62). Although there may be no inherent discrimination in this policy in theory, in effect it is problematic because it has disproportionately benefited students of a certain socioeconomic status and ethnicity.
In addition to issues of equality, ethics also comes into play when examining how colleges address rankings. Natale and Doran write “The ethical concern with rankings should be the emphasis it puts on seeking highly qualified students, deflecting attention from the tradition of wanting to make access to higher education equitable” (192). Cheating is another ethical problem surrounding how colleges try to improve their ranking. According to Meredith, “high stakes rankings create more incentive for schools to publish inaccurate or misleading data” (445). Numerous scandals have proven that Meredith’s explanation rings true. In 2012, George Washington University admitted to overstating the percentage of its incoming class ranked in the top ten percent of their high school class (George Washington University). The same year, Emory University confessed to overstating the SAT/ACT scores and high school class rankings of the students in its incoming class for the past decade (Emory University). By misreporting data, colleges are not only misleading students but also damaging their own reputations when the scandals are eventually exposed.
Besides the ethical concerns, what may be most troubling about students’ and colleges’ fascination with the U.S. News rankings is that the rankings are not actually as consequential as they may seem. According to Northwestern University Management Professor Lauren A. Rivera, “top-tier law firms, investment banks, and management consulting firms” do not rely on the U.S. News rankings to shape their opinions of a college’s prestige (72). Instead, their concept of prestige is framed by an institution’s time-honored reputation (Breault and Callejo Perez 15). Rivera states that the only institutions Wall Street firms deem truly prestigious are Harvard, Yale, Princeton, and Stanford (71). She then explains that “Contrary to common sociological measures of institutional prestige, employers privileged candidates who possessed a super-elite (e.g., top four) rather than selective university affiliation” (71). The firms’ mindset is in direct opposition to students’ belief that attending a school ranked one place higher in any given year will increase their chances of future success (Bowman and Bastedo, “Getting on the Front Page” 418). Furthermore, these firms “restricted competition to students with elite affiliations and attributed superior abilities to candidates who had been admitted to super-elite institutions, regardless of their actual performance once there” (Rivera 71). This behavior is especially troubling to students because it ignores the possibility a student accepted into a super-elite school may choose to go somewhere else for financial or cultural reasons (Cremonini, Westerheijden, and Enders 374; Griffith and Rask 254). It is clear that students and prominent firms are not on the same page when it comes to interpreting the U.S. News rankings.
While Rivera found that where one attends college plays a decisive role in one’s prospects for a job on Wall Street, Professor Michael N. Bastedo and researchers from the Center for the Study of Higher and Postsecondary Education at the University of Michigan find that it is a minuscule factor in getting a job on Main Street (Kim et al. 762). This is a significant revelation because it once again undermines the notion held by many students that attending a college ranked highly by U.S. News will automatically lead to a better career (“Getting on the Front Page” 418). The researchers state “college selectivity no longer has a significant positive influence on job satisfaction or prestige, and any effect of college selectivity on future job satisfaction seems to operate through its effect on increased earnings” (Kim et al. 762). In other words, students at respected institutions are not necessarily destined for better jobs than their counterparts solely due to their institution of choice. The authors continue:
For older cohorts, graduating from selective colleges was a much more important signal (e.g., Alwin, 1974), but other dimensions, such as individual academic achievement and prior work experience (e.g., Ott, 2011), as well as extracurricular activities (Kim & Bastedo, 2013), may have become more powerful signals of what employers find desirable (Rivera, 2012). (Kim et al. 783)
Therefore, while students are obsessing more and more over the U.S. News rankings in hopes that attending the most selective college possible will ignite their career aspirations, the selectivity of their alma mater matters less and less. There is a vast disconnect between how students (and, by extension, colleges) and firms use and value rankings.
Fortunately, there are ways to improve the U.S. News rankings as a tool in the complex college admissions process, which could also bridge the gaps between how they are interpreted by students, colleges, and firms. University of Nebraska College of Law Professor Nancy B. Rapoport argues, “Rating schools on some relevant factors, rather than ranking them from top to bottom, would serve applicants better… To the extent that law students value particular factors more than others, they can construct their own rating systems” (1100). Although ratings are less salient than rankings, ratings would allow students and firms to evaluate institutions based on their own preferred criteria, such as cultural determinants and quality of academic programs (Luca and Smith 59; Cremonini, Westerheijden, and Enders 374; Rivera 71). Rapoport contends that “The best solution is to let consumers make their decisions based on their own weighting of factors that mean the most to them” rather than criteria chosen and quantified by U.S. News (1101).
Cremonini, Westerheijden, and Enders pose another method of improving the rankings. They argue that the U.S. News methodology is too focused on inputs—how accomplished students are when they enter college—and fails to represent how well colleges prepare their students for their futures. The scholars’ suggestion is to adjust the methodology behind the rankings so it reflects the outputs of a college education, namely how well alumni perform in graduate school and the workplace. They write “input information has only limited value. Process and especially output indicators would be more appropriate to assess the quality of HEIs [higher education institutions]” (379). Considering the influence of the U.S. News rankings, these changes could transform how students make their college decisions.
Although there is hope that the U.S. News rankings can be modified to better reflect quality in higher education, many steps remain before that goal is reached. Currently, the U.S. News rankings mislead students in evaluating universities and prompt colleges to react to students’ behavior in irresponsible, sometimes unethical, ways. Ironically, as demonstrated by the hiring practices of firms from Main Street to Wall Street, the rank of an applicant’s alma mater matters very little in the job market. Attending a university with a time-honored reputation or building a remarkable resume is what will impress the firms hiring college graduates. The U.S. News rankings have created an artificial bubble of college prestige, and until the methodology and influence of the rankings can be justified, they will continue to be detrimental to the higher education landscape.