Wall Street Journal/Times Higher Education College Rankings 2021 methodology

Ranking of US universities and colleges puts student success and learning at its heart

September 14, 2020

View the full results of the Wall Street Journal/Times Higher Education College Rankings 2021

The Wall Street Journal/Times Higher Education College Ranking is a pioneering ranking of US colleges and universities that puts student success and learning at its heart.

The ranking includes clear performance indicators designed to answer the questions that matter most to students and their families when making one of the most important decisions of their lives – who to trust with their education. Does the college have sufficient resources to teach me properly? Will I be engaged, and challenged, by my teacher and classmates? Does the college have a good academic reputation? What type of campus community is there? How likely am I to graduate, pay off my debt and get a good job?

The ranking includes the results of the THE US Student Survey, which examines a range of key issues including students’ engagement with their studies, their interaction with their teachers and their satisfaction with their experience.

The ranking adopts a balanced scorecard approach, with 15 individual performance indicators combining to create an overall score that reflects the broad strength of the institution.

For all questions about this ranking, please email:

Data sources

Data come from a variety of sources: the US government (Integrated Postsecondary Education Data System – IPEDS), the College Scorecard, the Bureau of Economic Analysis (BEA), the THE US Student Survey, the THE Academic Survey, and the Elsevier bibliometric dataset.

Our data are, in most cases, normalised so that the value we assign for each metric can be compared sensibly with the values for other metrics.
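The exact transformation is not spelled out here, but a common way to make metrics with different units comparable is z-score normalisation. A minimal sketch, assuming a simple z-score (the ranking may use a different transformation):

```python
import statistics

def normalise(values):
    """Z-score normalisation: rescale a metric so that institutions can be
    compared across metrics with different units. Illustrative only; the
    ranking's exact transformation is not specified in this document."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

# Hypothetical example: teaching spend per student ($000s) at five colleges
scores = normalise([12.0, 18.5, 25.0, 31.0, 44.5])
```

After this step every metric has mean 0 and standard deviation 1, so an institution's position on "finance per student" can be compared with its position on, say, "graduation rate".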


The overall methodology explores four key areas:

Resources (30%)

Does the college have the capacity to effectively deliver teaching? The Resources area represents 30 per cent of the overall ranking. Within this we look at:

  • Finance per student (11%)
  • Faculty per student (11%)
  • Research papers per faculty (8%)

Engagement (20%)

Does the college effectively engage with its students? Most of the data in this area are gathered through the THE US Student Survey. The Engagement area represents 20 per cent of the overall ranking. Within this we look at:

  • Student engagement (7%)
  • Student recommendation (6%)
  • Interaction with teachers and students (4%)
  • Number of accredited programmes (3%)

Outcomes (40%)

Does the college generate good and appropriate outputs? Does it add value to the students who attend? The Outcomes area represents 40 per cent of the overall ranking. Within this we look at:

  • Graduation rate (11%)
  • Value added to graduate salary (12%)
  • Debt after graduation (7%)
  • Academic reputation (10%)

Environment (10%)

Is the college providing a good learning environment for all students? Does it make efforts to attract a diverse student body and faculty? The Environment area represents 10 per cent of the overall ranking. Within this we look at:

  • Proportion of international students (2%)
  • Student diversity (3%) 
  • Student inclusion (2%)
  • Staff diversity (3%)
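Taken together, the 15 weights above sum to 100 per cent, so the overall score is effectively a weighted average of the metric scores. A minimal Python sketch, assuming each metric has already been normalised to a 0-100 scale (the metric names in the dictionary are shorthand labels, not official identifiers, and the published methodology does not detail the combination step):

```python
# Metric weights as stated in the methodology (percentages sum to 100)
WEIGHTS = {
    "finance_per_student": 11, "faculty_per_student": 11, "papers_per_faculty": 8,
    "student_engagement": 7, "student_recommendation": 6, "interaction": 4,
    "accredited_programmes": 3,
    "graduation_rate": 11, "value_added_salary": 12, "debt_after_graduation": 7,
    "academic_reputation": 10,
    "international_students": 2, "student_diversity": 3, "student_inclusion": 2,
    "staff_diversity": 3,
}

def overall_score(metric_scores):
    """Weighted average of the 15 metric scores (each assumed 0-100)."""
    assert sum(WEIGHTS.values()) == 100
    return sum(metric_scores[m] * w for m, w in WEIGHTS.items()) / 100
```

A college scoring 50 on every metric would receive an overall score of 50, regardless of the weighting.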

Key changes since last year

Student survey

We had planned to conduct our student survey during Spring 2020, but we had to cancel it because of the coronavirus pandemic and subsequent difficulties for institutions in the US. Not only could we not expect institutions to invest time and effort surveying their students at this time, but the data collected would probably have been coloured by students’ experience of a sudden move to online-only teaching and therefore would not generate a reliable indicator of general teaching success. We hope to be able to return to a normal survey collection for our 2022 ranking, but for this ranking the data for the three student engagement metrics have not been updated: we are using the scores obtained by institutions last year.

Student inclusion

Our student inclusion metric previously used data from the College Scorecard (CSC) on first generation student enrolment and from IPEDS on Pell Grant enrolment. The CSC data are no longer being published, so for this year we are using the Pell Grant data from IPEDS only. We will review this for the coming ranking to ensure we are still able to measure economic diversity in a meaningful way.

Value-added salary

Following the March 2019 executive order on accountability at colleges and universities, College Scorecard is now focusing on collecting outcomes data at the field-of-study level and as a result is no longer publishing overall salary values 10 years after matriculation. These longitudinal data were what we used in our value-added model, and we are not able to replace them with the newly published values. This year we are reusing last year’s scores from the value-added metric. We will be reviewing our methodology for next year to find a suitable replacement for this measure for future rankings.

Metrics used

Resources (30%)

Students and their families need to know that their college has the right resources to provide the facilities, teaching and support that are needed to succeed at college.

By looking at the amount of money that each institution spends on teaching per student (11%), we can get a clear sense of whether it is well funded, with the money to provide a positive learning environment. This metric takes into account spending on both undergraduate and graduate programmes, which is consistent with the way that the relevant spend data are available in IPEDS. The Department of Education requires schools to report key statistics such as this to IPEDS, making it a comprehensive source for education data. The data on academic spending per institution are adjusted for regional price differences, using regional price parities data from the US Department of Commerce’s Bureau of Economic Analysis.
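The regional price adjustment can be sketched as deflating nominal teaching spend by the BEA's regional price parity, which is indexed so that the national average equals 100. This is an illustrative assumption; the ranking's exact adjustment formula is not published here:

```python
def adjust_for_regional_prices(spend_per_student, regional_price_parity):
    """Deflate nominal teaching spend by the region's price level.
    Regional price parities (BEA) are indexed to a national average of 100,
    so a college in a region where prices run 10% above average (RPP = 110)
    has its nominal spend scaled down accordingly. Illustrative sketch."""
    return spend_per_student * 100 / regional_price_parity
```

This keeps a college in an expensive metro area from looking better resourced simply because everything it buys costs more.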

By looking at the ratio of students to faculty members (11%), we get an overall sense of whether the college has enough teachers to teach. It gives a broad sense of how likely it is that a student will receive the individual attention that can be necessary to succeed at college, and also gives a sense as to potential class sizes. The source of this statistic is IPEDS. We are using the average of two years of data for this metric to provide a better long-term view.

Faculty who are experts in their academic fields and pushing the boundaries of knowledge at the forefront of their discipline can significantly enhance a student’s educational experience when they are able to distil their knowledge and demonstrate the power of real-world problem-solving and enquiry. So our teaching resources pillar also gives a sense of whether faculty are experts in their academic disciplines by looking at research excellence. We look at the number of published scholarly research papers per faculty member (8%) at each institution, giving a sense of their research productivity, and testing to see whether staff are able to produce research that is suitable for publication in the world’s top academic journals, as indexed by Elsevier.

Engagement (20%)

Decades of research have found that the best way to truly understand teaching quality at an institution – how well it manages to inform, inspire and challenge students – is through capturing what is known as “student engagement”. This was described by Malcolm Gladwell in The New Yorker in 2011 as “the extent to which students immerse themselves in the intellectual and social life of their college – and a major component of engagement is the quality of a student’s contacts with faculty”.

THE has captured student engagement across the US through its US Student Survey, carried out in partnership with two leading market research providers. In 2018 and 2019, we gathered the views of more than 170,000 current college and university students on a range of issues relating directly to their experience at college (see key changes detailed above).

Students answer 12 core questions about their experience that are either multiple choice or on a scale from 0 to 10, and also provide background information about themselves. The survey was conducted online and respondents were recruited by research firm Streetbees using social media, facilitated, in part, by student representatives at individual schools. We also worked with participating institutions that distributed the survey to random samples of their own students. Respondents were verified as students of their reported college using their email addresses. We used an aggregated group of respondents from both years (2018 and 2019 surveys). At least 50 validated responses in the 2019 survey were required for a university to be included.

To capture engagement with learning (7%), we look at the answers to four key questions:

  • to what extent does the student’s college or university support critical thinking? For example, developing new concepts or evaluating different points of view;
  • to what extent does the teaching support reflection on, or making connections among, the things that the student has learned? For example, combining ideas from different lessons to complete a task;
  • to what extent does the teaching support applying the student’s learning to the real world? For example, taking study excursions to see concepts in action;
  • to what extent do the classes taken in college challenge the student? For example, presenting new ways of thinking to challenge assumptions or values

To capture a student’s opportunity to interact with others (4%) to support learning, we use the responses to two questions: to what extent does the student have the opportunity to interact with faculty and teachers? For example, talking about personal progress in feedback sessions; and to what extent does the college provide opportunities for collaborative learning? For example, group assignments.

The final measure in this area from the survey is around student recommendation (6%): if a friend or family member were considering going to university, based on your experience, how likely or unlikely are you to recommend your college or university to them?

In this pillar of indicators we also seek to help a student understand the opportunities that are on offer at the institution, and the likelihood of getting a more rounded education, by providing an indicator of the number of different subjects taught (3%). While other components of the Engagement pillar are drawn from the student survey, the source of this metric is IPEDS. We are using the average of two years of data for this metric in order to provide a better long-term view.

Outcomes (40%)

At a time when US college debt stands at $1.6 trillion, and when the affordability of going to college and value for money are prime concerns, this section looks at perhaps the single most important aspect of any higher education institution – its record on delivering successful outcomes for its students.

We look at the graduation rates for each institution (11%) – a crucial way to help students to understand whether colleges have a strong track record in supporting students enough to get them through their course and ensure that they complete their degrees. We use reported graduation rates for all students including part-time and transfer students.

This pillar also includes a value-added indicator, measuring the value added by the teaching at a college to salary (12%) (see key changes detailed above). Using a value-added approach means that the ranking does not simply reward the colleges that cream off all the very best students and shepherd them into the jobs that provide the highest salaries in absolute terms. Instead it looks at the success of the college in transforming people’s life chances, in “adding value” to their likelihood of success. The THE data team uses statistical modelling to create an expected graduate salary for each college based on a wide range of factors, such as the demographic make-up of its student body and the characteristics of the institution. The ranking looks at how far the college either exceeds expectations in getting students higher average salaries than one would predict based on its students and its characteristics, or falls below what is expected. The value-added analysis uses research on this topic by the Brookings Institution, among others, as a guide.
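The value-added idea can be illustrated with a toy one-variable least-squares model: fit an expected salary from an institutional characteristic, then score each college by its residual. The real model uses many more predictors, so this is a hedged sketch rather than the actual THE model:

```python
def value_added(observed_salaries, predictors):
    """Residual from a one-variable least-squares fit: how far each college's
    median graduate salary sits above or below what its characteristics
    predict. The real THE model uses many predictors (student demographics,
    institutional characteristics); this is a toy illustration."""
    n = len(observed_salaries)
    mx = sum(predictors) / n
    my = sum(observed_salaries) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(predictors, observed_salaries))
    beta /= sum((x - mx) ** 2 for x in predictors)
    alpha = my - beta * mx
    # Positive residual = college exceeds its expected graduate salary
    return [y - (alpha + beta * x) for x, y in zip(predictors, observed_salaries)]
```

A college whose graduates earn exactly what the model predicts scores zero; the metric rewards beating the prediction, not a high salary in absolute terms.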

We also include a metric on debt after graduation (7%). The concern over student debt and the cost of higher education in general has come to the forefront of public discussion recently. A measure of the debt accrued by a college’s students when they graduate reflects this concern and holds institutions accountable for the cost that they represent to individuals and funding sources. We are using the cumulative median debt reported in College Scorecard, which represents the “median loan debt accumulated at the institution by all student borrowers of federal loans”.

This pillar also looks at the overall academic reputation of the college (10%), based on THE’s annual Academic Reputation Survey, a survey of leading scholars that helps us determine which institutions have the best reputation for excellence in teaching. We used the total teaching votes from our 2019 and 2020 reputation surveys.

Environment (10%)

This category looks at the make-up of the student body at each campus, helping students understand whether they will find themselves in a diverse, supportive and inclusive environment while they are at college. We look at the proportion of international students on campus (2%), a key indicator that the university or college is able to attract talent from across the world and offers a multicultural campus where students from different backgrounds can, theoretically, learn from one another.

We also look more generally at student diversity – both racial and ethnic diversity (3%), and the inclusion of students with lower family earnings (2%). For the former, we use IPEDS data on diversity. For the latter, we look at the proportion of students who receive Pell Grants (paid to students in need of financial support), as reported in IPEDS.

We also use a measure of the racial and ethnic diversity of the faculty (3%), drawing on IPEDS data.

Technical overview of metrics


  • Finance per student – spending on teaching associated activity per full-time equivalent student (IPEDS). This is adjusted using regional price comparisons (BEA)
  • Faculty-to-student ratio – the number of faculty per student as provided by IPEDS
  • Papers per faculty – the number of academic papers published by faculty from a college in the period 2015-2019 (Elsevier) divided by the size of the faculty (IPEDS)


The data from the student survey have been rebalanced by gender to reflect the actual gender ratio at the college.

  • Student engagement – the average score of the four questions (critical thinking, connections, applying learning to the real world, challenge) in the THE US Student Survey
  • Interaction – the average score of two questions (interaction with faculty and collaborative learning) in the THE US Student Survey
  • Student recommendation (THE US Student Survey)
  • Subject breadth – the number of courses offered (IPEDS)
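The gender rebalancing of the survey data can be sketched as post-stratification weighting, in which each gender group's average answer is weighted by its share of actual enrolment rather than its share of respondents. This is an illustrative assumption; THE does not publish its exact reweighting formula:

```python
def rebalanced_mean(responses, actual_share):
    """Reweight survey answers so each gender group counts in proportion to
    its share of the college's enrolment, not its share of respondents
    (post-stratification; illustrative sketch).

    responses    -- dict mapping group -> list of scores (0-10 scale)
    actual_share -- dict mapping group -> enrolment proportion (sums to 1)
    """
    return sum(
        actual_share[g] * (sum(scores) / len(scores))
        for g, scores in responses.items()
    )

# Hypothetical college: women are 60% of students but 80% of respondents
responses = {"women": [8, 9, 7, 8], "men": [6]}
score = rebalanced_mean(responses, {"women": 0.6, "men": 0.4})
```

Without the reweighting, the over-represented group's answers would dominate the college's engagement scores.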


  • Graduation rate – the proportion of bachelor’s or equivalent graduates six/eight years after entry (IPEDS; six years for full-time students and eight years for part-time students). This covers both first-time and transfer students
  • Value-added salary – the average calculated residual of the value-added models for salary 10 years after entry. This is calculated using a range of independent variables for the College Scorecard data representing the years 2013, 2014 and 2015. It also draws on data from IPEDS and the BEA
  • Debt after graduation – the median loan debt accumulated by students at the institution, after they graduate. This is the GRAD_DEBT_MDN variable released by College Scorecard.
  • Reputation – the total votes received for teaching excellence in the THE Academic Reputation survey, which is conducted in partnership with Elsevier. We use only votes provided by academics associated with US institutions.


  • International students – the proportion of students identified as non-resident aliens (IPEDS)
  • Student diversity – a Gini-Simpson calculation of the likelihood of two undergraduates being from different racial/ethnic groups (IPEDS)
  • Faculty diversity – a Gini-Simpson calculation of the likelihood of two faculty members being from different racial/ethnic groups (IPEDS)
  • Student inclusion – the post-normalisation average of the proportion of Pell Grant recipients (IPEDS) 
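The Gini-Simpson calculation referred to above is the probability that two randomly chosen individuals belong to different racial/ethnic groups. A minimal sketch:

```python
def gini_simpson(counts):
    """Gini-Simpson index: the probability that two randomly chosen
    individuals belong to different racial/ethnic groups.

    counts -- list of headcounts per group (e.g. IPEDS race/ethnicity
              categories; the grouping used here is illustrative).
    """
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

# A campus where everyone is in one group scores 0; more groups,
# more evenly represented, pushes the score towards 1.
diversity = gini_simpson([500, 300, 150, 50])
```

The same formula is applied to faculty headcounts for the staff diversity metric.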

Why isn’t my college included?

There are two reasons why a college might not be included in the ranking.

First, does it meet the eligibility requirements? This is an abbreviated summary:

  • Title IV eligible
  • Awards four-year bachelor’s degrees
  • Located in the 50 states or Washington, DC
  • Has more than 1,000 students
  • Has 20 per cent or fewer online-only students (the current shift to remote learning during the Covid-19 pandemic is an exception)
  • Is not insolvent

We also accept US service academies provided that they are able to supply the necessary data.

The second reason is missing data elements. Where possible we will impute missing values, but where that is not possible we have excluded colleges. In addition, some colleges did not meet our threshold for a valid number of respondents (greater than or equal to 50) to the student survey in 2019. We have also excluded private for-profit colleges.


Reader's comments (6)

Greetings — It also appears your methodology for assigning scores to the indicators has changed for 2021 rankings. Please create an addendum explaining what has changed, and how to use the new scoring methodology to make meaningful comparisons to scoring data from previous years.
Hi. The metric weightings are the same as last year. The only metric calculation that has changed is the 'student inclusion' metric - see the section titled 'Key changes since last year'. I hope that helps.
With respect to the resources methodology, it appears to penalize institutions that have figured out how to operate more efficiently. How do you statistically adjust for efficiency gains? Or do you just reward institutions that spend more? With respect to the student/faculty ratio, research indicates that there is not a linear relationship between student outcomes and class size – that the relationship is marginally step-based. How does your model account for this research? In addition, how do you adjust for actual mean and median class size at the respective institution, as more faculty do not necessarily translate into smaller class sizes – especially at the underclassman level?
Why would an institution have no score on a metric, especially when that metric is drawn largely from IPEDS data?
I do not see two notable colleges on your list, Grove City College and Hillsdale College. I assume this is because neither school accepts any federal funding. Our daughter went to Grove City, and their tuition was consistently far lower than most comparable colleges, while students graduated with low debt and consistently got high paying job offers or admittance to top graduate schools. Excluding these two schools because they avoid federal funding makes your rankings highly dubious.
Regarding “Technical overview of metrics; Resources; Papers per faculty – the number of academic papers published by faculty from a college in the period 2015-2019 (Elsevier) divided by the size of the faculty (IPEDS)”:

Assume three faculty members at school A publishing 7, 8 and 9 papers: the average is 8 papers at school A. School B has one faculty member publishing 8 papers. These schools both average 8 publications. Are they equal? No: 1) school A, with the larger faculty, has one person publishing at a higher rate than the one person at B, so if quantity equals quality (a dubious assertion supported by normalization), school A is superior to school B; 2) the faculty at A has generated 24 papers versus 8 generated at school B. Would most students prefer B to A? Probably not: school A (probably) offers a greater research corpus as well as (probably) greater faculty access. What is the point of normalizing papers per faculty? 1) It arbitrarily tends to give the edge to smaller schools; 2) it doesn’t speak to citation count or quality of papers; 3) it doesn’t speak to total paper count; 4) it doesn’t speak to faculty access. Of the metrics used, any metric that normalizes by size fits into the category of “not very useful”.