Teaching quality in US higher education is a myth

US universities might be world-leading in research terms, but there is scant evidence that this has any bearing on their prowess as educators. Why do institutions of higher education show so little interest and aptitude in instilling genuine learning, asks John Tagg 

April 18, 2019

America’s colleges and universities, as educational institutions, don’t know what they’re doing. I don’t mean this in the casual and idiomatic sense that they are incompetent and inept – although some are. I mean it literally. They do not know what the consequences of their actions are; they don’t understand what results their behaviour produces.

The result is a vast disconnect between what these institutions claim to accomplish and what they really do accomplish. And this, I would argue, has been conspicuously the case for many years and has been systematically ignored by most of those in a position to do anything about it.

An egregious and even disabling paradox is at work here. The faculties and administrators in colleges and universities are nearly all very smart people and, in many cases, are deeply knowledgeable experts in a variety of fields. At the same time, decision making about educational policy, especially as it affects undergraduate students, is conducted largely in a world of myth: of unsupported assumptions and unsupportable beliefs, often advanced with reasoning laced with superstition and bias.

The foundational myth guiding the behaviour of colleges and universities is what I call the instruction myth. This consists of the belief that what colleges do – their mission, if you will – is to provide instruction in the form of discrete classes.

Students go to college to get an education, of course. But what should be the measure of whether they receive one? Colleges’ response is to count classes. At nearly all of the colleges and universities in the US, a bachelor’s degree is granted to students who have completed 120 credit hours of coursework, distributed as prescribed by the institution. Of course, there is some quality control: students need to pass the classes with an average grade of C or better, and these results add up to their transcript. But that still tells us little about what students actually learn in their classes, how well they learn it, how long they remember it, or what they can do with it: all those things that in normal usage we would include under the heading of education.

Most colleges and universities pay no systematic attention to what students learn or how they learn it. If the teacher fails to meet her classes or fails to assign letter grades, attention will be paid: steps will be taken to bring the system back to equilibrium. But if the students forget everything they “learned” within a month of the final exam, nobody at the institution will even be aware of the fact, much less do anything about it.

I am not suggesting that college faculty do not care about what students learn. Of course they do. For most, the desire to advance learning is what drew them to the profession in the first place. Anonymous surveys have shown for years that most of the people who go into faculty work do so primarily because they want to teach. But college faculty, like everybody else, respond to incentives. And most of the incentives in their work are based not on student learning but on the instruction myth.

The instruction myth leads to and reinforces what I call the myth of quality. We often hear that the US has the best universities in the world. But that claim is based entirely on the research accomplishments of the great institutions. When we look at undergraduate education, we see a very different picture. Accepting the instruction myth, institutions proceed on the idea that if the classes are full and students are completing them, all is well and the job is getting done. This is not true. The median grade at Harvard University lately is an A-. This means that half of all grades awarded are A- or higher. Wow! But is it all that it seems? Probably not.

What does a B mean? “Good” is the most common definition we will find in college catalogues (“average” is supposed to be a C). But how good? And good at what? A B in one class may mean that the student did excellent work but had two late assignments. In another, it may mean that the student struggled to reach minimum standards but did extra credit to bring up the grade. In another, it might mean that the student understood nothing but successfully crammed for the two multiple-choice tests, a mid-term and a final, before forgetting it all with a sigh of relief.

In most cases, an end-of-semester grade is a snapshot of a student’s performance at a specific time. In 2006, one institution paid several hundred students to retake their final exams two or three years after completing several introductory courses. The (unpublished) results reveal that the average grade fell from a B+ to a D. There’s no way to tell what a B means.

One thing we can tell is that grades mean less than they used to. Apart from a brief decline in the late 1970s, grades in the US have been going up for the past 50 years. At the same time, by every plausible indicator, student effort has been going down. In 2010, Philip Babcock and Mindy Marks, two economists from the University of California, examined the credible evidence going back to 1961 on the question of how much time students spend on academic work. They found that the average number of hours per week devoted to school and study fell from 40 in 1961 to 27 in 2004. And this was not a result of changes in the student population. According to Babcock and Marks’ National Bureau of Economic Research working paper, “The falling time cost of college: evidence from half a century of time”, “study time fell for students from all demographic subgroups; within race, gender, ability, and family background; overall and within major; and for students who worked in college and for those who did not. The declines occurred at four-year colleges of every type, size, degree structure, and level of selectivity.”

In 2012, Stuart Rojstaczer of Duke University and Christopher Healy of Furman University tracked the progress of college grades going back to the 1930s, comparing it with the evidence of outcomes and student effort. Their paper in the Teachers College Record, “Where A is ordinary: the evolution of American college and university grading, 1940-2009”, concluded that “the cause of the renewal of grade inflation, which began in the 1980s and has yet to end, is subject to debate, but it is difficult to ascribe this rise in grades to increases in student achievement”. Grades are a floating exchange rate that tell us little about the value of the experiences behind them.

For a growing portion of those who hire graduates, college grades are basically irrelevant. In 2013, Laszlo Bock, senior vice-president of people operations at Google, told The New York Times that “one of the things we’ve seen from all our data crunching is that GPA’s are worthless as a criteria [sic] for hiring...We found that they don’t predict anything.”

Serious research on what students learn in college has all pointed to one conclusion: they don’t learn much, and it’s hard to generalise about what it is. Most famously, Richard Arum and Josipa Roksa’s 2011 book Academically Adrift: Limited Learning on College Campuses followed the same students through all four years using the sophisticated Collegiate Learning Assessment and showed that many learned very little about how to reason their way through problems during that time.

The Wabash National Study, a consortium of a couple of dozen colleges and universities that attempted to monitor their students’ progress between 2006 and 2012, reached similar conclusions. In a 2011 article for Change: The Magazine of Higher Learning, “How robust are the findings of Academically Adrift?”, several of those involved in the study write that the Wabash study’s findings, “based on an independent sample of institutions and students and using a multiple-choice measure of critical thinking substantially different in format than the Collegiate Learning Assessment, closely match those reported by Arum and Roksa”. The idea that American colleges and universities consistently provide a high-quality education is a myth.


Keeping in mind that the people who run universities are very smart people, how can this situation persist? The answer resides in another myth that is widely accepted in higher education and demonstrably false: the myth of the unity of teaching and research.

How do college teachers become college teachers? In the case of four-year colleges and universities (and often today for community colleges as well) the minimum requirement for a faculty position is a doctorate. And a doctorate is granted on the basis of evidence that the recipient can do original academic research.

A task force on doctoral study set up in 2014 by the Modern Language Association, the largest professional association in the humanities, put it like this: “Doctorate-granting universities confer prestige on research rather than teaching. A coin of the realm is the reduced teaching load – even the term load conveys a perception of burdensomeness – while honor and professional recognition, not to mention greater compensation, are linked largely to research achievements. The replication of the narrative of success incorporates this value hierarchy and projects it as a devaluation of teaching.” The same is true in every academic discipline.

While most graduate students also work as teaching assistants in their departments, few get much systematic training or preparation for teaching. New faculty are hired overwhelmingly on the evidence of their research prowess. Usually, that’s all the credible evidence there is.

The party line among college presidents and leaders is aptly summarised by James Duderstadt, who was president of the University of Michigan in the 1990s: “Teaching and scholarship are integrally related and mutually reinforcing and their blending is key to the success of the American system of higher education,” he wrote in his 2000 book, A University for the 21st Century. “Student course evaluations suggest that more often than not, our best scholars are also our best teachers.” This reasoning allows universities to substitute the research question for the teaching question: Is Professor Schmedlap a good teacher? Of course: he’s written a book and 12 articles.

Truth be told, this is pure invention, wishful thinking. There is no demonstrable connection at all between the ability to publish research articles and the ability to teach well. In the 1990s, two scholars, John Hattie and Herbert Marsh, conducted an analysis of the most credible evidence they could find on the connection between research and teaching. They reviewed 58 studies for their 1996 paper, “The relationship between research and teaching: a meta-analysis”, published in the Review of Educational Research. All of them involved outside evaluations of both teaching and research quality for university teachers. The conclusion: “The evidence suggests a zero relationship.”

Research has continued on the question, of course. One of the most interesting studies was done at Northwestern University in 2017. The Illinois institution’s president, Morton Schapiro, and fellow economist David Figlio gathered an unusually large body of evidence on the teachers of introductory courses. And they devised a clever way of assessing teaching quality by looking at students’ performance in subsequent classes. To measure research effectiveness, they looked at an index of how influential a professor’s writings were in the field, how often they were cited and referred to in other research, and whether they had been recognised by university or national organisations for research work. They applied these tests to 170 tenured faculty, who taught more than 15,000 first-quarter students between 2001 and 2008. This study, written up as a working paper titled “Are tenure track professors better teachers?”, is one of the most sophisticated and persuasive I have ever encountered on the question.

What did Figlio and Schapiro find? Exactly the same thing that Hattie and Marsh found 20 years earlier: “Our bottom line is that, regardless of our measure of teaching and research quality, there is no apparent relationship between teaching quality and research quality.” Again, zero. None. The unity of research and teaching is pure invention, a myth.

It is, however, a myth that spares these institutions a lot of effort. Because if teaching and research are basically two sides of the same coin, you can train, socialise, hire and promote faculty based on their research accomplishments and just assume that teaching will take care of itself. For the most part, that is what colleges and universities do. In his 2010 book Crisis on Campus: A Bold Plan for Reforming Our Colleges and Universities, Mark Taylor, a professor in the department of religion at Columbia University, puts it this way: “Though most universities pay lip service to teaching and rely on student course evaluations, in my experience, teaching ability plays no significant role in hiring and promotion decisions. Publications and evaluations of other specialists in the field are virtually all that count.”

What makes this acutely ironic is that faculty members spend, on average, a great deal more time on their teaching than they do on their research. So they are, in essence, promoted and rewarded on the basis of a small minority of their actual work, while the bulk of that work is ignored in the reward system.

These same institutions, which can’t tell you how their teachers teach or what their students learn, offer education as their central product and promote their excellence to students, parents and society at large. Economists Dahlia K. Remler and Elda Pema, in a 2009 paper for the National Bureau of Economic Research, found that universities and colleges “reward research while selling education”. The paper, “Why do institutions of higher education reward research while selling education?”, goes on: “Empirically, whatever competitive forces are at work in the education services market, they appear to reward the research reputations of institutions of higher education far more than they reward their teaching reputations.”

The reason they find for this is that institutions invest heavily in the machinery for evaluating research and reward research excellence. On the other hand, they invest little, in terms of time, effort or money, in evaluating or improving teaching. In the realm of research, higher education knows what it is doing. In the realm of teaching, it does not.

So the competitive market for higher education is a strange market, in which what you buy is not what you get and reputation is unconnected to performance. Institutions compete for students by in essence imitating high-prestige models, even though there is no evidence that prestige has any bearing on what actually happens to students after they get to college.

A college education should make students observant, flexible and responsive. It should help them to evaluate evidence and weigh probabilities, to distinguish between true and false, between real and unreal. Well-educated students, we hope, know what they are doing and can shape their behaviour to the changing demands of the world. Yet in the US the institutions that we trust to effect this transformation have become blinkered and rigid. They generate an endless flow of self-referential data that tell them little or nothing about the real consequences of their work. They don’t know what they’re doing, and so they are not getting much better at doing it.

They face many challenges: cost, access and completion, all genuinely important. But fundamental to meeting any of these is the need to see what they are really doing, and to learn to do it better.

John Tagg is professor emeritus of English at Palomar College, California. His latest book, The Instruction Myth: Why Higher Education is Hard to Change, and How to Change It, is published by Rutgers University Press.


Print headline: Teaching is not making the grade


Reader's comments (5)

The first half of this article is very good-- highlighting solid evidence that, over the past 50 years, there has been simultaneously massive grade inflation and a huge decline in the amount of time students actually spend studying, and pointing to evidence from recent years, from Arum and Roksa's study and the Wabash study, that today's students don't appear to be learning or developing much. Then the second half of the article falls back on the same old cliche: it is the teachers' fault if students don't learn. This belief is precisely the problem. The average student 50 years ago was probably aware that, if he or she wanted a good grade, he or she would have to do a lot of work. The average student today believes that it is the teacher's responsibility to 'deliver' education to them, such that they should get good grades while doing very little work. Given that, as a matter of fact, no one can 'deliver' learning to you-- the only way anyone ever learns anything is through his/her own repeated and diligent practice-- of course the false belief that teachers are responsible for students' learning leads to poor educational outcomes. Arum and Roksa found that the factor most closely correlated with students actually learning and developing was quite simply the number of pages of reading assigned to them per week; this was associated with teachers and the institution having high expectations of students-- making high demands on them; and this was associated with students doing more work and thus learning more. Of course "research credentials" of the teacher are unlikely to bear any relationship to students' learning, because such credentials are unlikely to bear any relationship to the teacher's willingness to make high demands of his/her students-- or the institution's willingness to permit the teacher to do so.
@Excelsior I completely agree! In fact, I would go a step further: effective learning involves at least three parties: the instructor, the learner, and the instructional infrastructure (e.g., learning resources). People often appear to simplify by laying the blame on one of these three aspects (e.g., lack of student engagement, lack of trained instructors, or lack of govt funding for HE) rather than seeing it as a combination of all three. All three appear to be moving towards having undergraduates do less to achieve the same grades as before.
During my years as a university administrator and VC, I came across inspiring and exceptional teachers who were not strong researchers, and whose research credentials were therefore lower than those of colleagues who were not good teachers (who could not inspire their students). What a loss: the good teachers stayed in the lower cadres! We need to find a way to reward good teachers.
There are two issues here. One is the commodification of education, in which students are increasingly viewed as "customers" hardly different from those at a fast-food restaurant. They pay their money and expect, in return, "good grades". This is where the other issue -- that of how universities are ranked -- comes into play. For good or for ill, universities are simply not ranked on how well their students have learned to do anything (which is, admittedly, a difficult thing to assess); they are ranked on the ability of their faculty to produce publications and attract funding (which, in contrast, are very easy things to assess). No surprise, then, that when faced with unhappy "customers" (i.e. students who want good grades in return for having paid their tuition, not in return for learning anything), the simple solution is to simply inflate the grades. After all, doing so has no impact on the professors "real" jobs, which are to publish and attract funding. In fact, the less professors have to worry about whether students are learning anything (and the less time professors invest in mollifying unhappy "customers"), the more time they can put into publishing, etc. There is simply no way that this situation will change unless the way universities are ranked is changed. (Mind you, I'm not suggesting that conducting research is _not_ something on which universities should be evaluated. Rather just that, if one thinks teaching/learning is something that should happen at universities, then it needs to be part of how those universities are themselves evaluated.)
I think Excelsior's comment above nails it: if we care about learning, then the focus must be on producing hard-working students rather than happy 'customers'. But I wouldn't be surprised if instead the future holds additional bureaucracy to generate marketable numbers showing that we produce happy (but incompetent) customers.