America’s colleges and universities, as educational institutions, don’t know what they’re doing. I don’t mean this in the casual and idiomatic sense that they are incompetent and inept – although some are. I mean it literally. They do not know what the consequences of their actions are; they don’t understand what results their behaviour produces.
The result is a vast disconnect between what these institutions claim to accomplish and what they really do accomplish. And this, I would argue, has been conspicuously the case for many years and has been systematically ignored by most of those in a position to do anything about it.
An egregious and even disabling paradox is at work here. The faculties and administrators in colleges and universities are nearly all very smart people and, in many cases, are deeply knowledgeable experts in a variety of fields. At the same time, decision making about educational policy, especially as it affects undergraduate students, is conducted largely in a world of myth: of unsupported assumptions and unsupportable beliefs, often advanced with reasoning laced with superstition and bias.
The foundational myth guiding the behaviour of colleges and universities is what I call the instruction myth. This consists of the belief that what colleges do – their mission, if you will – is to provide instruction in the form of discrete classes.
Students go to college to get an education, of course. But what should be the measure of whether they receive one? Colleges’ response is to count classes. At nearly all of the colleges and universities in the US, a bachelor’s degree is granted to students who have completed 120 credit hours of coursework, distributed as prescribed by the institution. Of course, there is some quality control: students must pass their classes with, on average, a grade of C or better, and this record adds up to their transcript. But the transcript still provides little information on the question of what students actually learn in their classes, how well they learn it, how long they remember it, or what they can do with it: all those things that in normal usage we would include under the heading of education.
Most colleges and universities pay no systematic attention to what students learn or how they learn it. If the teacher fails to meet her classes or fails to assign letter grades, attention will be paid: steps will be taken to bring the system back to equilibrium. But if the students forget everything they “learned” within a month of the final exam, nobody at the institution will even be aware of the fact, much less do anything about it.
I am not suggesting that college faculty do not care about what students learn. Of course they do. For most, the desire to advance learning is what drew them to the profession in the first place. Anonymous surveys have shown for years that most of the people who go into faculty work do so primarily because they want to teach. But college faculty, like everybody else, respond to incentives. And most of the incentives in their work are based not on student learning but on the instruction myth.
The instruction myth leads to and reinforces what I call the myth of quality. We often hear that the US has the best universities in the world. But that claim is based entirely on the research accomplishments of the great institutions. When we look at undergraduate education, we see a very different picture. Accepting the instruction myth, institutions proceed on the idea that if the classes are full and students are completing them, all is well and the job is getting done. This is not true. The median grade at Harvard University lately is an A-, which means that half of the grades awarded are A- or higher. Wow! But is it all that it seems? Probably not.
What does a B mean? “Good” is the most common definition we will find in college catalogues (“average” is supposed to be a C). But how good? And good at what? A B in one class may mean that the student did excellent work but had two late assignments. In another, it may mean that the student struggled to reach minimum standards but did extra credit to bring up the grade. In another, it might mean that the student understood nothing but successfully crammed for the two multiple-choice tests, a mid-term and a final, before forgetting it all with a sigh of relief.
In most cases, an end-of-semester grade is a snapshot of a student’s performance at a specific time. In 2006, one institution paid several hundred students to retake their final exams two or three years after they had completed a number of introductory courses. The (unpublished) results reveal that the average grade fell from a B+ to a D. There’s no way to tell what a B means.
One thing we can tell is that grades mean less than they used to. Apart from a brief decline in the late 1970s, grades in the US have been going up for the past 50 years. At the same time, by every plausible indicator, student effort has been going down. In 2010, Philip Babcock and Mindy Marks, two economists from the University of California, examined the credible evidence going back to 1961 on the question of how much time students spend on academic work. They found that the average number of hours per week devoted to school and study fell from 40 in 1961 to 27 in 2004. And this was not a result of changes in the student population. According to Babcock and Marks’ National Bureau of Economic Research working paper, “The falling time cost of college: evidence from half a century of time use data”, “study time fell for students from all demographic subgroups; within race, gender, ability, and family background; overall and within major; and for students who worked in college and for those who did not. The declines occurred at four-year colleges of every type, size, degree structure, and level of selectivity.”
In 2012, Stuart Rojstaczer of Duke University and Christopher Healy of Furman University tracked the progress of college grades going back to the 1930s, comparing it with the evidence of outcomes and student effort. Their paper in the Teachers College Record, “Where A is ordinary: the evolution of American college and university grading, 1940-2009”, concluded that “the cause of the renewal of grade inflation, which began in the 1980s and has yet to end, is subject to debate, but it is difficult to ascribe this rise in grades to increases in student achievement”. Grades are a floating exchange rate, telling us little about the value of the experiences behind them.
For a growing portion of those who hire graduates, college grades are basically irrelevant. In 2013, Laszlo Bock, senior vice-president of people operations at Google, told The New York Times that “one of the things we’ve seen from all our data crunching is that GPA’s are worthless as a criteria [sic] for hiring ... We found that they don’t predict anything.”
Serious research on what students learn in college points to one conclusion: they don’t learn much, and it’s hard to generalise about what it is. Most famously, Richard Arum and Josipa Roksa’s 2011 book Academically Adrift: Limited Learning on College Campuses followed the same students through all four years using the sophisticated Collegiate Learning Assessment and showed that many learned very little about how to reason their way through problems during that time.
The Wabash National Study, a consortium of a couple of dozen colleges and universities that attempted to monitor their students’ progress between 2006 and 2012, reached similar conclusions. In a 2011 article for Change: The Magazine of Higher Learning, “How robust are the findings of Academically Adrift?”, several of those involved in the study write that the Wabash study’s findings, “based on an independent sample of institutions and students and using a multiple-choice measure of critical thinking substantially different in format than the Collegiate Learning Assessment, closely match those reported by Arum and Roksa”. The idea that American colleges and universities consistently provide a high-quality education is a myth.
Keeping in mind that the people who run universities are very smart people, how can this situation persist? The answer resides in another myth that is widely accepted in higher education and demonstrably false: the myth of the unity of teaching and research.
How do college teachers become college teachers? In the case of four-year colleges and universities (and often today for community colleges as well), the minimum requirement for a faculty position is a doctorate. And a doctorate is granted on the basis of evidence that the recipient can do original academic research.
A task force on doctoral study set up in 2014 by the Modern Language Association, the largest professional association in the humanities, put it like this: “Doctorate-granting universities confer prestige on research rather than teaching. A coin of the realm is the reduced teaching load – even the term load conveys a perception of burdensomeness – while honor and professional recognition, not to mention greater compensation, are linked largely to research achievements. The replication of the narrative of success incorporates this value hierarchy and projects it as a devaluation of teaching.” The same is true in every academic discipline.
While most graduate students also work as teaching assistants in their departments, few get much systematic training or preparation for teaching. New faculty are hired overwhelmingly on the evidence of their research prowess. Usually, that’s all the credible evidence there is.
The party line among college presidents and leaders is aptly summarised by James Duderstadt, who was president of the University of Michigan in the 1990s: “Teaching and scholarship are integrally related and mutually reinforcing and their blending is key to the success of the American system of higher education,” he wrote in his 2000 book, A University for the 21st Century. “Student course evaluations suggest that more often than not, our best scholars are also our best teachers.” This reasoning allows universities to substitute the research question for the teaching question: Is Professor Schmedlap a good teacher? Of course: he’s written a book and 12 articles.
Truth be told, this is pure invention, wishful thinking. There is no demonstrable connection at all between the ability to publish research articles and the ability to teach well. In the 1990s, two scholars, John Hattie and Herbert Marsh, conducted an analysis of the most credible evidence they could find on the connection between research and teaching. They reviewed 58 studies for their 1996 paper, “The relationship between research and teaching: a meta-analysis”, published in the Review of Educational Research. All of them involved outside evaluations of both teaching and research quality for university teachers. The conclusion: “The evidence suggests a zero relationship.”
Research has continued on the question, of course. One of the most interesting studies was done at Northwestern University in 2017. The Illinois institution’s president, Morton Schapiro, and fellow economist David Figlio gathered an unusually large body of evidence on the teachers of introductory courses. And they devised a clever way of assessing teaching quality by looking at students’ performance in subsequent classes. To measure research effectiveness, they looked at an index of how influential a professor’s writings were in the field, how often they were cited and referred to in other research, and whether they had been recognised by university or national organisations for research work. They applied these tests to 170 tenured faculty, who taught more than 15,000 first-quarter students between 2001 and 2008. This study, written up under the title “Are great teachers poor scholars?”, is one of the most sophisticated and persuasive I have ever encountered on the question.
What did Figlio and Schapiro find? Exactly the same thing that Hattie and Marsh found 20 years earlier: “Our bottom line is that, regardless of our measure of teaching and research quality, there is no apparent relationship between teaching quality and research quality.” Again, zero. None. The unity of research and teaching is pure invention, a myth.
It is, however, a myth that spares these institutions a lot of effort. Because if teaching and research are basically two sides of the same coin, you can train, socialise, hire and promote faculty based on their research accomplishments and just assume that teaching will take care of itself. For the most part, that is what colleges and universities do. In his 2010 book Crisis on Campus: A Bold Plan for Reforming Our Colleges and Universities, Mark Taylor, a professor in the department of religion at Columbia University, puts it this way: “Though most universities pay lip service to teaching and rely on student course evaluations, in my experience, teaching ability plays no significant role in hiring and promotion decisions. Publications and evaluations of other specialists in the field are virtually all that count.”
What makes this acutely ironic is that faculty members spend, on average, a great deal more time on their teaching than they do on their research. So they are, in essence, promoted and rewarded on the basis of a small minority of their actual work, while the bulk of that work is ignored in the reward system.
These same institutions, which can’t tell you how their teachers teach or what their students learn, offer education as their central product and promote their excellence to students, parents and society at large. Economists Dahlia K. Remler and Elda Pema, in a 2009 paper for the National Bureau of Economic Research, found that universities and colleges “reward research while selling education”. The paper, “Why do institutions of higher education reward research while selling education?”, goes on: “Empirically, whatever competitive forces are at work in the education services market, they appear to reward the research reputations of institutions of higher education far more than they reward their teaching reputations.”
The reason, they find, is that institutions invest heavily in the machinery for evaluating research and reward research excellence. On the other hand, they invest little, in terms of time, effort or money, in evaluating or improving teaching. In the realm of research, higher education knows what it is doing. In the realm of teaching, it does not.
So the competitive market for higher education is a strange market, in which what you buy is not what you get and reputation is unconnected to performance. Institutions compete for students by in essence imitating high-prestige models, even though there is no evidence that prestige has any bearing on what actually happens to students after they get to college.
A college education should make students observant, flexible and responsive. It should help them to evaluate evidence and weigh probabilities, to distinguish between true and false, between real and unreal. Well-educated students, we hope, know what they are doing and can shape their behaviour to the changing demands of the world. Yet in the US the institutions that we trust to effect this transformation have become blinkered and rigid. They generate an endless flow of self-referential data that tell them little or nothing about the real consequences of their work. They don’t know what they’re doing, and so they are not getting much better at doing it.
They face many challenges: cost, access and completion, all genuinely important. But fundamental to meeting any of these is the need to see what they are really doing, and to learn to do it better.
John Tagg is professor emeritus of English at Palomar College, California. His latest book, The Instruction Myth: Why Higher Education is Hard to Change, and How to Change It, is published by Rutgers University Press.