Who are you calling a pseudoscientist?

Academics need to think far more carefully about how they define and police the boundaries between legitimate and illegitimate knowledge, argues Michael D. Gordin

June 24, 2021
A couple viewing the head of Italian criminologist Cesare Lombroso, preserved in a jar
Source: Getty

If you try to picture a “pseudoscientist”, you might imagine an astrologer or a creationist. But that is not what practising scientists tend to mean by the term.

When I used to ask them at social events to describe a pseudoscientist, they would often say: “You see, there is this person in my field who has published this crazy result. I’ve pointed it out, in print and in person, several times, and he [it was always ‘he’] refuses to correct it.” I must have received some variant of this response at least a dozen times. It was easily the most common answer and it has something important to teach us about the way science operates within universities and more generally.

We can start with two significant points. The first is that the fringe beliefs most frequently associated with the term “pseudoscience” in public discussions – alchemy, extra-sensory perception (ESP), Bigfoot or Flat Earth – do not preoccupy practising scientists at the lab bench. Those particular denizens of the fringe had already been demarcated out of bounds and tacitly dismissed by my interlocutors as beneath notice.

Even more important, however, is the fact that when prodded to explore the boundary between science and pseudoscience, the scientists invoked not epistemological first principles but the rough-and-tumble context of scientific disputation and publishing. That is, they referred to research in their own fields as normally practised, and from within that context confidently demarcated legitimate from illegitimate approaches.

So when scientists think about “pseudoscience”, they no doubt include the exotic phrenologists and Lysenkoists, but at the forefront of their minds are the more humdrum colleagues arguing in bad faith. To be clear, I am not claiming that there is a continuity between the content of mainstream science and the demonised fringe – rather the opposite. If you imagine a spectrum running from excellent science through good science, mediocre science (most of it, by definition) and poor science to execrable science, then “pseudoscience” isn’t even on it. It is not “bad science” by another name; instead, it is an impostor, masquerading as legitimate knowledge.

What is often presented as a purely intellectual exercise of demarcation – sifting the scholars from the cranks – is actually a way of policing a contentious border area. As with all policing actions, the fundamental criteria regarding what is acceptable deviance (or innovation) and what is beyond the pale are political. 

I claim that the very process of today’s mainstream science necessarily produces a host of discarded doctrines which can take forms that, under certain conditions, could be recategorised as “pseudo”. Since demarcation is inevitable and the edges of the scientific frontier are highly dynamic, universities and others who allocate resources to research should reflect explicitly on how these transformations can happen.

The process is an unintended consequence of the adversarial organisation of scientific research, dominant for at least the past two centuries. The way a scientist makes her reputation is by building on past findings, of course, but if all she does is confirm what everyone knew before, then her career stagnates. The pressures in scientific publication are to do something new, and that usually means refuting a claim associated with the consensus. Typically, this isn’t a challenge to a core tenet – electrons don’t exist! – but rather to a small or medium-scale position.

Credit in science is allocated for priority (being first) and for being more correct than your competitors investigating the same questions. There will always be winners and losers. Eventually, many of today’s winners will become losers, as their accepted positions are in turn displaced by new scientific research. This is the ordinary yet incredible dynamism of science that has elicited so many accolades.

Yet it also produces an instability about the nature of scientific claims. Pluto was a planet, until it wasn’t (and so on). At any one point in time, there is a mainstream scientific consensus, but there are also doctrines that are being displaced from it, shunted to the fringe, often by radical theories that came from the fringe themselves, such as that an asteroid killed the dinosaurs.

Some of these superannuated ideas become what we might call “vestigial” pseudosciences: doctrines that were once considered mainstream, or at least candidates for mainstream validity, but which over time are relegated to the dust heap by the consensus.

A good example is astrology. In 16th-century Europe, astrology was so far from being a pseudoscience that it was arguably the leading science. Based on an ever-expanding collection of empirical data organised through quite sophisticated mathematics, it made detailed predictions and enjoyed munificent support from wealthy patrons. Its status was always contested, but it took centuries before it faded away as a legitimate domain of elite natural philosophy. 

The pseudoscientific status ascribed to astrology, in other words, is not hardwired into its tenets, but a product of their interaction with the context of contemporary scientific knowledge (which always changes). A great many of the theories most frequently called pseudosciences – creationism, phrenology, eugenics – were at one point either mainstream or reasonable candidates for mainstream status. They were displaced by the confrontational attacks that are the mainstay of scientific debate.

A scientist holds up a test tube containing a mushroom cloud: a symbolic image for cold fusion

Over centuries, this process is easy to observe, but it is harder to evaluate in the here and now, especially when the science is innovative and controversial. Nobody self-consciously decides to be a pseudoscientist. Those saddled with this label by orthodox scientists often see themselves as simply doing science, just on the more innovative fringe neglected by their stodgier, consensus-driven colleagues. Even in cases where the knowledge claims involved are arguably hoaxes or based on clear mistakes, there is still the potential for one of the discarded doctrines – should it garner enough adherents – to establish an existence on the fringe. 

Consider a classic example: cold fusion. Unlike astrology, its trajectory to the fringe took less than two months. On 23 March 1989, Stanley Pons and Martin Fleischmann, two electrochemists at the University of Utah, held a press conference to announce a revolutionary discovery. Using a very simple set-up of electrodes immersed in solution, they claimed that the palladium electrode – which highly concentrates hydrogen ions – had generated a huge heat spike. Their interpretation of these anomalous results was that they had succeeded in fusing hydrogen nuclei into helium. Since the cell appeared to produce much more energy than they had put in, the phenomenon was dubbed “cold fusion”.

If these results had been confirmed, it would have been the most important scientific discovery of the century, even the millennium. Cold fusion could satisfy all of humanity’s energy needs without carbon-dioxide emissions or radioactive waste, transforming the economy and the planet.

The University of Utah’s technology transfer office organised the press conference – something still unusual in 1989, especially while the peer-review process was ongoing – and then flew Pons and Fleischmann to Washington, DC to lobby Congress for funding to scale up cold fusion. It is worth underscoring that this was almost a textbook example of what contemporary universities are supposed to do: encouraging innovative science that pushes the boundaries of knowledge, especially cutting-edge work that can yield economic benefits.

And then the bottom fell out. At first, numerous labs leapt at the opportunity to replicate these amazing findings, but most efforts stalled, in part because the Utahns did not share information readily, pleading the sanctity of the refereeing process. The few confirmations that were announced were quickly retracted: a faulty neutron-detector here, a miscalibrated thermometer there.

It got worse. On 1 May, at the annual meeting of the American Physical Society, a group of physicists and chemists eviscerated the central claims of the Pons-Fleischmann experiment. For example, if fusion had generated the amount of heat they claimed, Pons and Fleischmann would have been killed by the accompanying neutron flux. The two electrochemists moved to France, and the tempest quieted down. It’s a fascinating story, and I encourage you to read the accounts in books such as Bart Simon’s Undead Science: Science Studies and the Afterlife of Cold Fusion and Frank Close’s Too Hot to Handle: The Race for Cold Fusion.

What happened next might surprise you. A small group of researchers continues to this day to explore the Pons-Fleischmann approach to energy generation. Specialised journals emerged in the mid-1990s, and there have been dedicated conferences ever since. While mainstream nuclear researchers declare the phenomenon of palladium-induced fusion to be pseudoscientific, it has not simply vanished. It was a controversial idea that tried to move from the fringe to the mainstream and failed – but then retreated to a different part of the fringe, where it lives on. Sometimes, fields like this gain a foothold at universities and survive for longer than you might expect, like the Princeton Engineering Anomalies Research (PEAR) lab at my own institution, which investigated the psychic manipulation of electronics from 1979 until 2007, when it moved off campus.

So my nosy questions at academic cocktail parties raise an important concern for everyone involved in science, especially in the resource-scarce conditions of today’s universities. We are faced with a gigantic number of claims to knowledge, and nobody has the time, energy or resources to investigate them all. We all necessarily engage in acts of demarcation, deciding which are worth exploring and which should be discarded as likely nonsense. We often use the consensus as a benchmark for reasonable investigation, but this is not foolproof. The consensus is not always correct, and what was tossed aside as mistaken or even “pseudoscientific” can turn out to be important.

It would be nice if we had a bright-line demarcation standard, such as Karl Popper’s falsifiability criterion, but we don’t (creationists make many falsifiable claims, for instance). The demarcation criteria we use in practice are more ad hoc, calibrated to fluctuating standards of how much of the fringe to tolerate. These can differ strongly by discipline, by institution and even by researcher.

I call this problem the “central dilemma”. We can set our standards for what seems a plausible knowledge claim extremely high, so that we only entertain small deviations from orthodoxy, but if we do so, we will strangle exciting breakthroughs in the cradle. (Neither relativity nor quantum theory would have earned a hearing.) So most of us allow some of the fringe into our inboxes and journals, hoping that new ideas will propel the adversarial mechanisms of science to a deeper understanding of nature. The problem is that there is no way to know in advance whether a radical claim is brilliant or nonsense – you have to take each on a case-by-case basis.

Every grants review panel, every tenure committee, every dissertation adviser is confronted daily with the central dilemma. If you want to brand your institution a maverick university that attracts outside-the-box talent, you’ll have to engage in more debunking and filtering. Without a bright-line solution, you have to work by rules of thumb.

One possible first step would be to divide the domains of research according to how potentially harmful fringe theories could be, or to work out how costly it would be (in terms of money or time) to debunk a proliferation of unconventional claims. Fringe theories in the areas of environmental pollution and public health, for instance, can have severe effects on people’s welfare. But we probably need more outside-the-box thinking in the former area than in the latter (where the orthodox principles generally work pretty well). So perhaps the bar should be set differently in those two cases.

As for the costs involved in filtering out the potentially credible from the false, scientists already make calculations based on the potential fruitfulness of new ideas and their own capacities. More explicit discussion of how those implicit criteria function would help decision-makers at all levels – including students – navigate the hazards of the everyday deluge of information. Demarcation is inevitable, and there is no shortcut.

This is why my interlocutors always found their pseudoscientists within their own disciplines, and not ranting in a tinfoil hat in the village square. It’s the right place to look.

Michael D. Gordin is Rosengarten professor of modern and contemporary history at Princeton University. His latest book, On the Fringe: Where Science Meets Pseudoscience, was recently published by Oxford University Press.
