CONCEPTS. Where cognitive science went wrong. By Jerry A. Fodor. 174pp. Oxford University Press. £30 (paperback, £12.99). ISBN 0 19 823637 9.
Fodor's theory of mind and its problems
With no loss to her rationality, Lois Lane believes that Superman adores groundhogs and that Clark Kent loathes woodchucks. Yet how can this be, given that Superman is Clark Kent and the property of being a groundhog is the property of being a woodchuck? A not very informative answer, at least initially, but one with which virtually all philosophers and cognitive psychologists would agree, is that Lois has distinct concepts of Superman/Kent which she fails to realize are concepts of one and the same person, and distinct concepts of the property of being a groundhog/woodchuck which she fails to realize are concepts of one and the same property. Concepts, as that term is currently used in philosophy and psychology, are whatever things acquit the rationality of thinkers like Lois. But what things are concepts, and what theory can we give of them?
Jerry Fodor's answer is given against the background of certain hypotheses he has been defending most of his adult life. He calls the first hypothesis the representational theory of mind (or "RTM"). In its most generic form, RTM holds that we think in a language-like system of mental representation - that the neural configurations which realize our beliefs, desires, intentions, and so on, are like natural-language sentences in the way they have constituents and structure and in the way their meaning depends upon the meanings of their parts and structure. Meaning in a "language of thought", or "Mentalese", is what underlies the propositional content of one's thoughts, and it does this in the following way. One believes that such-and-such provided that one stands in a certain computational relation to a sentence of one's language of thought which means that such-and-such, and likewise, mutatis mutandis, for desiring, intending, fearing and so on that such-and-such. The idea is that a belief that it will rain and a hope that it will rain both involve a Mentalese sentence which means that it will rain; the two mental states differ in that the sentence underlying the belief is disposed to behave computationally in one sort of way, while the one underlying the hope is disposed to behave computationally in another sort of way. There's supposed to be one kind of computational relation constitutive of a state's being a belief, another that's constitutive of a state's being a hope, and so on, although RTM theorists have done little so far to identify these computational relations.
One of the biggest questions confronting RTM is the nature of Mentalese meaning, and two further background hypotheses invoked by Fodor concern the meaning of those mental representations that are Mentalese predicates. The first of these is that the meaning of a Mentalese predicate is the property it ascribes. Thus, "dog" (strictly, its neural counterpart) means the property of being a dog, "brown" means the property of being brown, "brown dog" means the property of being a brown dog, and so on. The second of these hypotheses Fodor calls informational semantics; it's a partial theory of that meaning relation which relates Mentalese predicates to the properties they mean. To a first approximation, the theory may be put as holding that, subject to certain qualifications, the correct account of the meaning relation for Mentalese predicates will take the form of a completion of the following schema: A representation R means property P in x's language of thought, if and only if it's a law that things that have P cause - in such-and-such way and under such-and-such conditions - occurrences of R in occurrences of sentences in x's language of thought which constitute beliefs.
So if we pretend that Al thinks in English and that his standing in the relevant computational relation to a sentence is his having that sentence in his "belief box" (that box in his head, metaphorically speaking, wherein his beliefs are stored), then the basic idea of informational semantics is simply this: It's a law of nature that when, in circumstances that are good for seeing, Al's open-eyed head is directly facing a nearby clear example of a dog, then, ceteris paribus, the sentence "That's a dog" features in Al's belief box, and it's in virtue of this that "dog" means doghood in Al's language of thought.
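Neither Fodor's book nor this review offers anything like code, but the belief-box picture lends itself to a toy sketch. Everything below is an invented illustration of my own (the Agent class, the string encodings, the function names): an agent whose crude perceptual "law" tokens "That's a dog" in its belief box when a dog is clearly presented, together with a check on whether that lawlike regularity holds.

```python
class Agent:
    """A toy thinker whose "belief box" is just a set of sentences."""

    def __init__(self):
        self.belief_box = set()

    def perceive(self, thing, good_viewing=True):
        # The crude "law": a clearly presented dog, in conditions good
        # for seeing, causes a tokening of "That's a dog" in the box.
        if good_viewing and thing == "dog":
            self.belief_box.add("That's a dog")


def lawfully_tokens(make_agent, stimulus, sentence):
    """Does encountering the stimulus reliably put the sentence in the box?

    On the informational-semantics idea, "dog" would mean doghood in the
    agent's language of thought in virtue of this regularity holding.
    """
    agent = make_agent()
    agent.perceive(stimulus)
    return sentence in agent.belief_box


al = Agent()
al.perceive("dog")
print(lawfully_tokens(Agent, "dog", "That's a dog"))   # True
print(lawfully_tokens(Agent, "cat", "That's a dog"))   # False
```

The sketch is of course far too crude to capture the theory's "ceteris paribus" hedge or the nomic, rather than merely actual, character of the required regularity; it only fixes the shape of the proposal.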
The qualifications alluded to are as follows. First, the schema can be correct only for semantically primitive representations - those representations, like "dog" and "red", whose meanings are not even partly determined by the meanings of constituent representations. This is so because for Fodor the meaning of "brown cow" is entirely fixed by its syntax and the meanings of "brown" and "cow". Second, the schema can't hope to be correct for all semantically primitive representations, but only for those that can enter into causal relations of the required sort (this excludes predicates like "number" and "unicorn", since numbers and unicorns don't enter into causal interactions). Third, strong constraints must be put on the kind of Mentalese sentence that instances of R occur in. For example, if the representation is "dog", then the sentence in the belief box must be a sentence like "Fido is a dog", in which doghood is actually being ascribed to something, and not a sentence like "Fido is not a dog" or "Fido is either a dog or a cat".
This brings us to Fodor's theory of concepts, whose essence has two parts. The first part is the claim that concepts are mental representations, conjoined with the further claim that x and y are instances of the same concept provided that x and y have the same meaning and are instances of the same mental representation.
So, suppose Lois Lane thinks in English and has in her belief box "Superman adores groundhogs", "Clark Kent loathes woodchucks", and "Superman flies". Then the two occurrences of "Superman" are occurrences of the same concept - SUPERMAN, to adopt Fodor's capital-letter convention for referring to concepts - but an occurrence of "Superman" and the occurrence of "Clark Kent" are instances of different concepts, for although both have the same meaning, they are instances of different representations. Similarly, the occurrence of "woodchuck" and the occurrence of "groundhog" are instances of distinct concepts, notwithstanding their sameness of content.
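Fodor's individuation criterion is mechanical enough to be put in a few lines of code. The following is my own toy rendering, not anything in the book; "Kal-El" is simply an invented stand-in for the common referent of Lois's two names.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Token:
    representation: str  # syntactic type of the Mentalese symbol
    meaning: str         # what the symbol refers to


def same_concept(x: Token, y: Token) -> bool:
    # Fodor's criterion as the review states it: two tokens are
    # instances of the same concept just in case they have the same
    # meaning AND are instances of the same representation type.
    return x.meaning == y.meaning and x.representation == y.representation


superman_1 = Token("Superman", "Kal-El")
superman_2 = Token("Superman", "Kal-El")
clark      = Token("Clark Kent", "Kal-El")

print(same_concept(superman_1, superman_2))  # True: both are SUPERMAN
print(same_concept(superman_1, clark))       # False: same referent, different symbol
```

The woodchuck/groundhog pair comes out exactly parallel: identical meaning, distinct representation, hence distinct concepts.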
Fodor calls the second part of his theory conceptual atomism. Conceptual atomism claims, "to put it very roughly, that satisfying the metaphysically necessary conditions for having one (primitive) concept never requires satisfying the metaphysically necessary conditions for having any other concept". This, however, is ambiguous. It could mean that it's possible for one to have any particular primitive concept without having any other concept at all; or it could mean that, while one perhaps can't have a primitive concept without having some concepts or other, there is no particular concept one must have.
Fodor clearly accepts the second reading, but does he also accept the first? Remarkably, he does appear to accept it (see, for example, page 14). This is remarkable because the first reading is obviously false; one couldn't have a mental representation which means a property, unless one could join that mental representation with others to get a Mentalese sentence that means a proposition which contains the property; and this entails that it's not possible to have a concept without having any other concept. Still, the weaker reading is interesting, and I'll understand Fodor to mean that it is the antidote to "where cognitive science went wrong" - to repeat his book's subtitle.
Where cognitive science - the interdisciplinary study of the mind/brain engaged in by philosophers, psychologists, linguists, computer scientists and neurophysiologists - went wrong, according to Fodor, was in subscribing to a theory of concepts he calls inferential role semantics (or "IRS"). IRS is incompatible with conceptual atomism, for it holds that a metaphysically necessary condition for having the concepts Fodor takes to be primitive is that those concepts enjoy specific inferential liaisons with particular other concepts.
There are at least three ways of being an IRS theorist. First, one may hold that there are "analytic" relations among concepts. Such a theorist is apt to hold, for example, that one can't have the concept MURDER without having the concept KILL, since one can't have the former concept without knowing that being murdered entails being killed. Second, even if one denies that there are analytic connections among concepts, one may hold that certain inferential connections are essential to certain concepts owing to their being recognitional concepts, concepts - perhaps PAIN and RED - whose possession requires an ability to recognize certain clear examples (for instance, one's own pains) of things falling under them. Third, one may hold that non-recognitional concepts may be individuated by the inferential connections of mental representations, even when those connections don't generate necessary truths. Thus, it may be that the inferential role constitutive of SUPERMAN requires one to have a thought that contains the concept SUPERHERO WHO FLIES whenever one has a thought containing SUPERMAN, even though "Superman is a superhero who flies" doesn't state a necessary truth, since Superman might not have been a flying superhero.
What arguments does Fodor offer for his various claims about concepts, and how good are those arguments? There isn't space left in this review to do justice to either question, and this is unfortunate both because of the richness of considerations adduced by Fodor in this important, aggressively argued, accessible, witty, irreverent, wide-ranging, provocative and bound-to-be-influential book written in his famous no-niceties, shoot-from-the-hip style, and because there is, I dare say, a lot to be said against Fodor's account of what concepts are, what individuates them, and what it is for them to have meaning. I'll close by mentioning just two of the problems I find.
The first problem is that there are certain concepts for which an IRS account seems utterly inescapable. I have especially in mind logical concepts like AND, NOT, OR, IF-THEN, SOME and EVERY, as well as mathematical concepts like NUMBER, SET, ADDITION, and so on. For consider what would show that "&" means conjunction in Al's language of thought. We would conclude that "&" meant conjunction if we found that whenever P and Q were in Al's belief box, then so was "P & Q", and that whenever "P & Q" was in his belief box, so were P and Q, and unless something very close to that were found, we wouldn't conclude that "&" meant conjunction in Al's language of thought. To accept this is to accept an IRS account of what bestows its meaning on "&" in Al's head. Curiously, Fodor nowhere discusses such concepts in his book. In the only place where he even mentions logical concepts, he implies that he accepts an IRS account of them. But the need for an IRS account for some concepts would appear to undercut several of Fodor's arguments against IRS. For example, his biggest complaint against IRS is that there's no principled way of determining which inferential roles are constitutive of a concept and which aren't. But if there can be a principled way for logical and mathematical concepts, why not for others?
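The test described for "&" is, in effect, a closure check on the belief box, and can be sketched directly. The sketch below is mine, not Schiffer's; a real test would concern the thinker's dispositions over time, not a single snapshot of the box, and the string encoding of sentences is an invented convenience.

```python
def behaves_like_conjunction(belief_box, connective="&"):
    """Check the two inferential roles the review describes for "&".

    Sentences are strings; atoms are sentences not containing the
    connective; "P & Q" encodes the conjunction of atoms P and Q.
    """
    atoms = {s for s in belief_box if connective not in s}
    conjunctions = {s for s in belief_box if f" {connective} " in s}

    # Introduction: whenever P and Q are both in the box, so is "P & Q".
    for p in atoms:
        for q in atoms:
            if p != q and f"{p} {connective} {q}" not in belief_box:
                return False

    # Elimination: whenever "P & Q" is in the box, so are P and Q.
    for c in conjunctions:
        p, q = c.split(f" {connective} ")
        if p not in belief_box or q not in belief_box:
            return False

    return True


print(behaves_like_conjunction({"P", "Q", "P & Q", "Q & P"}))  # True
print(behaves_like_conjunction({"P & Q"}))                     # False: P, Q missing
```

The point of the sketch is just that the test is wholly a matter of the symbol's inferential role: nothing about what "&" is lawfully caused by enters into it, which is why an informational account seems to get no grip on the logical concepts.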
Fodor also argues that IRS is incompatible with a computational theory of thinking, one which sees mental processes as operations on symbols: "For fear of circularity, I can't both tell a computational story about what inference is and tell an inferential story about what content is." Now I don't see that there is any problem here to begin with, but, if there is, why isn't the computational story about inference undermined by the fact that for very many concepts we must tell an inferential story about what determines their content?
The second problem is considerably more serious. In Chapter Two, Fodor lists five non-negotiable conditions on a theory of concepts. The fifth, and surely correct, condition is that "concepts are public; they're the sorts of things that lots of people can, and do, share". Any adequate theory of concepts, Fodor insists, ought to allow that both he and Aristotle shared the concept FOOD and that both he and Helen Keller shared the concept TREE. Moreover, Fodor clearly intends the publicity constraint to hold even for thinkers who think in different languages of thought. If I think in neural English and you think in neural French, it should still be possible for us to share innumerable concepts. But it follows from Fodor's theory of concept individuation - viz, that x and y are instances of the same concept, provided that x and y have the same meaning and are instances of the same mental representation - that the only people who can share concepts are those who think in the same language of thought. For, as Fodor makes clear, mental representations - ie, mental symbols - are syntactically individuated, individuated by neural properties in the way that spoken symbols are individuated by phonological properties and written symbols by shape properties. So, if in your head the neural variant of "dog" (ie, the neural word that stands to spoken and written "dog" as spoken "dog" stands to written "dog") means doghood and in Pierre's head the neural variant of "chien" means doghood, then, unacceptably, you and Pierre don't share the concept DOG.
The fact that Fodor's theory of concepts violates his own publicity constraint in this way shows that his account of concept individuation fails to state a necessary condition: it's not the case that x and y are instances of the same concept only if they're instances of the same mental representation. But his theory also fails to state a sufficient condition: it's not the case that x and y are instances of the same concept if they have the same meaning and are instances of the same mental representation. For it may be that the concept I express with "rabbit" is the one you express with "hare", and that the concept I express with "hare" is the one you express with "rabbit". This is clearly possible, but it wouldn't be if Fodor's account provided a sufficient condition for interpersonal sameness of concepts.
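The publicity objection can be made vivid with one more toy sketch of my own devising: once concept tokens are individuated by syntactic type as well as meaning, two thinkers with different languages of thought never come out sharing a concept.

```python
def fodor_shares(x, y):
    # Tokens are (syntactic type, meaning) pairs; Fodor's criterion
    # demands that both components match.
    rep_x, meaning_x = x
    rep_y, meaning_y = y
    return meaning_x == meaning_y and rep_x == rep_y


you    = ("dog",   "doghood")   # thinking in neural English
pierre = ("chien", "doghood")   # thinking in neural French

print(fodor_shares(you, pierre))  # False: same meaning, different syntax,
                                  # so officially no shared concept DOG
```

The rabbit/hare case shows the converse failure: matching syntax and meaning across two heads is still compatible, intuitively, with distinct concepts, so the criterion is neither necessary nor sufficient.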
The moral of this last problem seems to be that the required revision of Fodor's account is: x and y are instances of the same concept provided that x and y have the same meaning and are instances of mental representations that have the relevantly same inferential roles.
But this, alas, is an IRS account of concepts which is incompatible with conceptual atomism. Perhaps it was Fodor, and not cognitive science, who went wrong.
Stephen Schiffer is Professor of Philosophy at New York University.