
Chapter 1 - Introduction and Background Assumptions

Published online by Cambridge University Press: 17 April 2025

Peter Carruthers, University of Maryland, College Park

Summary

This chapter begins by sketching the standard belief-desire model of action and action-explanation employed by most philosophers, while also noting some variations. Of necessity, given the immense literature on the topic, this is presented at a high level of generality, abstracting over many local disagreements. The model and its variations provide the main set of foils for the scientifically grounded accounts to be discussed in later chapters, which are then briefly previewed. The remaining sections of the chapter go on to explain and quickly motivate three major assumptions that are taken for granted throughout the remainder of the book (and also by many philosophers and almost all cognitive scientists, it should be said): realism, physicalism, and representationalism.

Information

Explaining our Actions: A Critique of Common-Sense Theorizing, pp. 1–19. Cambridge University Press, 2025.

Chapter 1 Introduction and Background Assumptions

This chapter begins by sketching the standard model of action and action-explanation employed by most philosophers, while also noting some variations. Of necessity, given the immense literature on the topic, this will be presented at a high level of generality, abstracting over many local disagreements. The model and its variations will provide the main set of foils for the scientifically grounded accounts to be discussed in later chapters, which are briefly previewed in Section 1.2. The remaining sections of the chapter then go on to explain and quickly motivate three major assumptions that are taken for granted throughout the remainder of the book (and that are agreed upon by many philosophers and almost all cognitive scientists, it should be said): realism, physicalism, and representationalism.

1.1 The Standard Model and Variations

The sub-discipline of philosophy called “the philosophy of action” is almost exclusively about what philosophers call “intentional action” (Mele 2003; Wilson & Shpall 2012; Piñeros Glasscock & Tenenbaum 2023). These are actions done with a goal or intention in mind, and are done for reasons. In their most general form, the reasons that motivate actions (“motivating reasons”) are things toward which one holds a “pro-attitude” of some sort (a desire) or a “con-attitude” (a repulsion), together with beliefs about the states of the world that might enable one to achieve or avoid the things in question (Searle 1983). This is so-called “belief-desire psychology.” Actions are taken to achieve things one wants in light of one’s beliefs about the means of getting them. For example, Mary wants to go to medical school and believes she needs to study hard to get there, which is why she is studying.

Most philosophers have followed Davidson (1963) in thinking that reasons motivate actions by causing them. When one acts because one wants something and believes that the action will achieve what one wants, the desires and beliefs in question are the causes of one’s subsequent action. So action-explanation is a species of causal explanation. As for the nature of the explaining attitudes themselves (beliefs and desires), there are varying opinions, as we will see. But it is nevertheless agreed that they are propositional attitudes. They have contents that can be correctly expressed using a that-clause. Mary wants that she should get accepted into medical school and believes that she needs to study hard to get in. And these states (in the presence of a host of enabling conditions) causally explain the fact that she is studying.

Moreover, beliefs and desires are thought to be distinguished from each other by their functional roles. It is common to use the phrase “direction of fit” in this connection (Searle 1983). Thus beliefs have a mind-to-world direction of fit: their role is to “fit” or “match” the state of the world, and to guide action-selection appropriately in the light of that fit. Desires, in contrast, have world-to-mind direction of fit: their role is to change the world in such a way as to “fit” or “match” the content of the desire. Thus beliefs have correctness-conditions whereas desires have satisfaction-conditions. Mary’s belief that she needs to study hard to get into medical school is true if she really does need to do that, false otherwise. Her desire is satisfied if she succeeds in getting into medical school, frustrated if she fails. And it is the desire, given her belief, that motivates her studying – that causes her to study.

While Davidson (1963) assumed that beliefs and desires are the only categories of mental state that are needed to explain action (in addition to perceptual states, of course), most of the field has since become convinced by Bratman (1987, 1999) and others that one also needs to appeal to plans or intentions. Beliefs and desires inform planning and decision-making, and action itself is subsequently controlled and guided by the plans decided upon. These intentions, once formed, are thought to constrain one’s reasoning thereafter, pre-empting further decision-making and motivating the intended actions. Intentions, too, are thought to be propositional attitudes. Knowing that there is a test coming up tomorrow, Mary might decide that she should study until late tonight, thereby forming an intention that guides and constrains her plans for the evening. (For example, she might turn down an invitation to a party.)

Much less attention has been paid to forms of action other than intentional action. As Frankfurt (1978) points out, however, there is a sense in which a spider making its way across the floor is acting. Its movements are controlled ones, and might be guided by a simple aim, such as to reach the darkness under the bed. But it is thought that most human actions are distinctively different. (In part this may be because of a reluctance to attribute propositional attitudes or even simple forms of practical reason to spiders and other animals. If so, this is surely a mistake; Carruthers 2009, 2019.) Human actions, insofar as they conform to the standard model, are thought to reflect our distinctive rational capacities. And while philosophers of action might occasionally notice that some human actions can be habitual ones, rather than intentional, these are apt to be set aside as not really paradigmatic cases of action. (This may be because people vastly underestimate the extent to which human life is governed by habits, as we will see in Chapter 2.) Indeed, some say that we are passive with respect to such actions (Wu 2023): they are things that happen to us, rather than things that we genuinely do. Only intentional actions are thought to be truly expressive of our selves.

While discussion of intentional action in the 1960s and 1970s had little to say about the role of emotion in motivating action, this has changed in recent decades. But many of the resulting theories still cleave to the basics of the belief-desire model. Thus on some views emotions are identified with a special class of judgments (Solomon 1976; Nussbaum 2001). To be afraid of the spider is to judge that it is dangerous, or at least to entertain the thought that it is (Greenspan 1988). Others have maintained that emotions are combinations of judgments with desires (Marks 1982; Gordon 1987), together with some degree of bodily arousal, perhaps. So to fear the spider is to think of it as dangerous and to want to get away from it. But some argue that emotions are more like a form of evaluative perception (Tappolet 2016). So to fear the spider is to see it as dangerous, in some kind of simple nonconceptual manner. But still the motivational role of emotion is thought to involve desire (in this case, the desire to get away).

While everyone in the field agrees that desires admit of degrees, and can come in a variety of different strengths, belief-desire theorists are more divided on the question of degrees of belief. Some think that beliefs of all kinds are graded in nature, referred to in much of the literature as “credences” (Jeffrey 1970; Levi 1991). Others think that while some beliefs are graded, there is another set of beliefs that are categorical or “all or nothing” (Pollock 1983; Sosa 2011). And yet others think that all beliefs are categorical but can embed degrees of likelihood in their contents (Harman 1986; Schiffer 2003). We will return to these issues in Chapter 8, along with the question of degrees of intention-strength, or “strength of will.”

According to the dominant version of the standard model in philosophy, then, intentional actions are caused and controlled by intentions. Intentional actions are thought to either constitute the vast majority of human actions, or to be distinctively human actions, or to be the actions that are of special philosophical interest insofar as they reflect and express one’s agency or self – or some combination of the above. Intentions, in turn, result from decision-making influenced causally by one’s beliefs and desires. And all three categories of state (intention, belief, and desire) are propositional attitudes, some at least of which admit of degrees, or variations in strength. Moreover, both intentions and desires have satisfaction-conditions and world-to-mind direction of fit, whereas beliefs have truth-conditions and mind-to-world direction of fit.

1.2 A Brief Look at the Way Ahead

The goal of this book is to evaluate the standard model and its variants, confronting it with our best science. While some aspects of the model turn out to be correct, much proves to be erroneous. The philosophy of action is in need of a substantial make-over, as we will see. Here I will give just a brief indication of what is to come.

Chapters 2 through 4 discuss a range of action-types that fall outside the standard model altogether, and require a different sort of explanation. Chapter 2 discusses habitual, speeded, and skilled actions, which together make up a very large proportion of our daily activities. None are caused and controlled in the manner envisaged by the standard model. The chapter also critiques intellectualist accounts of so-called “know-how” that have been advanced by some philosophers. Chapter 3 then discusses actions that are a direct product of affective (emotional and emotion-like) states, both expressive and seemingly (but not really) instrumental. These, too, fail to conform to the standard model.

Chapter 4 turns to discussion of mental actions, notably mental rehearsals of action, prospection of the future, inner speech, attention, memory search, mind-wandering, and creative insight. Many of these also fail to conform to a (suitably internalized) version of the standard model, and need to be explained differently. But at the same time, two categories of (alleged) mental action widely accepted by philosophers – judgments and decisions – turn out not to be action-like at all.

Chapter 5 moves on to the very heartland of the standard model: reflective forms of decision-making. It first shows that cognitive science vindicates a separate category of intentions, before outlining the evidence that valence (pleasure and displeasure when conscious) is the common currency of all, or almost all, decision-making. Finally, it explains how intentions and desires interact with one another, in such a way that while intentions generally exclude and pre-empt pursuit of conflicting desires, sometimes they do not, and one decides to set aside one’s goal in favor of a new desire instead.

The remaining chapters concern the nature of the attitudes appealed to by the standard model, especially beliefs and desires. Chapter 6 explains the true (scientifically informed) nature of desire, and its relationship to pleasure. In contrast to claims made by philosophers, pleasure is neither a motivating intrinsic feeling nor a desired experience. It is, rather, an analog-magnitude / nonconceptual representation of value.[1] And desires themselves are neither propositional attitudes, nor do they have world-to-mind direction of fit. On the contrary, they have correctness-conditions, since they represent the object of desire as to some degree (nonconceptually) good.

Chapter 7 is about the nature of belief, demonstrating the reality of at least two forms of belief – episodic and semantic memory. In contrast, many of the states that result from what philosophers regard as the very paradigm of belief-formation – making up one’s mind after considering the evidence – aren’t really beliefs at all, but are, rather, intention-like. They are commitments to think and act in the future as if the proposition were true. The chapter also critiques a view that has become quite popular among philosophers, that knowledge is a basic kind of factive mental state. This finds no place in cognitive science; indeed, it is inconsistent with the latter’s core commitments.

Chapter 8 then addresses the question of degrees. It argues that neither beliefs nor desires admit of degrees of attitude-strength. Rather, degrees are built into the contents of those attitudes – analog-magnitude representations of value, in the case of desire, and analog-magnitude representations of likelihood, in the case of belief. In contrast, there is at least one good sense in which intentions can vary in strength, albeit not in the sort of way that common sense suggests.

Chapter 9 then briefly summarizes the main conclusions of the book. The standard belief-desire model both ignores and is incapable of explaining vast swaths of human action. And even within its proprietary domain (so-called “intentional action”), the standard model mischaracterizes many of the explanatory states involved. It is right about the nature and role of intentions (except that not all intentions have propositional contents). But it treats as beliefs states that are actually more intention-like, while falsely assuming that all beliefs are propositional attitudes; and it is deeply mistaken about the nature of desire. What will emerge is that, although some aspects of the standard model can pass scientific muster, many cannot. And many of the ancillary claims that have been defended by some armchair philosophers are false, too (e.g. that skills are a form of propositional knowledge, that knowledge is a basic kind of mental state, and that credences – degrees of belief – are real).

Chapter 9 concludes by drawing some methodological morals for the way in which philosophy of mind should properly be conducted. If philosophers are to take the nature of action and action-explanation seriously, then they need to engage with the science much more seriously than most of them currently do.

In what remains of this chapter I propose to lay out and briefly motivate three assumptions that are foundational to all, or almost all, cognitive science. These will be taken for granted in the remainder of the book. While these assumptions are consistent with the standard model, they aren’t components or entailments of it. Moreover, many philosophers accept them, although by no means all do. Readers already familiar with the issues can, if they wish, jump straight to Chapter 2 without much loss.

1.3 Assumption (1): Realism

Most philosophers who endorse some version of the standard belief-desire model of action-explanation are realists about the states that do the explaining. Following Davidson (1963) and many others, they think that beliefs and desires are discrete mental states that can be acquired or lost on an individual basis, and which causally explain the behaviors to which they give rise. But not everyone agrees. Dennett (1971, 1987), in particular, denies that belief-desire psychology makes any real-world commitments. Rather, it is a useful – indeed, indispensable – instrument for predicting human behavior, as well as for explaining it in the weak sense of rendering it predictable.

According to Dennett, when we attribute mental states to people we adopt a particular stance toward them and their behavior – the intentional stance. We make a background assumption of rationality, and then make predictions based on what a rational and well-designed organism with such-and-such access to the world and so-and-so previous experiences of the world would be likely to do. On this view, for it to be the case that an organism believes that P or desires that Q is just for its overall behavior to be reliably and voluminously predicted via a package of state-attributions that includes the belief that P or the desire that Q. There is no deeper reality to possessing a particular belief than for one’s overall behavior to be best predicted by attributing to one that belief. So there is nothing, here, to be scientifically falsified or corrected. Provided the intentional strategy works (as plainly it does, in general) there is nothing to prevent us from continuing to use it; indeed, practically speaking, we cannot stop using it: it is indispensable.

Dennett (1991b) insists that the patterns in people’s behavior are real, of course. But that is all they are: patterns that the intentional stance enables us to see and predict. The terms we use when we adopt that stance have no deeper reality than this. In embracing such a position, however, Dennett violates one of the core methodological assumptions of both science and ordinary everyday theory choice. This is to prefer theories that causally explain observable patterns of events over theories that don’t (Carruthers 2006; Emery 2023). This is what realism about belief-desire psychology enables us to do. Beliefs, desires, and other types of mental state interact causally during decision-making to select and issue in actions. So the patterns that mental-state-attribution enables us to discern in people’s behavior aren’t just predictable, they are causally explicable.

Moreover, there seems little doubt that ordinary folk are committed to the real existence of the attitudes and perceptual states they ascribe to themselves and others. Indeed, any suggestion to the contrary will be met with an incredulous stare. In part this is because one is often aware, in one’s own case, of specific mental events that seem to be causes of one’s subsequent behavior. One is aware of the longing one feels at the sight of a piece of chocolate cake before one grabs it; and one is aware of judging that a shelf is too high to reach before turning to get a footstool to climb up on. Now, one doesn’t have to believe in any sort of Cartesian introspective infallibility to take such data seriously. Similar points can be made even from the standpoint of self-interpretative theories of self-knowledge (Carruthers 2011). While one might indeed take an “interpretative stance” toward oneself, among the data for interpretation are conscious mental events of various kinds.

Moreover, ordinary folk are quite prepared to entertain the idea that someone paralyzed throughout their life, or people with severe cerebral palsy who have only minimal control of their movements, might nevertheless have a rich internal mental life, with many of the experiences, thoughts, and desires that others possess. But there is no behavior here to be voluminously predicted and made sense of from the intentional stance; and the assumption of optimal design is plainly not operative. So if the common-sense psychology of the folk were really just a stance, then we would expect them to withhold attributions of mentality in such circumstances.[2]

In any case, whatever might be true of the folk, there is no doubt that cognitive science is committed to the reality and causal efficacy of the states with which it deals. And this is what we will assume going forward.

1.4 Assumption (2): Physicalism

While ordinary folk and most philosophers are agreed in being realists about the mental states they appeal to when explaining people’s actions, they differ about the ontological status of those states. While a bare majority of philosophers have embraced physicalism about the mind (Bourget & Chalmers 2023), most ordinary people have not: they are ontological dualists. Indeed, belief in an ontological separation of mind and body appears to be almost a human universal (Boyer 2001; Cohen et al. 2011; Roazzi et al. 2013). All of the world’s major religions (as well as most of its minor ones) involve a belief in some sort of life after death. According to some, the afterlife is purely ethereal or non-physical; according to others it will involve being reincarnated into a new body; and for yet others it will involve the resurrection of one’s original body, with an attenuated self that exists in a kind of limbo in the interim. Moreover, throughout history and across cultures, people have believed in spirits and gods – minded agents who aren’t subject to ordinary physical constraints (Brown 2001). Many people today continue to possess such beliefs – or at least believe that an afterlife is possible – although with the advance of science and consequent decline of religion in much of the developed world, such beliefs may be becoming less common.

Although belief in an afterlife isn’t endorsed by everyone, it appears that some form of mind/body dualism is either a human universal, or close to being one. Indeed, it can persist even among highly educated academics. Moreover, there is good evidence of dualist intuitions even among people who explicitly endorse physicalism. As with other implicit folk theories, it seems that people can retain intuitions that conflict with what they explicitly accept (Shtulman & Valcarcel 2012; Kelemen et al. 2013). Thus dualist responses to imagined scenarios become significantly more frequent when people are placed under cognitive load (and are thus unable to answer reflectively), and also when they are primed to adopt an intuitive rather than an analytical mindset (Forstmann & Burgmer 2015). Moreover, Chudek et al. (2018) used a simple animated-shapes task, designed to elicit body-swap intuitions, with adults and children (some as young as two) from North America and Fiji. They found compelling evidence of the presence of such intuitions across cultures. This is especially striking since social norms in indigenous iTaukei Fijian culture discourage discourse about people’s mental states altogether.

There should be no doubt, then, that common sense is committed to the non-physical nature of mental states (whether explicitly or implicitly), as are many philosophers. But there should also be no doubt that ontological dualism is false. The mind is physical through-and-through. I won’t argue for this in any detail here, except to make the following brief points.

  • Decades of work with patients who have undergone brain damage demonstrate how the brain is necessary for mentality. Thus damage to primary visual cortex causes blindness, damage to auditory cortex causes deafness, and so on. And this is not just true of the inputs to the mind. Damage to the prefrontal cortex causes an array of deficits of reasoning and decision-making (Manes et al. 2002), and in extreme cases can cause people to become completely unresponsive to stimuli (Knight & Grabowecky 1995).

  • We already know enough about the brain to know that there are no physical processes within it that are initiated in the absence of any physical cause. So there is no way for non-physical states or processes to have a causal impact on behavior (as mental states seem obviously to do).

  • Scientific inquiry assumes that the natural world is layered, with higher-level properties and processes being realized in lower-level ones. (So neuroscience is realized in biochemistry, which in turn is realized in chemistry, and so on.) If mental processes are real, we should expect them to be realized in neurological ones.

  • The physically layered nature of the natural world has been the guiding methodological assumption of science for centuries. The ongoing success of scientific inquiry provides good reason to think that the assumption is true.

That having been said, in the remainder of this book I propose just to take physicalism about the mind for granted, as do many philosophers and almost all cognitive scientists.

1.5 Assumption (3): Representationalism

It is widely accepted among philosophers that many (perhaps all) mental states possess the property of intentionality – they are about something. And this is essential to their predictive and explanatory roles. It is because one has a desire that is about coffee, and a belief that is about coffee being in the mug, that one reaches for it. And it is because one sees the coffee mug (is in a perceptual state that is about the coffee mug and its position on the desk) that one knows where to reach. Turing (1950), reflecting on the significance of newly invented computing machines, was among the first to suggest how such facts about the mental can be physically realized; these ideas were later developed in detail by Fodor (1975) and many others, among whom Marr (1982) has been especially influential.

The proposal is that one can physically explain the aboutness of mental states, and also how aboutness contributes to their causal roles in selecting and guiding behavior, by dividing the problem into two components. The first is to suggest that the mind/brain computes over physical symbols of some sort; the second is to give an account of how these symbols come to be about anything (relying especially on the concept of information). Roughly, it is because the physical symbols carry information about the world, and because the brain computes over them in ways appropriate for the informational content that they carry, that behavior can be caused and explained by mental states that are about the world. This is accepted by many philosophers and is the guiding framework of all (or almost all) cognitive science.

Although mostly interested in vision, Marr (1982) famously suggests that a complete explanation of any given mental phenomenon should proceed on three levels, ultimately integrating all three. At the top is a representational, content-involving, specification of the process or phenomenon in question. (Marr calls this the “computational level,” which is somewhat confusing since it is also a representation-involving level; Ritchie 2019.) Below that (the “algorithmic level”) is an account of the algorithms that execute the content-involving process, transforming physical symbols of some sort into other such symbols. And then at the bottom is the detailed neurological implementation of those algorithms and symbols. Our focus for the moment will be on algorithmic explanation (more generally described in terms of computations over symbols); we will return later to representational content and its place in cognitive science.

Early iterations of cognitive science postulated a “language of thought” involving discrete componentially structured symbols (Fodor 1975) – somewhat like a language with its sentential syntax and component words – together with formal (logic-like) transformations and inferences over those symbols. There then followed a period of competition between this approach and various kinds of distributed connectionist modeling (Fodor & Pylyshyn 1988; Smolensky 1988; Elman et al. 1999; Smolensky & Legendre 2006). Given the extraordinary advances in both the computing power and speed of modern-day computers, the fields of artificial intelligence and machine learning have increasingly (almost exclusively) gone in the direction of distributed multi-layered neural networks. But much actual cognitive modeling designed to explain human performance has continued to operate with discrete symbols, albeit using probabilistic representations – especially employing Bayesian statistical inferences (Goodman et al. 2008; Erdogan et al. 2015; Piantadosi & Jacobs 2016; Overlan et al. 2017).

There is a persuasive case for claiming that many cognitive representations have a discrete compositional (language-like) structure (Quilty-Dunn et al. 2023). For example, as we will see in Chapter 7, semantic memory is organized around individual-object files and object-kind files, which have a noun-like file-header combined with embedded information about the object or kind in question. In this respect representations in semantic memory have structures much like natural-language generics, such as “Birds fly,” or sentences about individual people, like “John is tall.” But many cognitive processes compute over analog-magnitude representations, too – which approximately map some real-world continuous quantity such as time, color, direction of motion, or approximate number – employing a continuously varying internal magnitude of some sort as a symbol. Thus many foraging animals can compute a rate of return from a given foraging site, which requires them to integrate an analog-magnitude representation of the approximate number of items foraged with an analog-magnitude representation of an approximate time-interval, dividing the former by the latter (Gallistel & Gibbon 2001). Other cognitive processes compute over iconic (or picture-like) representations (Kosslyn et al. 2006; Toribio 2011).
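
To make the foraging computation concrete, here is a minimal Python sketch of a rate-of-return estimate computed over noisy analog magnitudes. The Weber-fraction noise model and all numbers are illustrative assumptions of my own; the sketch is loosely in the spirit of Gallistel & Gibbon (2001), not a reproduction of their model.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_magnitude(true_value, weber_fraction=0.15):
    """Noisy analog-magnitude 'read-out' of a continuous quantity.
    Scalar variability: the noise grows in proportion to the quantity
    represented (an assumed, but standard, property of such codes)."""
    return rng.normal(true_value, weber_fraction * true_value)

def estimated_rate_of_return(items_foraged, interval_seconds):
    """Divide a noisy number-estimate by a noisy duration-estimate
    to obtain an estimated reward rate for a foraging site."""
    n = analog_magnitude(items_foraged)
    t = analog_magnitude(interval_seconds)
    return n / t

# Compare two hypothetical patches by estimated reward rate.
patch_a = estimated_rate_of_return(items_foraged=24, interval_seconds=60)
patch_b = estimated_rate_of_return(items_foraged=10, interval_seconds=20)
print(f"patch A ~{patch_a:.3f} items/s, patch B ~{patch_b:.3f} items/s")
```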

Less is known about the way cognitive representations as a whole are neurally realized. But some kinds are well studied. In connection with spatial navigation, for example, we know that individual cells in the hippocampal formation represent particular places (becoming especially active when and only when the animal is in a particular place in the environment), whereas others represent particular head-directions (O’Keefe 1976; Grieves & Jeffery 2017). Analog magnitudes, in contrast, are generally represented via the activity of populations of neurons, using so-called “population coding” (Deneve et al. 1999; Pouget et al. 2000; Averbeck et al. 2006). For example, there are a great many neurons in an area of visual cortex known as MT that increase their firing-rate for a range of directions of motion, with their activity taking the form of a bell-curve centered on a particular direction, and responding with greater or lesser precision. An animal’s judgment of the actual direction of a motion-stimulus is computed across this population. And where precise discrimination is required, reliance is placed on the set of neurons with the greatest precision (Purushothaman & Bradley 2005).
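
To illustrate population coding, the following sketch decodes a motion direction from a population of direction-tuned model cells via a simple population-vector average. The bell-shaped (von Mises) tuning curves, Poisson spiking, and all parameter values are textbook-style assumptions adopted for illustration, not the specific models of the papers just cited.

```python
import numpy as np

rng = np.random.default_rng(1)

# Preferred directions of a population of direction-tuned, MT-like cells.
n_cells = 64
preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)

def population_response(stimulus_direction, gain=20.0, kappa=4.0):
    """Bell-shaped (von Mises) tuning: each cell fires most for its own
    preferred direction; spike counts carry Poisson noise."""
    mean_rates = gain * np.exp(kappa * (np.cos(stimulus_direction - preferred) - 1))
    return rng.poisson(mean_rates)

def population_vector_decode(spike_counts):
    """Decode direction as the angle of the spike-count-weighted sum of
    each cell's preferred-direction unit vector."""
    x = np.sum(spike_counts * np.cos(preferred))
    y = np.sum(spike_counts * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

true_direction = np.deg2rad(135)
spikes = population_response(true_direction)
estimate = population_vector_decode(spikes)
print(f"true {np.rad2deg(true_direction):.1f} deg, decoded {np.rad2deg(estimate):.1f} deg")
```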

There seems little doubt, then, that mental states with their property of aboutness can be scientifically vindicated by explanatory frameworks that postulate computations over content-bearing symbols or representations, provided that an account can be given of what makes a given symbol be about a particular thing or property, and provided that the representational contents in question play an important role in cognitive-science explanations.

All naturalistic theories of representation are either built around the notion of information (Millikan 1984, 1989; Papineau 1987; Dretske 1988, 1995; Fodor 1990; Neander 2017; Rupert 2018), or else the idea of a structure-preserving mapping – an isomorphism – between representations and their content (Cummins 1996; Gallistel & King 2010), or both (Shea 2018). I am happy to join Shea in endorsing both, accepting that representations can acquire content in a number of different ways. We consider informational content first.

When one type of thing causes, or is apt to cause, another, the latter event carries information about the former. Roughly speaking, knowing the latter, one can infer, with some degree of confidence, that the former event has occurred. So when events in the mind/brain are caused by properties or events in the world or body, the mind/brain events carry information about the world or body. But this is not enough, as yet, for them to qualify as representational. Someone’s blistering skin carries the information that their skin has been exposed to the sun for a significant amount of time, but the blisters don’t represent previous sun exposure. For that, there has to be some downstream consuming system or process that can make use of that information – that is, one that responds to, uses, or computes with the information-carrying state in a manner that is somehow appropriate for, or dependent on, the information carried.

As a first approximation, one can say that information-carrying states of the mind/brain qualify as representations provided that they cause downstream effects (whether other mind/brain events or overt behavior) that are sensitive to the information carried. This means that, in one way or another, those downstream effects occur as they do because of the distal cause of the state. Teleological theories cash this out in terms of evolution by natural selection (Millikan 1984; Papineau 1987; Neander 2017). The downstream effects occur as they do because they are an adaptive response to the distal cause, selected through evolutionary processes. But other approaches can instead (or better, also) emphasize learning as the process that gives rise to adaptive responses to information (Dretske 1988; Shea 2018).

Any representational state will carry much more information than is actually represented, of course. Consider the state caused by perceiving a raccoon. In addition to carrying information about the presence of a raccoon, it carries information about a complex pattern of stimulation on the retina, about light passing through the intervening space, as well as about the fact that a raccoon-mating event and a kit-birthing event occurred sometime in the past (in that temporal order). But it represents none of the latter set of things. In addition, a raccoon-representing perceptual state can be caused by a cat seen at twilight, when one mistakes a cat for a raccoon. But it doesn’t represent the disjunctive property raccoon-or-cat-at-twilight, either. On the contrary, if one takes the cat to be a raccoon, then that is a misrepresentation, and is false or incorrect.

What, then, picks out the represented information from among the total set of causes and possible causes of the state in question? Many of the writers cited above offer detailed and subtly different answers to this question. Those details won’t matter for our purposes. Here I propose to follow the account provided by Shea (2018), which is designed specifically to explain the notion of representation employed in cognitive science. He argues that what makes it the case that something is an information-based representation is that it plays a computational / functional role in some cognitive process. And what fixes the content (or correctness-condition) for a representation from among all the information that it carries is what causally stabilized the role of that representation in the computations that it enters into, either through natural selection, or through learning, or through its contribution to individual survival. The content of the representation is the information carried that we need to appeal to in explaining the role that the representation plays in determining the behavior of the organism.

When applied to the case of visually representing a raccoon, this account delivers the right answers. What explains the evolution of animate-object perception in general is the proximal stimulus-object (e.g. a raccoon); for it is this that underlies the success of subsequent actions (such as approaching or avoiding the creature in question). Carrying information about temporally distal events (an earlier raccoon mating) played no role in stabilizing the success of the system; nor did the patterns of stimulation on the retina (except insofar as they carry information about the distal object). And when perceptual learning leads one to distinguish raccoons from other living creatures, it is the presence of raccoons (and only raccoons) that explains the successful accumulation of knowledge about raccoons in the resulting animate-object kind-file.

Consider, now, representation via structural mapping. The simplest example concerns representation within the visual system. Neurons in primary and secondary visual cortex (and indeed, in other regions of cortex further downstream as well) are arranged retinotopically. That is to say, neurons that are adjacent to one another in visual cortex carry information about (and represent) input received from adjacent neurons in the retina. Lines, edges, and shapes represented in the visual system are constructed through the firing-patterns of adjacent neurons that carry information about adjacent patterns of stimulation on the retina, and thereby information about adjacent items in the perceived world. In effect (and simplifying hugely), the visual system uses spatial relations within itself to represent spatial relations in the world.

For a different, more complex, example of structural mapping, recall hippocampal place-cells, which fire when and only when the organism is in a particular location in the environment. So each place-cell, when firing, carries the information that the organism is in a particular place. But this is not yet to represent that it is in that place. For unless that place can be positioned in relation to others in a mental map, the information is useless. What transforms place-cells into representations of places are the associative links between them. Activations of one place-cell will tend to co-activate others in proportion to their adjacency in the environment. (These associative links get set up as the animal explores.) As a result, organisms can (and do) plan to get to place A from place B via activation of the links between the connecting nodes. Here the relation of associative co-activation is in structural correspondence with spatial distances and directions, and animals can use prospectively generated sequences of such activations when planning routes through their environments.
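
As a toy illustration of how such a map could support route planning: if co-activation strengths between place-cells mirror spatial adjacency, then finding a route amounts to finding the most strongly associated chain of cells from start to goal. The particular graph, its values, and the use of a shortest-path search below are hypothetical simplifications, not a model drawn from the literature.

```python
import heapq
import math

# Hypothetical co-activation strengths between place-cells (0-1),
# higher for cells whose places are closer in the explored environment.
coactivation = {
    ("A", "B"): 0.9, ("B", "C"): 0.8,
    ("A", "D"): 0.3, ("D", "C"): 0.4,
}

def neighbours(node):
    for (u, v), strength in coactivation.items():
        if u == node:
            yield v, strength
        elif v == node:
            yield u, strength

def plan_route(start, goal):
    """Chain strongly associated place-cells from start to goal.
    Treating -log(strength) as an edge cost turns 'maximize the product
    of association strengths along the path' into an additive
    shortest-path problem, solvable with Dijkstra's algorithm."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, strength in neighbours(node):
            if nxt not in visited:
                heapq.heappush(frontier, (cost - math.log(strength), nxt, path + [nxt]))
    return None

print(plan_route("A", "C"))  # -> ['A', 'B', 'C'], the strongly associated route
```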

As Shea (2018) makes clear, the intentional contents of the symbol-like representations that cognitive processes compute over (i.e. their truth-conditions, correctness-conditions, or satisfaction-conditions) play an essential role in cognitive-science explanations. For the correctness-condition of a numerosity-representation, for example, is the information that it carries about approximate number. And it is this that explains how the inferences and behaviors that it guides have become stabilized by evolution or learning (such as when taking foraging decisions). In order to understand why cognitive processes work as they do, and to explain patterns of success and failure, scientists need to appeal to the worldly contents that are represented by the symbols and structures undergoing those processes. In which case, one central defining feature of the mind (aboutness) is vindicated by the cognitive sciences, not undermined.

1.6 Philosophical Challenges

Some philosophers have mounted challenges to computational / representational forms of cognitive science (Chemero 2009; Hutto & Myin 2013). The most general umbrella term for the view of the mind that is intended as a replacement is that minds are “embodied” (Shapiro & Spaulding 2021).[3] The mind is said to be constituted by interactions between the organism and its environment in a way that leaves no room for (or at least, that has no need of) explanation in terms of computations over representations. What is urged on us by these philosophers is a radical new direction for the sciences of the mind. Meanwhile, tens of thousands of real cognitive scientists continue to make new findings and offer successful explanations within a variety of computational / representational frameworks, and carry on advancing our scientific understanding of the mind.

That mind and body, as well as mind and action, interact with each other is no surprise, of course. And many phenomena of this general sort have been known about and studied for decades within standard cognitive-science frameworks. Vision scientists have always known, for example, that visual processing cannot involve an isolated set of computations proceeding from retinal stimulation to perception of the world. For one’s eyes are constantly in motion, causing shifting patterns of stimulation on the retina. In fact so-called “efference copies” of the motor instructions that initiate these movements are among the main inputs to the visual system, where they are integrated with information deriving from the retina to enable perception of a stable world. Likewise for head movements and the motion of one’s body through the environment. If this makes visual processes “embodied,” so be it. But there is nothing radical here; nor is there any challenge to regular forms of cognitive science.

It is also widely recognized that even a core cognitive-science construct like attention is actually action-like, and operates in close interaction with affective valuational systems (Hickey & Peelen 2015; Vuilleumier 2015; Anderson 2016). Patterns of scanning across the visual scene, for example, partly depend on contextually cued habits (including tendencies to look toward faces), but partly on patterns of previous reward and punishment. One looks at and attends to things anticipating that they may be relevant, or that they might be important sources of knowledge. So here, too, there is constant feedback and interaction between vision, action, attentional networks, and value systems. Surprising, perhaps; but fully accountable for within standard cognitive-science frameworks.

Likewise, we have known for decades that thought and planning about the future utilize the resources of bodily motor-control systems. As we will see in more detail in later chapters, efference-copies of motor instructions (with overt execution thereafter suppressed) are used to generate “sensory forward models” of what it would feel like and look like if the action were executed. People and other animals are constantly simulating and evaluating the various possible actions that are open to them (and their likely outcomes) during decision-making (Seligman et al. 2016). And the uses of these sensory forward models in fine-grained motor control, too, have been well known and well studied for decades, not only in humans (Wolpert & Kawato 1998; Jeannerod 2006), but even in dragonflies (Mischiati et al. 2014), as we will see. Again, that the body and mind interact and depend on one another is no news to standard cognitive science.[4]
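
In outline, a forward model maps an efference copy of a motor command onto a prediction of its sensory consequences, making an error signal available before slow sensory feedback arrives. The following is a deliberately trivial sketch of that loop; the linear “plant” and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward_model(position, motor_command, dt=0.05):
    """Predict the sensory consequence of a motor command from its
    efference copy. Here the 'plant' is trivially linear: position is
    predicted to advance by the commanded velocity over one time step."""
    return position + motor_command * dt

position = np.array([0.0, 0.0])   # e.g. current hand position
command = np.array([1.0, 0.5])    # intended velocity (the efference copy)
predicted = forward_model(position, command)

# When actual (noisy, delayed) feedback arrives, the mismatch with the
# prediction is an error signal usable for fine-grained online correction.
actual = predicted + rng.normal(0.0, 0.01, size=2)
error = actual - predicted
print(predicted, error)
```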

It might be thought that the thesis of “embodied concepts” (Barsalou 1999) provides support for embodied cognition more generally. On this view, concepts are stored as networks of sensorimotor representations, not as abstract, amodal symbols. But this would be to misunderstand computational cognitive science, which need not be committed to the thesis that all computations operate over abstract symbols like the “0”s and “1”s in a digital computer. All it really requires is that there are processes that create and transform structured physical symbols of some sort. And that is true of the modality-specific computations that take place within the visual system just as much as those that take place elsewhere. That said, the evidence adduced in support of embodied concepts is less than convincing, and can be better explained in terms of bi-directional spreading activation between amodal concepts and associated sensory and motor representations (Mahon & Caramazza 2008). And even if it were true that many concepts are sensorimotor in nature, we have good reason to think that plenty are not, including representations of mental states, action-concepts, and representations of shape, space, and physical causality (Carruthers 2015; Spelke 2022).

Those who defend embodied-cognition approaches have drawn their inspiration from robotics and forms of artificial intelligence that operate without internal representations (Chemero 2009; Hutto & Myin 2013). Early iterations of robotics had some success with this idea (Brooks 1991). And of course we now have deep-learning systems that can hold conversations, compose essays, and drive cars. But there are compelling reasons to think that the ways in which these multi-layered deep-learning networks operate and achieve their success are quite different from the manner in which human and animal minds work. One is that they require huge numbers of training runs, generally in the hundreds of millions or hundreds of billions (Schrittwieser et al. 2020), and large-language models like ChatGPT are trained on something close to the entire contents of the internet. Humans and other animals, in contrast, manage to learn just as effectively using much more minimal feedback, often requiring just one or a few exposures to the material to be learned. The successes achieved by deep-learning forms of technology provide no reason to doubt that actual minds are representational.

A second point is that deep neural networks remain vulnerable to adversarial examples, not just in theory but in the real world (Goodfellow et al. 2015; Kurakin et al. 2018; Hendrycks et al. 2021), and there is reason to think that these failures are endemic (Ilyas et al. 2019; Shafahi et al. 2020). Adversarial examples are inputs that are adjusted in minor ways – so minor as to be undetectable to human observers – but which can lead a fully trained image-classifier to classify what is obviously a cat as a dog, or a dog as an ostrich, or (more worryingly) a stop-sign as a go-sign, for example. This, too, is powerful evidence that deep neural networks provide poor models of how human and animal cognition actually work. If our goal is to understand minds (and not just to make smart machines) then there is currently no viable alternative to representation-employing forms of cognitive science.
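
For concreteness, the simplest attack of this kind is the fast gradient sign method of Goodfellow et al. (2015), which nudges every input dimension a tiny step in whichever direction increases the classifier’s loss. Here is a minimal sketch; the epsilon value, clipping range, and example numbers are illustrative.

```python
import numpy as np

def fgsm_perturb(x, grad_loss_wrt_x, epsilon=0.01):
    """Fast gradient sign method: move each input dimension by
    +/- epsilon in the direction that increases the loss, then clip
    back to the valid pixel range. With small epsilon the perturbed
    image looks unchanged to a human but can flip the model's label."""
    return np.clip(x + epsilon * np.sign(grad_loss_wrt_x), 0.0, 1.0)

# Illustration with made-up numbers (a real attack would use the
# gradient computed from a trained network's loss):
x = np.array([0.2, 0.8, 0.5])
grad = np.array([0.3, -1.2, 0.0])
print(fgsm_perturb(x, grad))   # -> [0.21, 0.79, 0.5]
```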

Chemero (2009) also appeals to dynamical-systems theory as a non-representational alternative to cognitive science. This attempts to model interactions between organisms and their environment using differential equations. Organism and environment are here thought of as co-equal partners, and no attempt is made to understand what may be going on inside the organism. Dynamical-systems approaches have been used to successfully describe some simple phenomena, such as coordinated finger-wagging, infant walking, and perseverative reaching behavior in infants (Thelen & Smith 1993; Kelso 1995; Thelen et al. 2001). But such models are merely descriptive and fail to causally explain the phenomena they describe (Spivey 2007). Moreover, there is no reason to think that they can “scale up” to more complex behavioral and cognitive phenomena. It seems fair to say that the dynamical-systems framework is not a viable alternative to the causal explanations that have been provided successfully for a wide range of cognitive and behavioral phenomena by cognitive scientists over the last few decades.[5]
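
For a sense of what such models look like, here is a minimal numerical integration of the Haken–Kelso–Bunz equation for coordinated finger-wagging (Kelso 1995), which describes the relative phase of the two fingers settling into one of two stable coordination patterns (in-phase or anti-phase) without saying anything about internal mechanisms. Parameter values below are illustrative.

```python
import numpy as np

def hkb_step(phi, a=1.0, b=1.0, dt=0.01):
    """One Euler step of the Haken-Kelso-Bunz relative-phase equation
        d(phi)/dt = -a*sin(phi) - 2*b*sin(2*phi)
    where phi is the relative phase of two rhythmically wagging fingers.
    The equation describes the stable coordination patterns (phi = 0,
    in-phase; phi = pi, anti-phase) without modelling any mechanism."""
    return phi + dt * (-a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi))

phi = 1.0   # start away from the attractors
for _ in range(5000):
    phi = hkb_step(phi)
print(phi)  # relaxes toward phi = 0 (in-phase coordination)
```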

1.7 Conclusion

This chapter has sketched the standard model of action and action-explanation that provides the main foil for the rest of the book. On this account, actions are caused and controlled by intentions, which in turn result from decision-making in light of one’s motivating reasons (one’s beliefs and desires). The chapter has also explained and briefly sketched support for some key assumptions of the remainder of the book. These are that mental states are real (and not just an interpretative gloss on behavior), that mental states are physical, and that mental states are representational. All three assumptions are shared by many, though by no means all, philosophers; and all are central to current cognitive science.

Footnotes

[1] An analog-magnitude property (in the world) is one that is continuously graded. Color is an analog-magnitude property of surfaces, and length and mass are analog-magnitude properties of objects. An analog-magnitude representation is one that represents an analog-magnitude property through the use of an analog-magnitude symbol of some sort (such as a rate of neural firing). The height of the mercury in a classical mercury thermometer, for example, provides an analog-magnitude representation of the ambient temperature.

[2] It is very hard to see the behavior of people with cerebral palsy as imbued with mentality, of course, and interacting with such people can be difficult – which is why they often face prejudice and discrimination. But the point is that ordinary people are nevertheless quite prepared to believe (rightly) that there can still be a rich mental life behind the mask of disability.

[3] The thesis of the extended mind is often treated alongside the idea of the embodied mind. (The former claims that the mind literally incorporates external objects and processes such as notebooks and pencil-and-paper calculations; Clark & Chalmers 1998; Clark 2008.) But in fact the extended mind presents no challenge to traditional forms of cognitive science. Rather, it proposes an extension of the scope of the latter to include external representational processes. For my own discussion and critique of the extended-mind idea, see Carruthers (2015).

[4] Strikingly, not a single one of the empirical papers cited by Shapiro and Spaulding (2021) in their review of the field – said to be among those relied on by embodied-mind philosophers to support their proposed radical replacement for representation-involving cognitive science – actually provides any significant challenge to the latter at all. All admit of other interpretations.

[5] Indeed, consider Favela’s (2020) favorable review of dynamical-systems theory, in which he proposes it as a replacement for representationalist forms of cognitive science. On examination, hardly any of the empirical papers cited in the review provide any real challenge or alternative to the latter. More than thirty years after it was first proposed, a dynamical-systems account of the mind still remains almost entirely aspirational. It is what philosophers of science refer to as a “degenerating research program.”
