
Chapter 3: The mind-body problem

The mind-body problem

Matthew Van Cleave

Introduction: A pathway through this chapter

What is the relationship between the mind and the body? In contemporary philosophy of mind, there are a myriad of different, nuanced accounts of this relationship. Nonetheless, these accounts can be seen as falling into two broad categories: dualism and physicalism. [1] According to dualism, the mind cannot be reduced to a merely physical thing, such as the brain. The mind is a wholly different kind of thing than physical objects. One simple way a dualist might try to make this point is the following: although we can observe your brain (via all kinds of methods of modern neuroscience), we cannot observe your mind. Your mind seems inaccessible to third-person observation (that is, to people other than you) in a way that your brain isn’t. Although neuroscientists could observe activation patterns in your brain via functional magnetic resonance imaging, they could not observe your thoughts. Your thoughts seem to be accessible only in the first person: only you can know what you are thinking or feeling directly. Insofar as others can know this, they can only know it indirectly, through your behaviors (including what you say and how you act). Readers of previous chapters will recognize that dualism is the view held by the 17th-century philosopher René Descartes, the view I have referred to in earlier chapters as the Cartesian view of mind. In contrast with dualism, physicalism is the view that the mind is not a separate, wholly different kind of thing from the rest of the physical world. The mind is constituted by physical things. For many physicalists, the mind just is the brain. We may not yet understand how the mind/brain works, but the spirit of physicalism is often motivated by something like Ockham’s razor: the principle that, all other things being equal, the simplest explanation is the best explanation. Physicalists think that all mind-related phenomena can be explained in terms of the functioning of the brain.
So a theory that posits both the brain and another sui generis entity (a nonphysical mind or mental properties) violates Ockham’s razor: it posits two kinds of entities (brains and minds) whereas all that is needed to explain the relevant phenomena is one (brains).

The mind-body problem is best thought of not as a single problem but as a set of problems that attach to different views of the mind. For physicalists, the mind-body problem is the problem of explaining how conscious experience can be nothing other than brain activity—what has been called “the hard problem.” For dualists, the mind-body problem manifests itself as “the interaction problem”—the problem of explaining how nonphysical mental phenomena relate to or interact with physical phenomena, such as brain processes. Thus, the mind-body problem is that no matter which view of the mind you take, there are deep philosophical problems. The mind, no matter how we conceptualize it, seems to be shrouded in mystery. That is the mind-body problem. Below we will explore different strands of the mind-body problem, with an emphasis on physicalist attempts to explain the mind. In an era of neuroscience, it seems increasingly plausible that the mind is in some sense identical to the brain. But there are two putative properties of minds—especially human minds—that appear to be recalcitrant to physicalist explanation. The two properties of minds that we will focus on in this chapter are “original intentionality” (the mind’s ability to have meaningful thoughts) and “qualia” (the qualitative aspects of our conscious experiences).

We noted above the potential use of Ockham’s razor as an argument in favor of physicalism. However, this simplicity argument works only if physicalism can explain all of the relevant properties of the mind. A common tactic of the dualist is to argue that physicalism cannot explain all of the important aspects of the mind. We can view several of the famous arguments we will explore in this chapter—the “Chinese room” argument, Nagel’s “what is it like to be a bat” argument, and Jackson’s “knowledge argument”—as manifestations of this tactic. If the physicalist cannot explain aspects of the mind like “original intentionality” and “qualia,” then the simplicity argument fails. The physicalist’s tactic, in turn, is either to try to meet this explanatory challenge or to deny that these properties ultimately exist. This latter tactic can be clearly seen in Daniel Dennett’s responses to these challenges to physicalism, since he denies that original intentionality and qualia ultimately exist. This kind of eliminativist strategy, if successful, would keep Ockham’s simplicity argument in place.

Representation and the mind

One aspect of mind that needs explaining is how the mind is able to represent things. Consider the fact that I can think about all kinds of different things—about this textbook I am trying to write, about how I would like some Indian food for lunch, about my dog Charlie, about how I wish I were running in the mountains right now. Medieval philosophers referred to the mind as having intentionality—the curious property of “aboutness”—that is, the property of an object to be able to be about some other object. In a certain sense, the mind seems to function kind of like a mirror does: it reflects things other than itself. But unlike a mirror, whose reflected images are not inherently meaningful, minds seem to have what contemporary philosopher John Searle calls “original intentionality.” The mirror, in contrast, has only “derived intentionality”—its image is meaningful only because something else gives it meaning or sees it as meaningful. Words, too, have only derived intentionality—take, for example, the word “tree.” “Tree” refers to trees, of course, but it is not as if the physical marks on a page inherently refer to trees. Rather, human beings who speak English use the word “tree” to refer to trees. Spanish speakers use the word “árbol” to refer to trees. But in neither case do those physical marks on the page (or sound waves in the air, in the case of spoken words) inherently mean anything. Rather, those physical phenomena are only meaningful because a human mind is representing them as meaningful. Thus, words are only meaningful because a human mind represents them in a meaningful way. Although we speak of the word itself as carrying meaning, the word has only derived intentionality. In contrast, the human mind has original intentionality because the mind is the ultimate creator of meaningful representations.
We can explain the meaningfulness of words in terms of thoughts, but then how do we explain the meaningfulness of the thoughts themselves? This is what philosophers are trying to explain when they investigate the representational aspect of mind.

There are many different attempts to explain what mental representation is, but we will only cursorily consider some fairly rudimentary ideas as a way of building up to a famous thought experiment that challenges a whole range of physicalist accounts of mental representation. Let’s start with a fairly simple, straightforward idea—that of mental images. Perhaps what my mind does when it represents my dog Charlie is create a mental image of Charlie. This account seems to fit our first-person experience, at least in certain cases, since many people would describe their thoughts in terms of images in their mind. But whatever a mental image is, it cannot be like a physical image, because physical images require interpretation in terms of something else. When I’m representing my dog Charlie, it can’t be that my thoughts about Charlie just are some kind of image or picture of Charlie in my head, because that picture would require a mind to interpret it! But if the mental image is supposed to be the thing that has “original intentionality,” and yet our explanation requires some other thing with original intentionality in order to interpret it, then the mental image isn’t really the thing that has original intentionality. Rather, the thing interpreting the image would have original intentionality. A problem looms here that threatens to drive the mental image view of mental representation into incoherence: the object in the world is represented by a mental image, but that mental image itself requires interpretation in terms of something else. It would be problematic for the mental image proponent to then say that there is some other inner “understander” that interprets the mental image. For how does this inner understander understand? By virtue of another mental image in its “head”?
Such a view would create what philosophers call an infinite regress : a series of explanations that require further explanations, thus, ultimately explaining nothing. The philosopher Daniel Dennett sees explanations of this sort as committing what he calls “ the homuncular fallacy ,” after the Latin term, homunculus , which means “little man.” The problem is that if we explain the nature of the mind by, in essence, positing another inner mind, then we haven’t really explained anything. For that inner mind itself needs to be explained. It should be obvious why positing a further inner mind inside the first inner mind enters us into an infinite regress and why this is fatal to any successful explanation of the phenomenon in question—mental representation or intentionality.

Within the cognitive sciences, one popular way of understanding the nature of human thought is to see the mind as something like a computer. A computer is a device that takes certain inputs (representations), transforms those inputs in accordance with certain rules (the program), and then produces a certain output (behavior). The idea is that the computer metaphor gives us a satisfying way of explaining what human thought and reasoning is, and does so in a way that is compatible with physicalism. The idea, popular in philosophy and cognitive science since the 1970s, is that there is a kind of language of thought which brain states instantiate and which is similar to a natural language in that it possesses both a grammar and a semantics, except that the representations in the language of thought have original intentionality, whereas the representations in natural languages (like English and Spanish) have only derived intentionality. One central question in the philosophy of mind concerns how these “words” in the language of thought get their meaning. We have seen above that these representations can’t just be mental images, and there’s a further reason why mental images don’t work for the computer metaphor of the mind: mental images don’t have syntax like language does. You can’t create meaningful sentences by putting together a series of pictures, because there are no rules for how those pictures create a holistic meaning out of the parts. For example, how could a picture (or pictures) represent the thought, Leslie wants to go out in the rain but not without an umbrella? How do I represent someone’s desire with a picture? Or how do I represent the negation of something with only a picture? True, there are devices that we can use within pictures, such as the “no” symbol on no-smoking signs. But such symbols are no longer functioning purely as pictorial representations, which represent in virtue of their similarity to what they depict.
There is no pictorial similarity between the purely logical notion “not” and any picture we could draw. So whatever the words of the language of thought (that is, mental representations) are, their meaning cannot derive from a pictorial similarity to what they represent. So we need some other account. Philosophers have given many such accounts, but most of those accounts attempt to understand mental representation in terms of a causal relationship between objects in the world and representations. That is, whatever types of objects cause (or would cause) certain brain states to “light up,” so to speak, are what those brain states represent. So if there’s a particular brain state that lights up any time I see (or think about) a dog, then that is what those mental representations stand for. Delving into the nuances of contemporary theories of representation is beyond the scope of this chapter, but the important point is that the language of thought idea that these theories support is supposed to be compatible with physicalism as well as the computer analogy of explaining the mind. On this account, the “words” of the language of thought have original intentionality and thinking is just the manipulation of these “words” using certain syntactic rules (the “program”) that are hard-wired into the brain (either innately or by learning) and which are akin to the grammar of a natural language.
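The causal idea sketched above can be made concrete with a toy example. The code below is an illustrative invention, not any philosopher’s actual theory or a real cognitive model: it treats a “brain state” as a detector, and on a causal-covariance reading, the state’s content is whatever type of stimulus reliably causes it to fire.

```python
# Toy sketch of a causal-covariance idea of mental representation:
# a "brain state" represents whatever type of input reliably causes it
# to "light up." All names here are invented for illustration.

def make_detector(trigger_features):
    """Return a 'brain state' that fires when its trigger features are present."""
    def detector(stimulus_features):
        return trigger_features.issubset(stimulus_features)
    return detector

# Suppose some brain state fires on furry, four-legged, barking things.
dog_state = make_detector({"furry", "four-legged", "barks"})

# On a causal account, the state's content is fixed by what causes it to fire:
print(dog_state({"furry", "four-legged", "barks", "brown"}))  # True: a dog
print(dog_state({"feathered", "two-legged", "chirps"}))       # False: a bird
```

On this picture, the detector “stands for” dogs not because it resembles a dog (it is just a bit of mechanism) but because of its causal relationship to dogs, which is exactly the move away from pictorial similarity that the text describes.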

There is a famous objection to the computer analogy of human thought from the philosopher John Searle, who takes it to show that human thought and understanding cannot be reduced to the kind of thing that a computer can do. Searle’s thought experiment is called the Chinese Room. Imagine that there is a room with a man inside of it. What the man does is take slips of paper that are passed into the room via a slit. The slips of paper have writing on them that looks like this:

[Image: a string of Chinese characters]

The room also contains a giant bookshelf with many different volumes of books. Those books are labeled something like this:

[Image: labels written in Chinese characters]

The man matches the symbols on each slip against the volumes on the bookshelf, which tell him which symbols to write in response. The man writes those symbols on a slip of paper and then passes it back through the slit in the wall. From the perspective of the man in the room, this is all he does—nothing more, nothing less. The man inside the room doesn’t understand what these symbols mean; they are just meaningless squiggles on a page to him. He sees the difference between the different symbols merely in terms of their shapes. However, from outside the room, the Chinese speakers who are writing questions on the slips of paper and passing them through the slot come to believe that the Chinese room (or something inside it) understands Chinese and is thus intelligent.

The Chinese room is essentially a scenario in which a computer program passes the Turing Test. In a paper published in 1950, Alan Turing proposed a test for how we should determine whether or not a machine can think. Basically, the test is whether or not the machine can make a human investigator believe that the machine is a human. The human investigator is able to ask the machine any questions they can think of (which Turing imagined would be conducted via typed responses on a keyboard). Imagine what some of the questions might be. Here is one such potential question:

Rotate a capital letter “D” 90 degrees counterclockwise and place it atop a capital letter “J.” What kind of weather does this make you think of?

A computer that could pass the Turing Test would be able to answer questions such as this and thus would make a human investigator believe that the computer was actually another human being. Turing thought that if a machine could do this, we should count that machine as having intelligence. The Chinese Room thought experiment is supposed to challenge Turing’s claim that something that can pass the Turing Test is thereby intelligent. The essence of a computer is that of a syntactic machine—a machine that takes symbols as inputs, manipulates those symbols in accordance with a series of rules (the program), and gives the outputs that the rules dictate. Importantly, we can understand what syntactic machines do without having to say that they interpret or understand their inputs/outputs. In fact, a syntactic machine cannot possibly understand the symbols, because there’s nothing there to understand. For example, in the case of modern-day computers, the symbols being processed are strings of 1s and 0s, which are physically instantiated in the CPU of a computer as a series of on/off voltages (that is, transistors). Note that a series of voltages is no more inherently meaningful than a series of different fluttering patterns of a flag waving in the wind, or a series of waves hitting a beach, or a series of footsteps on a busy New York City subway platform. They are merely physical patterns, nothing more, nothing less. What a computer does, in essence, is “read” these inputs and give outputs in accordance with the program. This simple theoretical (mathematical) device is called a “Turing machine,” after Alan Turing. A calculator is an example of a simple Turing machine. In contrast, a modern-day computer is an example of what is called a “universal Turing machine”—universal because it can run any number of different programs that will allow it to compute all kinds of different outputs.
In contrast, a simple calculator is only running a couple of different simple programs—ones that correspond to the different kinds of mathematical functions the calculator has (+, −, ×, ÷). The Chinese room has all the essential parts of a computer and functions exactly as a computer does: the man “reads” the input symbols and produces output symbols in accordance with what the program dictates. If the program is sufficiently well written, then the man’s responses (the room’s output) will be able to convince someone outside the room that the room (or something inside it) understands Chinese.
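A purely syntactic machine of this sort can be sketched in a few lines of code. The rule book below is an invented toy standing in for Searle’s shelves of volumes: the program matches input shapes to output shapes, and nothing in it has any grip on what the symbols mean.

```python
# A minimal sketch of a purely syntactic machine, in the spirit of the
# Chinese room: it matches input symbols against a rule book and emits
# whatever output the rules dictate. The "rule book" is an invented toy;
# a program that could actually pass the Turing Test would be vastly larger.

RULE_BOOK = {
    "你好吗": "我很好",          # pairs one symbol string with another,
    "今天天气如何": "天气很好",   # purely by shape, not by meaning
}

def chinese_room(slip: str) -> str:
    """Process a slip by shape-matching alone; no understanding involved."""
    return RULE_BOOK.get(slip, "请再说一遍")  # stock reply for unmatched input

reply = chinese_room("你好吗")  # the room answers appropriately, yet nothing
print(reply)                    # inside it understands either string
```

The point the thought experiment trades on is visible here: every step is a comparison of uninterpreted shapes, so however convincing the output, the processing itself involves only syntax.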

But the whole point is that there is nothing inside the room that understands Chinese. The man in the room doesn’t understand Chinese—they are just meaningless symbols to him. The written volumes don’t understand Chinese either—how could they?—books don’t understand things. Furthermore, Searle argues that the understanding of Chinese doesn’t just magically emerge from the combination of all the parts of the Chinese room: if no one of the parts of the room has any understanding of Chinese, then neither does the whole room. Thus, the Chinese room thought experiment is supposed to be a counterexample to the Turing Test: the Chinese room passes the Turing Test but the Chinese room doesn’t understand Chinese. Rather, it just acts as if it understands Chinese. Without understanding, there can be no thought. The Chinese room, impressive as it is for passing the Turing Test, lacks any understanding and therefore is not really thinking. Likewise, a computer cannot think because a computer is merely a syntactic machine that does not understand the inputs or the outputs. Rather, from the perspective of the computer, the strings of 1s and 0s are just meaningless symbols. [2] The people outside the Chinese room might ascribe thought and understanding of Chinese to the room, but there is neither thought nor understanding involved. Likewise, at some point in the future, someone may finally create a computer program that would pass the Turing Test [3] and we might think that machine has thought and understanding, but the Chinese room is supposed to show that we would be wrong to think this. No merely syntactic machine could ever think because no merely syntactic machine could ever understand. That is the point of the Chinese room thought experiment.

We could put this point in terms of the distinction between original vs. derived intentionality: no amount of derived intentionality will ever get you original intentionality. Computers have only derived intentionality and since genuine thought requires original intentionality, it follows that computers could never think. Here is a reconstructed version of the Chinese room argument:

  1. Computers are merely syntactic machines.
  2. Therefore, computers lack original intentionality. (from 1)
  3. Thought requires original intentionality.
  4. Therefore, computers cannot think. (from 2 and 3)
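The logical form of this reconstruction can be checked mechanically. Here is a sketch in the Lean proof assistant; note that this establishes only that the argument is valid (the conclusion follows from the premises). Whether the premises are true is the philosophical question the rest of the section takes up.

```lean
-- The Chinese room argument, formalized as a propositional schema.
-- h1 compresses steps 1–2: being merely syntactic rules out original
-- intentionality. h2 is step 3. The conclusion is step 4.
theorem chinese_room_argument
    (Syntactic OrigIntent Thinks : Prop)
    (h1 : Syntactic → ¬ OrigIntent)
    (h2 : Thinks → OrigIntent) :
    Syntactic → ¬ Thinks :=
  fun hs ht => h1 hs (h2 ht)
```

Because the argument is valid, a critic must deny a premise: Dennett, as we will see, targets the inference from syntactic processing to the absence of thought.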

How should we assess the Chinese room argument? One thing to say is that Searle seems to make a lot of simplifying assumptions in setting up his Chinese room. For example, the philosopher Daniel Dennett suggests that in order to pass the Turing Test a computer would need something on the order of 100 billion lines of code. It would take the man inside the room many lifetimes to hand-simulate the code in the way that we are invited to imagine. Searle thinks that these practical kinds of considerations can be dismissed—for example, we can just imagine that the man inside the room can operate faster than the speed of light. Searle thinks that these kinds of assumptions are not problematic, for why should mere speed of operation make any difference to the theoretical point he is trying to make—which is that the merely syntactic processing of a digital computer could not achieve understanding? Dennett, on the other hand, thinks that such simplifying assumptions should alert us that there is something fishy going on with the Chinese room thought experiment. If we were really, truly imagining a computer program that could pass the Turing Test, Dennett thinks, then it wouldn’t sound nearly as absurd to say that the computer had thought.

There’s a deeper objection to the Chinese room argument. This response is sometimes referred to as the “other minds reply.” The essence of the Chinese room rebuttal of the Turing Test involves, so to speak, looking at the guts of what is going on inside of a computer. When you look at it “up close,” it certainly doesn’t seem like all of that syntactic processing adds up to intelligent thought. However, one can make exactly the same point about the human brain (something that Searle believes is undoubtedly capable of thought): the functioning of neurons, or even of whole populations of neurons in neuronal spike trains, does not look like what we think of as intelligent thought. Far from it! But of course it doesn’t follow that human brains aren’t thinking! The problem is that in both cases we are looking at the wrong level of description. In order for us to be able to “see” the thought, we must be looking in the right place.

Zooming in and looking at the mechanics of the machines up close is not going to enable us to see the thought and intelligence. Rather, we have to zoom out to the level of behavior and observe the responses in their context. Thought isn’t something we can see up close; rather, thought is something that we attribute to something whose behavior is sufficiently intelligent. Dennett suggests the following cartoon as a reductio ad absurdum of the Chinese room argument:

[Image: Dennett’s cartoon of the Chinese room]

In the cartoon, Dennett imagines someone going inside the Chinese room to see what is going on there. Once inside, she sees the man responding to the questions of the Chinese speakers outside the room. The woman tells the man (perhaps someone she knows), “I didn’t know you knew Chinese!” In response the man explains that he doesn’t, and that he is just looking up the relevant strings of Chinese characters to write in response to the inputs he receives. The woman’s interpretation of this is: “I see! You use your understanding of English in order to fake understanding Chinese!” The man’s response is: “What makes you think I understand English?” The joke is that the woman’s evidence for thinking that the man inside the room understands English is his spoken behavior. This is exactly the same kind of evidence that the Chinese speakers have of the Chinese room. So if the evidence is good enough for the woman inside the room to say that the man understands English, why is the evidence of the Chinese speakers outside the room any different? We can make the problem even more acute. Suppose that we were to look inside the brain of the man inside the room. We would see all kinds of neural activity, and then we could say, “Hey, this doesn’t look like thought; it’s just bunches of neurons sending chemical messages back and forth, and those chemical signals have no inherent meaning.” Dennett’s point is that this response makes the same kind of mistake that Searle makes in supposing a computer can’t think: in both cases, we are focusing on the wrong level of detail. Neither the innards of the brain nor the innards of a computer looks like there’s thinking going on. Rather, thinking only emerges at the behavioral level; it only emerges when we are listening to what people are saying and, more generally, observing what they are doing. This is what is called the other minds reply to the Chinese room argument.

Interlude: Interpretationism and Representation

The other minds reply points us towards a radically different account of the nature of thought and representation. A common assumption in the philosophy of mind (and one that Searle also makes) is that thought (intentionality, representation) is something to be found within the inner workings of the thinking thing, whether we are talking about human minds or artificial minds. In contrast, on the account that Dennett defends, thought is not a phenomenon to be observed at the level of the inner workings of the machine. Rather, thought is something that we attribute to people in order to understand and predict their behaviors. To be sure, the brain is a complex mechanism that causes our intelligent behaviors (as well as our unintelligent ones), but to try to look inside the brain for some language-like representation system is to look in the wrong place. Representations aren’t something we will find in the brain; they are just something that we attribute to certain kinds of intelligent things (paradigmatically human beings) in order to better understand those beings and predict their behaviors. This view of the nature of representation is called interpretationism and can be seen as a kind of instrumentalism. Instrumentalists about representation believe that representations aren’t, in the end, real things.

Rather, they are useful fictions that we attribute in order to understand and predict certain behaviors. For example, if I am playing against the computer in a game of chess, I might explain the computer’s behavior by attributing certain thoughts to it such as, “The computer moved the pawn in front of the king because it thought that I would put the king in check with my bishop and it didn’t want to be in check.” I might also attribute thoughts to the computer in order to predict what it will do next: “Since the computer would rather lose its pawn than its rook, it will move the pawn in front of the king rather than the rook.” None of this requires that there be internal representations inside the computer that correspond to the linguistic representations we attribute. The fundamental insight about representation, according to interpretationism, is that just as we merely interpret computers as having internal representations (without being committed to the idea that they actually contain those representations internally), so too we merely interpret human beings as having internal representations (without being committed to whether or not they contain those internal representations). It is useful (for the purposes of explaining behavior) to interpret humans as having internal representations, even if they don’t actually have internal representations.
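The chess example above can be put into code. The sketch below is a toy invented for illustration (it is not a real chess engine, and interpretationism is not a programming technique): it predicts the computer’s behavior purely by attributing a desire to it ("it would rather lose its less valuable piece"), with no access to, or claims about, the machine’s internal representations.

```python
# Toy sketch of interpretationist prediction: we forecast a system's
# behavior by attributing desires to it, without looking inside it.
# Standard chess piece values; the predictor itself is an invented toy.

PIECE_VALUE = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def predict_sacrifice(threatened_pieces):
    """Predict which threatened piece the computer will give up, by
    attributing to it the desire to keep its more valuable pieces."""
    return min(threatened_pieces, key=lambda p: PIECE_VALUE[p])

# "The computer would rather lose its pawn than its rook":
print(predict_sacrifice(["rook", "pawn"]))  # pawn
```

The prediction succeeds (when it does) because the attribution tracks the system’s behavior, not because the machine literally contains a desire-shaped internal state, which is exactly the interpretationist point.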

Interpretationist accounts of representation raise deep questions about where meaning and intentionality reside, if not in the brain, but we will not be able to broach those questions here. Suffice it to say that the disagreement between Searle and Dennett regarding Searle’s Chinese room thought experiment traces back to what I would argue is the most fundamental rift within the philosophy of mind: the rift between the Cartesian view of the mind, on the one hand, and the behaviorist tradition, on the other. Searle’s view of the mind, specifically his notion of “original intentionality,” traces back to a Cartesian view of the mind. On this view, the mind contains something special—something that cannot be captured merely by “matter in motion” or by any kind of physical mechanism. The mind is sui generis and is set apart from the rest of nature. For Searle, meaning and understanding have to trace back to an “original” mean-er or understand-er. And that understand-er cannot be a mindless mechanism (which is why Searle thinks that computers can’t think). For Searle, as for Descartes, thinking is reserved for a special (one might say, magical) kind of substance. Although Searle himself rejects Descartes’s conclusion that the mind is nonphysical, he retains the Cartesian idea that thinking is carried out by a special, quasi-magical kind of substance. Searle thinks that this substance is the brain, an object that he thinks contains special causal powers that cannot be replicated or copied in any other kind of physical object (for example, an artificial brain made out of metal and silicon).
Dennett’s behaviorist view of the mind sees the mind as nothing other than a complex physical mechanism that churns out intelligent behaviors, which we then classify using a special mental vocabulary—the vocabulary of “minds,” “thoughts,” “representations,” and “intentionality.” The puzzle for Dennett’s behaviorist view is: How can there be meaning and understanding without any original meaner/understander? How can there be only derived intentionality and no original intentionality?

Consciousness and the mind

Interpretationism sees the mind as a certain kind of useful fiction: we attribute representational states (thoughts) to people in virtue of their intelligent behavior, and we do so in order to explain and predict their behavior. The causes of one’s intelligent behavior are real, but the representational states that we attribute need not map neatly onto any particular brain states. Thus, there need not be any particular brain state that represents the content, “Britney Spears is a washed-up pop star,” for example.

But there is another aspect of our mental lives that seems more difficult to explain away in the way interpretationism explains away representation and intentionality. This aspect of our mind is first-person conscious experience. To borrow a term from Thomas Nagel, conscious experience refers to the “what it’s like” of our first-person experience of the world. For example, I am sitting here at my table with a blue thermos filled with coffee. The coffee has a distinctive, qualitative smell which would be difficult to describe to someone who has never smelled it before. Likewise, the blue of the thermos has a distinctive visual quality—a “what it’s like”—that is different from what it’s like to see other colors. These experiences—the smell of the coffee, the look of the blue—are aspects of my conscious experience, and they have a distinctive qualitative dimension: there is something it’s like to smell coffee and to see blue. This qualitative character seems in some sense to be ineffable—that is, it would be very difficult if not impossible to convey what it is like to someone who had never smelled coffee or had never seen the color blue. Imagine someone who was colorblind. How would you explain what blue was to them? Sure, you could tell them that it was the color of the ocean, but that would not convey to them the particular quality that you (someone who is not colorblind) experience when you look at a brilliant blue ocean or lake. Philosophers have coined a term to refer to the qualitative aspects of our conscious experience: qualia. It seems that our conscious experience is real and cannot be explained away in the way that representation can. Maybe there needn’t be anything similar to sentences in my brain, but how could there not be colors, smells, feels? The feeling of stubbing your toe and the feeling of an orgasm are very different feels (thank goodness), but it seems that they are both very much real things.
That is, if neuroscientists were to be able to explain exactly how your brain causes you to respond to stubbing your toe, such an explanation would seem to leave something out if it neglected the feeling of the pain. From our first person perspective, our experiences seem to be the most real thing there are, so it doesn’t seem that we could explain their reality away.

Physicalists need not disagree that conscious experiences are real; they would simply claim that they are ultimately just physical states of our brain. Although that might seem to be a plausible position, there are well-known problems with claiming that conscious experiences are nothing other than physical states of our brain. The problem is that it does not seem that our conscious experience could just reduce to brain states—that is, to the neurons in our brain sending lots and lots of chemical messages back and forth simultaneously. The 17th century philosopher Gottfried Wilhelm Leibniz (1646-1716) was no brain scientist (brain science would take another 250 years to develop), but he put forward a famous objection to the idea that consciousness could be reduced to any kind of mechanism (and the brain is one giant, complex mechanism). Leibniz’s objection is sometimes referred to as “ Leibniz’s mill .” In 1714, Leibniz wrote:

Moreover, we must confess that perception , and what depends on it, is inexplicable in terms of mechanical reasons , that is, through shapes and motions. If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters into a mill. Assuming that, when inspecting its interior, we will only find parts that push one another, and we will never find anything to explain a perception ( Monadology , section 17).

Leibniz uses a famous form of argument here called reductio ad absurdum : he assumes, for the sake of argument, that thinking is a mechanical process and then shows that this assumption leads to an absurd result, from which it follows that thinking cannot be a mechanical process. We could put Leibniz’s exact same point into the language of 21st century neuroscience: imagine that you could enlarge the brain (in a sense, we can already do this with the help of the tools of modern neuroscience). If we were to enter into the brain (perhaps by shrinking ourselves down) we would see all kinds of physical processes going on (billions of neurons sending chemical signals back and forth). However, to observe all of these processes would not be to observe the conscious experiences of the person whose brain we were observing. That means that conscious experiences cannot reduce to physical brain mechanics. The simple point being made is that in conscious experience there exist all kinds of qualitative properties (qualia)—red, blue, the smell of coffee, the feeling of getting your back scratched—but none of these properties would be observed in observing someone’s brain. All you will find on the inside is “parts that push one another” and never the properties that appear to us in first-person conscious experience.

The philosopher David Chalmers has coined a term for the problem that Leibniz was getting at. He calls it the hard problem of consciousness and contrasts it with the easy problems of consciousness . The “easy” problems of mind science involve questions about how the brain carries out functions that enable certain kinds of behaviors—functions such as discriminating stimuli, integrating information, and using that information to control behavior. These problems are far from easy in any normal sense—in fact, they are some of the most difficult problems in science. Consider, for example, how speech production occurs. How is it that I decide what exactly to say in response to a criticism someone has just made of me? The physical processes involved are numerous and include the sound waves of the person’s criticism hitting my eardrum, those physical signals being carried to the brain, that information being integrated with the rest of my knowledge and, eventually, my motor cortex sending certain signals to my vocal cords that then produce the sounds, “I think you’re misunderstanding what I mean when I said…” or whatever I end up saying. We are still a long way from understanding how this process works, but it seems like the kind of problem that can be solved by doing more of the same kinds of science that we’ve been doing. In short, solving the easy problems involves understanding the complex causal mechanisms of the brain. In contrast, the hard problem is the problem of explaining how physical processes in the brain give rise to first-person conscious experience. The hard problem does not seem to be the kind of problem that could be solved by simply investigating in more detail the complex causal mechanism that is the brain. Rather, it seems to be a conceptual problem: how could it be that the colors, sounds, and smells that constitute our first-person conscious experience of the world are nothing other than neurons firing electro-chemical signals back and forth? 
As Leibniz pointed out over 250 years ago, the one seems to be a radically different kind of thing than the other.

In fact, it seems that a human being could have all of the functioning of a normal human being and yet lack any conscious experience. There is a term for such a being: a philosophical zombie . Philosophical zombies are by definition beings that are functionally indistinguishable from you or me but who lack any conscious experience. If we assume that it’s the functioning of the brain that causes all of our intelligent behaviors, then it isn’t clear what conscious experience could possibly add to our repertoire of intelligent behaviors. Philosophical zombies can help illustrate the hard problem of consciousness, since if such creatures are theoretically possible then consciousness doesn’t seem to reduce to any kind of brain functioning. By hypothesis the brain of the normal human being and the brain of the philosophical zombie are identical; it’s just that the latter lacks consciousness whereas the former doesn’t. If this is possible, then consciousness does indeed seem like quite a mysterious thing for the physicalist.

There are two other famous thought experiments that illustrate the hard problem of consciousness: Frank Jackson’s knowledge argument and Thomas Nagel’s what it’s like to be a bat argument.

Nagel’s argument against physicalism turns on a colorful example: Could we (human beings) imagine what it would be like to be a bat? Although bats are still mammals, and thus not so different from human beings phylogenetically, their experience would seem to be radically different from ours. Bats echolocate around in the darkness, they eat bugs at night, and they sleep while hanging upside down. Human beings could try to do all these things, but even if they did, they would arguably not be experiencing these activities like a bat does. And yet it seems pretty clear that bats (being mammals) have some kind of subjective experience of the world—a “what it’s like” to be a bat. The problem is that although we can figure out all kinds of physical facts about bats—how they echolocate, how they catch insects in the dark, and so on—we cannot ever know what it’s like to be a bat. For example, although we could understand enough scientifically to be able to send signals to the bat that would trick it into trying to land on what it perceived as a ledge, we could not know what it’s like for the bat to perceive an object as a ledge. That is, we could understand the causal mechanisms that make the bat do what the bat does , but that would not help us to answer the question of what it’s like to experience the world the way a bat experiences the world . Nagel notes that it is characteristic of science to study physical facts (such as how the brain works) that can be understood in a third-person kind of way. That is, anyone with the relevant training can understand a scientific fact. If you studied the physics of echolocation and also a lot of neuroscience of bat brains, you would be able to understand how a bat does what a bat does. But this understanding would seem to bring you no closer to what it’s like to be a bat—that is, to the first-person perspective of the bat. We can refer to the facts revealed in first-person conscious experience as phenomenal facts . 
Phenomenal facts are things like what it’s like to see blue or smell coffee or experience sexual pleasure…or echolocate around the world in total darkness. Phenomenal facts are qualia, to use our earlier term. Nagel’s point is that if the phenomenal facts of conscious experience are only accessible from a first-person perspective and scientific facts are always third-person, then it follows that phenomenal facts cannot be grasped scientifically. Here is a reconstruction of Nagel’s argument:

  • The phenomenal facts presented in conscious experience are knowable only from the first-person (subjective) perspective.
  • Physical facts can always be known from a third-person (objective) perspective.
  • Nothing that is knowable only from the first person perspective could be the same as (reduce to) something that is knowable from the third-person perspective.
  • Therefore, the phenomenal facts of conscious experience are not the same as physical facts about the brain. (from 1-3)
  • Therefore, physicalism is false. (from 4)

Nagel uses an interesting analogy to explain what’s wrong with physicalism—the claim that conscious states are nothing other than brain states. He imagines an ancient Greek saying that “matter is energy.” It turns out that this statement is true (Einstein’s famous E = mc²), but an ancient Greek person could not have possibly understood how it could be true. The problem is that the ancient Greek person did not have the conceptual resources needed to understand what this statement means. Nagel claims that we are in the same position today when we say that conscious states are brain states. It might be true; we just cannot yet understand what that could possibly mean, because we don’t have the conceptual resources for understanding how it could be true. This conceptual problem is what Nagel is trying to make clear in the above argument, and it is another way of getting at the hard problem of consciousness.

Frank Jackson’s famous knowledge argument makes a similar point. Jackson imagines a super-scientist, whom he dubs “Mary,” who knows all the physical facts about color vision. Not only is she the world’s expert on color vision, she knows all there is to know about color vision. She can explain how certain wavelengths of light strike the cones in the retina and send signals via the optic nerve to the brain. She understands how the brain interprets these signals and eventually communicates with the motor cortex, which sends signals to produce speech such as, “that rose is a brilliant color of red.” Mary understands all the causal processes of the brain that are connected to color vision. However, Mary understands this without ever having experienced any color. Jackson imagines that this is because she has been kept in a black and white room and has only ever had access to black and white things. So the books she reads and the things she investigates of the outside world (via a black and white monitor in her black and white room) are only ever black and white, never any other color. Now what will happen when Mary is released from the room and sees color for the first time? Suppose she is released and sees a red rose. What will she say? Jackson’s claim was that Mary will be surprised because she will learn something new: she will learn what it’s like to see red. But by hypothesis, Mary already knew all the physical facts of color vision. Thus, it follows that this new phenomenal fact that Mary learns (namely, what it’s like to see red) is not the same as the physical facts about the brain (which by hypothesis she already knows).

  • Mary knows all the physical facts about color vision.
  • When Mary is released from the room and sees red for the first time, she learns something new—the phenomenal fact of what it’s like to see red.
  • Therefore, phenomenal facts are not physical facts. (from 1-2)
  • Therefore, physicalism is false. (from 3)

The upshot of both Nagel and Jackson’s arguments is that the phenomenal facts of conscious experience—qualia—are not reducible to brain states. This is the hard problem of consciousness and it is the mind-body problem that arises in particular for physicalism. The hard problem is the reason why physicalists can’t simply claim a victory over dualism by invoking Ockham’s razor. Ockham’s razor assumes that the two competing explanations equally explain all the facts but that one does so in a simpler way than the other. The problem is that if physicalism cannot explain the nature of consciousness—in particular, how brain states give rise to conscious experience—then there is something that physicalism cannot explain and, therefore, physicalists cannot so simply invoke Ockham’s razor.

Two responses to the hard problem

We will consider two contemporary responses to the hard problem: David Chalmers’s panpsychism and Daniel Dennett’s eliminativism . Although both Chalmers and Dennett work within a tradition of philosophy that privileges scientific explanation and is broadly physicalist, they have two radically different ways of addressing the hard problem. Chalmers’s response accepts that consciousness is real and that solving the hard problem will require quite a radical change in how we conceptualize the world. On the other hand, Dennett’s response attempts to argue that the hard problem isn’t really a problem because it rests on a misunderstanding of the nature of consciousness. For Dennett, consciousness is a kind of illusion and isn’t ultimately real, whereas for Chalmers consciousness is the most real thing we know. The disagreement between these two philosophers returns us, again, to the most fundamental divide within the philosophy of mind: that between Cartesians, on the one hand, and behaviorists, on the other.

To understand Chalmers’s response to the hard problem , we must first understand what he means by a “basic entity.” A basic entity is one that science posits but that cannot be further analyzed in terms of any other kind of entity. Can you think of what kinds of entities would fit this description? Or which science you would look to in order to find basic entities? If you’re thinking physics, then you’re correct. Think of an atom. Originally, atoms were thought of as the most basic building blocks of the universe; the term “atom” literally means “uncuttable” (from the Greek “a” = not + “tomos” = cut ). So atoms were originally thought of as basic entities because there was nothing smaller. As we now know, this turned out to be incorrect because there were even smaller particles such as electrons, protons, quarks, and so on. But the hope is that physics will eventually discover those basic entities that cannot be reduced to anything further. Mental states are not typically thought of as basic entities because they are studied by higher-order sciences—psychology and neuroscience. So mental states, such as my perception of the red rose, are not basic entities. For example, brain states are ultimately analyzable in terms of brain chemistry and chemistry, in turn, is ultimately analyzable in terms of physics (not that anyone would care to carry out that analysis!). But Chalmers’s radical claim is that consciousness is a basic entity. That is, the qualia—what it’s like to see red, smell coffee, and so on—that constitute our first-person conscious experience of the world cannot be further analyzed in terms of any other thing. They are what they are and nothing else. This doesn’t mean that our conscious experiences don’t correlate with the existence of certain brain states, according to Chalmers. Perhaps my experience of the smell of coffee correlates with a certain kind of brain state. 
But Chalmers’s point is that that correlation is basic; the coffee-smell qualia are not the same thing as the brain state with which they might be correlated. Rather, the brain state and the conscious experience are just two radically different things that happen to be correlated. Whereas brain states reduce to further, more basic entities, conscious states don’t. As Chalmers sees it, the science of consciousness should proceed by studying these correlations. We might discover all kinds of things about the nature of consciousness by treating the science of consciousness as irreducibly correlational. Chalmers suggests as an orienting principle the idea that consciousness emerges as a function of the “informational integration” of an organism (including artificially intelligent “organisms”). What is informational integration? In short, informational integration refers to the complexity of the organism’s control mechanism—its “brain.” Simple organisms have very few inputs from the environment and their “brains” manipulate that information in fairly simple ways. Take an ant, for example. We pretty much understand exactly how ants work and, as far as animals go, they are pretty simple. We can basically already duplicate the level of intelligence of an ant with machines that we can build. So the informational integration of an ant’s brain is pretty low. A thermostat has some level of informational integration, too. For example, it takes in information about the ambient temperature of a room and then sends a signal to either turn the furnace on or off depending on the temperature reading. That is a very simple behavior, and the informational integration inside the “brain” of a thermostat is very low. Chalmers’s idea is that complex consciousness like ours emerges when the informational integration is high—that is, when we are dealing with a very complex brain. The less complex the brain, the less rich the conscious experience. 
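The thermostat’s control mechanism is simple enough to fit in a few lines of code. The sketch below is purely illustrative (the function name, setpoint, and output labels are invented for this example, and nothing here is Chalmers’s formal measure of integration); it just makes vivid how little information a thermostat actually combines: one input, one comparison, one output.

```python
# A thermostat's entire "mental life," sketched as code.
# Illustrative only: names and the 20-degree setpoint are invented.

def thermostat_step(ambient_temp: float, setpoint: float = 20.0) -> str:
    """One input (temperature), one comparison, one output signal."""
    # The whole "informational integration" is a single comparison:
    # the system distinguishes exactly two global states.
    return "furnace_on" if ambient_temp < setpoint else "furnace_off"

print(thermostat_step(15.0))  # furnace_on
print(thermostat_step(25.0))  # furnace_off
```

Contrast this with a brain integrating billions of signals at once: on Chalmers’s suggested principle, whatever sliver of experience (if any) goes with the thermostat’s single comparison would be correspondingly minimal.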
Here is a law that Chalmers suggests could orient the scientific study of consciousness:

[Figure: a graph plotting complexity of conscious experience (y-axis) against informational integration (x-axis), with conscious experience increasing as informational integration increases.]

This graph just says that as informational integration increases, so does the complexity of the associated conscious experience. Again, the conscious experience doesn’t reduce to informational integration, since that would only run headlong into the hard problem—a problem that Chalmers thinks is unsolvable.

The graph also says something else. As drawn, it looks like even information-processing systems whose informational integration is low (for example, a thermostat or a tree) have some non-negligible level of conscious experience. That is a strange idea; no one really thinks that a thermostat is conscious, and the idea that plants might have some level of conscious experience will seem strange to most. This idea is sometimes referred to as panpsychism (“pan” = all, “psyche” = mind)—there is “mind” distributed throughout everything in the world. Panpsychism is a radical departure from traditional Western views of the mind, which see minds as the purview of animals and, on some views, of human beings alone . Chalmers’s panpsychism still draws a line between objects that process information (things like thermostats, sunflowers, and so on) and those that don’t (such as rocks), but it is still quite a radical departure from traditional Western views. It is not, however, a radical departure from all sorts of older, prescientific and indigenous views of the natural world, according to which everything in the natural world, including plants and streams, possesses some sort of spirit—a mind of some sort. In any case, Chalmers thinks that there are other interpretations of his view that don’t require the move to panpsychism. For example, perhaps conscious experience only emerges once information processing reaches a certain level of complexity. This interpretation would be more consistent with traditional Western views of the mind in the sense that one could specify that only organisms with a very complex information-processing system, such as the human brain, possess conscious experience. (Graphically, based on the above graph, this would mean conscious experience wouldn’t appear at all until much further along the x-axis.)
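The two readings of the graph just described can be made concrete with a hypothetical toy model. Everything here is invented for illustration (the “integration” scores and the cutoff of 50 are arbitrary, and neither function is a real measure of anything); the point is only the shape of the two curves: the panpsychist reading assigns some experience to any nonzero integration, while the threshold reading assigns none until integration passes a cutoff.

```python
# Toy contrast between the panpsychist and threshold readings.
# "integration" is an arbitrary 0-100 score; all numbers are made up.

def conscious_level_panpsychist(integration: float) -> float:
    """Any nonzero integration yields some nonzero experience."""
    return max(0.0, integration)  # thermostats and trees get a sliver

def conscious_level_threshold(integration: float, cutoff: float = 50.0) -> float:
    """Experience emerges only past a complexity cutoff."""
    return max(0.0, integration - cutoff)  # zero below the cutoff

for system, score in [("thermostat", 1.0), ("ant", 10.0), ("human", 90.0)]:
    print(system,
          conscious_level_panpsychist(score),
          conscious_level_threshold(score))
```

On the first reading the thermostat scores a tiny but nonzero level; on the second, both the thermostat and the ant score exactly zero, and only the complex brain registers any experience at all.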

Daniel Dennett’s response to the hard problem fundamentally differs from Chalmers’s. Whereas Chalmers posits qualia as real aspects of our conscious experience, Dennett denies that qualia exist. Rather, Dennett thinks that consciousness is a kind of illusion foisted upon us by our brain. Dennett’s perennial favorite example to begin to illustrate the illusion of consciousness concerns our visual field. From our perspective, the world presented to us visually looks to be unified in color and not possessing any “holes.” However, we know that this is not actually the case. The color-sensitive cones of the retina are densely packed at its center and sparse at its edges and, as a result, you are not actually seeing the colors of objects at the periphery of your visual field. (You can test this by having someone hold up a new object at one side of your visual field and move it back and forth until you are able to see the motion. Then try to guess the color of the object. If you do it correctly, although you’ll be able to see the object’s motion, you won’t have a clue as to its color.) Although it seems to us as if there is a visual field that is wholly colored, it isn’t really that way. This is the illusion of consciousness that Dennett is trying to get us to acknowledge; things are not really as they appear. There’s another aspect of this illusion of our visual field: our blind spot. The location where the optic nerve exits the retina conveys no visual information since there are no photoreceptors there; this is known as the blind spot. There are all kinds of illustrations to reveal your blind spot . However, the important point that Dennett wants to make is that from our first-person conscious experience it never appears that there is any gap in our picture of the world. And yet we know that there is. This again is an illustration of what Dennett means by the illusion of conscious experience. 
Dennett does more than simply give fun examples that illustrate the strangeness of consciousness; he has also famously attacked the idea that there are qualia. Recall that qualia are the purely qualitative aspects of our conscious experiences—for example, the smell of coffee, the feeling of a painful sunburn (as opposed to the pain of a headache), or the feeling of an orgasm. Qualia are what are supposed to create problems for the physicalist, since it doesn’t seem that purely qualitative feels could be nothing more than the buzzing of neurons in the brain. Since qualia are what create the trouble for physicalism and since Dennett is a physicalist, one can understand why Dennett targets qualia and tries to convince us that they don’t exist.

If you’re going to argue against something’s existence, the best way to do that is to first precisely define what it is you are trying to deny. Then you argue that, as defined, such things cannot exist. This is exactly what Dennett does with qualia. [4] He defines qualia as the qualitative aspects of our first-person conscious experience that are (a) irreducibly first-person (meaning that they are inaccessible to third-person, objective investigation) and (b) intrinsic properties of one’s conscious experience (meaning that they are what they are independent of anything else). Dennett argues that these two properties (irreducibly first-person and intrinsic) are in tension with each other—that is, there can’t be an entity which possesses both of these properties. But since both of these properties are part of the definition of qualia, it follows that qualia can’t exist—they’re like a square circle.

Change blindness is a widely studied phenomenon in cognitive psychology. Some of the demonstrations of it are quite amazing and have made it into the popular media many times over the last couple of decades. One of the most popular research paradigms for studying change blindness is called the flicker paradigm. In the flicker paradigm, two images that are identical except for some fairly obvious difference are exchanged in rapid succession, with a “mask” (a black or white screen) between them. What is surprising is that it is very difficult to see even fairly large differences between the two images. So let’s suppose that you are viewing these flickering images and trying to figure out what the difference between them is, but that you haven’t figured it out yet. As Dennett notes, there are of course all kinds of changes going on in your brain as these images flicker. For example, the photoreceptors are changing with the changing images. In the case of a patch of color that differs between the two images, the cones in your retina are conveying different information for each image. Dennett asks: “Before you noticed the changing color, were your color qualia changing for that region?” The problem is that any way you answer this question spells defeat for the defender of qualia, because either they have to give up (a) the irreducible subjectivity of qualia or (b) their intrinsicness. So suppose the answer to Dennett’s question is that your qualia were changing. In that case, you do not have any special or privileged access to your qualia, in which case they aren’t irreducibly subjective, since subjective phenomena are by definition something we alone have access to. So it seems that the defender of qualia should reject this answer. Then suppose, on the other hand, that your qualia weren’t changing. In that case, your qualia can’t change unless you notice them changing. 
But that makes it look like qualia aren’t really intrinsic after all, since their reality is constituted by whether you notice them or not. And “noticings” are relational properties, not intrinsic properties. Furthermore, Dennett notes that if the existence of qualia depends on one’s ability to notice or report them, then even philosophical zombies would have qualia, since noticings/reports are behavioral or functional properties and philosophical zombies would have these by definition. So it seems that the qualia defender should reject this answer as well. But in that case, there’s no plausible answer that the qualia defender can give to Dennett’s question. Dennett’s argument has the form of a classic dilemma , as illustrated below:

[Figure: Dennett’s dilemma. Either your color qualia were changing before you noticed, in which case you lack privileged access to them and they are not irreducibly subjective; or they were not changing until you noticed, in which case their existence depends on your noticing them and they are not intrinsic. Either way, nothing satisfies the definition of qualia.]

Dennett thinks that the reason there is no good answer to the question is that the concept of qualia is actually deeply confused and should be rejected. But if we reject the existence of qualia, it seems that we reject the existence of the very thing that was supposed to have caused problems for physicalism in the first place. Qualia are a kind of illusion, and once we realize this, the only remaining task will be to explain why we have this illusion rather than trying to accommodate qualia in our metaphysical view of the world. The latter is Chalmers’s approach whereas the former is Dennett’s.

Study questions

  • True or false: One popular way of thinking about how the mind works is by analogy with how a computer works: the brain is a complex syntactic engine that uses its own kind of language—a language that has original intentionality.
  • True or false: One good way of explaining how the mind understands things is to posit a little man inside the head that does the understanding.
  • True or false: The mind-body problem is the same, exact problem for both physicalism and dualism.
  • True or false: John Searle agrees with Alan Turing that the relevant test for whether a machine can think is the test of whether or not the machine behaves in a way that convinces us it is intelligent.
  • True or false: One good reply to the Chinese Room argument is just to note that we have exactly the same behavioral evidence that other people have minds as we would of a machine that passed the Turing Test.
  • True or false: According to interpretationism, mental representations are things we attribute to others in order to help us predict and explain their behaviors, and therefore it follows that mental representations must be real.
  • True or false: This chapter considers two different aspects of our mental lives: mental representation (or intentionality) and consciousness. But the two really reduce to the exact same philosophical problem of mind.
  • True or false: The hard problem is the problem of understanding how the brain causes intelligent behavior.
  • True or false: The knowledge argument is an argument against physicalism.
  • True or false: Dennett’s solution to the hard problem turns out to be the same as Chalmers’s solution.

For deeper thought

  • How does the hard problem differ from the easy problems of brain science?
  • If the Turing Test isn’t the best test for determining whether a machine is thinking, can you think of a better test?
  • According to physics, nothing in the world is really red in the way we perceive it. Rather, redness is just a certain wavelength of light that our senses interpret in a particular way (some other creature’s sensory system might interpret that same physical phenomenon in a very different way). By the same token, redness does not exist in the brain: if you are seeing red then I cannot also see the red by looking at your brain. In this case, where is the redness if it isn’t in the world and it also isn’t in the brain? And does this prove that redness is not a physical thing, thus vindicating dualism? Why or why not?
  • Could someone be in pain and yet not know it? If so, how would we be able to tell they were in pain? If not, then aren’t pain qualia real? And so wouldn’t that prove that qualia are real (if pain is)?
  • According to Chalmers’s view, is it theoretically possible for a machine to be conscious? Why or why not?
  • Readers who are familiar with the metaphysics of minds will notice that I have left out an important option: monism , the idea that there is ultimately only one kind of thing in the world and thus the mental and the physical do not fundamentally differ. Physicalism is one version of monism, but there are many others. Bishop George Berkeley’s idealism is a kind of monism as is the panpsychism of Leibniz and Spinoza . I have chosen to focus on physicalism for pedagogical reasons, because of its prominence in contemporary philosophy of mind, because of its intuitive plausibility to those living in an age of neuroscience, and because the nuances of the arguments for monism are beyond the scope of this introductory treatment of the problem. ↵
  • We could actually retell the Chinese room thought experiment in such a way that what the man inside the room was manipulating was strings of 1s and 0s (what is called “binary code”). The point remains the same in either case: whether the program is defined over Chinese characters or strings of 1s and 0s, from the perspective of the room, none of it has any meaning and there’s no understanding required in giving the appropriate outputs. ↵
  • Nothing has yet, claims to the contrary notwithstanding. ↵
  • Daniel Dennett, Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. MIT Press. 2006. ↵

Introduction to Philosophy Copyright © by Matthew Van Cleave is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.


Descartes and the Discovery of the Mind-Body Problem


Consider the human body, with everything in it, including internal and external organs and parts — the stomach, nerves and brain, arms, legs, eyes, and all the rest. Even with all this equipment, especially the sensory organs, it is surprising that we can consciously perceive things in the world that are far away from us. For example, I can open my eyes in the morning and see a cup of coffee waiting for me on the bedside table. There it is, a foot away, and I am not touching it, yet somehow it is making itself manifest to me. How does it happen that I see it? How does the visual system convey to my awareness or mind the image of the cup of coffee?

[Image: jacket cover of The Mind-Body Problem by Jonathan Westphal]

The answer is not particularly simple. Very roughly, the physical story is that light enters my eyes from the cup of coffee, and this light impinges on the two retinas at the backs of the eyes. Then, as we have learned from physiological science, the two retinas send electrical signals down the optic nerves and past the optic chiasm. These signals are conveyed to the so-called visual cortex at the back of the brain. And then there is a sort of miracle. The visual cortex becomes active, and I see the coffee cup. I am conscious of the cup, we might even say, though it is not clear what this means and how it differs from saying that I see the cup.

One minute there are just neurons firing away, and no image of the cup of coffee. The next, there it is; I see the cup of coffee, a foot away. How did my neurons contact me or my mind or consciousness, and stamp there the image of the cup of coffee for me?

It’s a mystery. That mystery is the mind-body problem.

Our mind-body problem is not just a difficulty about how the mind and body are related and how they affect one another. It is also a difficulty about how they can be related and how they can affect one another. Their characteristic properties are very different, like oil and water, which simply won’t mix, given what they are.

There is a very common view that the French philosopher René Descartes discovered, or invented, this problem in the 17th century. According to Descartes, matter is essentially spatial, and it has the characteristic properties of linear dimensionality. Things in space have at least a position, and a height, a depth, and a length, or one or more of these. Mental entities, on the other hand, do not have these characteristics. We cannot say that a mind is a two-by-two-by-two-inch cube or a sphere with a two-inch radius, for example, located in a position in space inside the skull. This is not because it has some other shape in space, but because it is not characterized by space at all.


What is characteristic of a mind, Descartes claims, is that it is conscious, not that it has shape or consists of physical matter. Unlike the brain, which has physical characteristics and occupies space, the mind does not seem to admit of spatial description at all. In short, our bodies are certainly in space, and our minds are not, in the very straightforward sense that assigning linear dimensions and locations to them, or to their contents and activities, is unintelligible. That this straightforward test of physicality has survived all the philosophical changes of opinion since Descartes, almost unscathed, is remarkable.

This issue aroused considerable interest following the publication of Descartes's 1641 treatise "Meditations on First Philosophy," the first edition of which included both Objections to Descartes, written by a group of distinguished contemporaries, and the philosopher's own Replies. Though we do find in the "Meditations" itself the distinction between mind and body, drawn very sharply by Descartes, in fact he makes no mention of our mind-body problem. Descartes is untroubled by the fact that, as he has described them, mind and matter are very different: one is spatial and the other not, so that, it would seem, one cannot act upon the other. Descartes himself writes in his Reply to one of the Objections:

The whole problem contained in such questions arises simply from a supposition that is false and cannot in any way be proved, namely that, if the soul and the body are two substances whose nature is different, this prevents them from being able to act on each other.

Descartes is surely right about this. The “nature” of a baked Alaska pudding, for instance, is very different from that of a human being, since one is a pudding and the other is a human being — but the two can “act on each other” without difficulty, for example when the human being consumes the baked Alaska pudding and the baked Alaska in return gives the human being a stomachache.

essay on the mind body problem

The difficulty, however, is not merely that mind and body are different. It is that they are different in such a way that their interaction is impossible because it involves a contradiction. It is the nature of bodies to be in space, and the nature of minds not to be in space, Descartes claims. For the two to interact, what is not in space must act on what is in space. Action on a body takes place at a position in space, however, where the body is. Apparently Descartes did not see this problem. It was, however, clearly stated by two of his critics, the philosophers Princess Elisabeth of Bohemia and Pierre Gassendi. They pointed out that if the soul is to affect the body, it must make contact with the body, and to do that it must be in space and have extension. In that case, the soul is physical, by Descartes’s own criterion.

In a letter dated May 1643, Princess Elisabeth wrote to Descartes,

I beg you to tell me how the human soul can determine the movement of the animal spirits in the body so as to perform voluntary acts—being as it is merely a conscious substance. For the determination of the movement seems always to come about from the moving body's being propelled—to depend on the kind of impulse it gets from what sets it in motion, or again, on the nature and shape of this latter thing's surface. Now the first two conditions involve contact, and the third involves that the impelling [thing] has extension; but you utterly exclude extension from your notion of soul, and contact seems to me incompatible with a thing's being immaterial.

Propulsion and “the kind of impulse” that set the body in motion require contact, and “the nature and shape” of the surface of the site at which contact is made with the body require extension. We need two further clarifications to grasp this passage.

The first is that when Princess Elisabeth and Descartes mention “animal spirits” (the phrase is from the ancient Greek physician and philosopher Galen) they are writing about something that plays roughly the role of signals in the nerve fibers of modern physiology. For Descartes, the animal spirits were not spirits in the sense of ghostly apparitions, but part of a theory that claimed that muscles were moved by inflation with air, the so-called balloonist theory. The animal spirits were fine streams of air that inflated the muscles. (“Animal” does not mean the beasts here, but is an adjective derived from “anima,” the soul.)

The second clarification is that when Princess Elisabeth writes that “you utterly exclude extension from your notion of soul,” she is referring to the fact that Descartes defines mind and matter in such a way that the two are mutually exclusive. Mind is consciousness, which has no extension or spatial dimension, and matter is not conscious, since it is completely defined by its spatial dimensions and location. Since mind lacks a location and spatial dimensions, Elisabeth is arguing, it cannot make contact with matter. Here we have the mind-body problem going at full throttle.


Descartes himself did not yet have the mind-body problem; he had something that amounted to a solution to the problem. It was his critics who discovered the problem, right in Descartes's solution to it, although it is also true that it was almost forced on them by Descartes's sharp distinction between mind and body. The distinction involved the defining characteristics or "principal attributes," as he called them, of mind and body, which are consciousness and extension.

Though Descartes was no doubt right that very different kinds of things can interact with one another, he was not right in his account of how such different things as mind and body do in fact interact. His proposal, in “The Passions of the Soul,” his final philosophical treatise, was that they interact through the pineal gland, which is, he writes, “the principal seat of the soul” and is moved this way and that by the soul so as to move the animal spirits or streams of air from the sacs next to it. He had his reasons for choosing this organ, as the pineal gland is small, light, not bilaterally doubled, and centrally located. Still, the whole idea is a nonstarter, because the pineal gland is as physical as any other part of the body. If there is a problem about how the mind can act on the body, the same problem will exist about how the mind can act on the pineal gland, even if there is a good story to tell about the hydraulics of the “pneumatic” (or nervous) system.

We have inherited the sharp distinction between mind and body, though not exactly in Descartes’s form, but we have not inherited Descartes’s solution to the mind-body problem. So we are left with the problem, minus a solution. We see that the experiences we have, such as experiences of color, are indeed very different from the electromagnetic radiation that ultimately produces them, or from the activity of the neurons in the brain. We are bound to wonder how the uncolored radiation can produce the color, even if its effects can be followed as far as the neurons in the visual cortex. In other words, we make a sharp distinction between physics and physiology on the one hand, and psychology on the other, without a principled way to connect them. Physics consists of a set of concepts that includes mass , velocity , electron , wave , and so on, but does not include the concepts red , yellow , black , and the like. Physiology includes the concepts neuron , glial cell , visual cortex , and so on, but does not include the concept of color. In the framework of current scientific theory, “red” is a psychological term, not a physical one. Then our problem can be very generally described as the difficulty of describing the relationship between the physical and the psychological, since, as Princess Elisabeth and Gassendi realized, they possess no common relating terms.

Was there really no mind-body problem before Descartes and his debate with his critics in 1641? Of course, long before Descartes, philosophers and religious thinkers had spoken about the body and the mind or soul, and their relationship. Plato, for example, wrote a fascinating dialogue, the Phaedo, which contains arguments for the survival of the soul after death, and for its immortality. Yet the exact sense in which the soul or mind is able to be “in” the body, and also to leave it, is apparently not something that presented itself to Plato as a problem in its own right. His interest is in the fact that the soul survives death, not how, or in what sense it can be in the body. The same is true of religious thinkers. Their concern is for the human being, and perhaps for the welfare of the body, but mainly for the welfare and future of the human soul. They do not formulate a problem with the technical precision that was forced on Princess Elisabeth and Gassendi by Descartes’s neatly formulated dualism.

Something important clearly had changed in our intellectual orientation during the mid-17th century. Mechanical explanations had become the order of the day, such as Descartes’s balloonist explanation of the nervous system, and these explanations left unanswered the question of what should be said about the human mind and human consciousness from the physical and mechanical point of view.

What happens, if anything, for example, when we decide to do even such a simple thing as to lift up a cup and take a sip of coffee? The arm moves, but it is difficult to see how the thought or desire could make that happen. It is as though a ghost were to try to lift up a coffee cup. Its ghostly arm would, one supposes, simply pass through the cup without affecting it and without being able to cause it or the physical arm to go up in the air.

It would be no less remarkable if merely by thinking about it from a few feet away we could cause an ATM to dispense cash. It is no use insisting that our minds are after all not physically connected to the ATM, and that is why it is impossible to affect the ATM’s output — for there is no sense in which they are physically connected to our bodies. Our minds are not physically connected to our bodies! How could they be, if they are nonphysical? That is the point whose importance Princess Elisabeth and Gassendi saw more clearly than anyone had before them, including Descartes himself.

Jonathan Westphal is a Permanent Member of the Senior Common Room at University College, Oxford, and the author of “ The Mind-Body Problem ,” from which this article is adapted.



Mental Causation

Questions about the existence and nature of mental causation are prominent in contemporary discussions of the mind and human agency. Originally, the problem of mental causation was that of understanding how an immaterial mind, a soul, could interact with the body. Most philosophers nowadays repudiate souls, but the problem of mental causation has not gone away. Instead, focus has shifted to mental properties. How could mental properties be causally relevant to bodily behavior? How could something mental be a cause qua mental? After looking at the traditional Problem of Interaction, we survey several versions of the property-based problem along with potential solutions.

  • 1.1 The Importance of Mental Causation
  • 1.2 Is This an Empirical Issue?
  • 2.1 What Is the Mind-Body Nexus?
  • 2.2 The Pairing Problem
  • 2.3 Conservation Laws
  • 2.4 The Completeness of the Physical
  • 3. The Ascent to Properties
  • 4. Problem I: Property Dualism
  • 5.1 The Argument for Anomalous Monism
  • 5.2 The Charge of Epiphenomenalism
  • 5.3 Counterfactual Dependence
  • 5.4 Lawful Sufficiency
  • 5.5 The Ascent to Properties Reconsidered
  • 6.1 Functionalism and Multiple Realizability
  • 6.2 The Exclusion Problem
  • 6.3 Autonomy Solutions
  • 6.4 Inheritance Solutions
  • 6.5 Identity Solutions
  • 6.6 Necessary Effects: A Deeper Problem for Functionalism
  • 7.1 How Could Content Make a Causal Difference?
  • 7.2 Intrinsic Causal Surrogates
  • 7.3 Reasons as Structuring Causes
  • 7.4 Broad Behavior
  • 7.5 The Appeal to Explanatory Practice
  • 8. Metaphysics and the Philosophy of Mind
  • Other Internet Resources
  • Related Entries

1. Preliminaries

Mental causation—the mind’s causal interaction with the world, and in particular, its influence on behavior—is central to our conception of ourselves as agents. Mind–world interaction is taken for granted in everyday experience and in scientific practice. The pain you feel when you sprain your ankle causes you to open the freezer in search of an ice pack. An intention to go to the cinema leads you to get into your car. Psychologists tell us that mental images enable us to navigate our surroundings intelligently. Economists explain fluctuations in financial markets by citing traders’ beliefs about the price of oil next month. In each case, a mental occurrence appears to produce a series of complex and coordinated bodily motions that subsequently have additional downstream effects in the physical world. Instances of apparent mental causation are so common that they often go unremarked, but they are central to the commonsense picture we have of ourselves. It’s not surprising, then, that questions about the nature and possibility of mental causation arise in a variety of philosophical contexts.

Ontology: Suppose you accept the “Eleatic Principle” that power is the mark of being: to exist is to have causal powers (Armstrong 1978, pp. 45–6; Oddie 1982). It’s plausible to think that if the mental has any causal powers at all, it can affect the physical world. Without such powers, the mental faces ontological embarrassment, even elimination.

Metaphysics: Mental causation is “at the heart of the mind-body problem” (Shoemaker 2001, p. 74), often figuring explicitly in how the problem is formulated (Mackie 1979; Campbell 1984; Crane 1999). To ask how mind and body are related just is, in part, to ask how they could possibly affect one another.

Moral psychology: Agency of the sort required for free will and moral responsibility appears to require mental causation. If your behavior is not caused by your mind’s activities—its deliberations, decisions, and the like—what sense would it make to hold you responsible for what your body does? You would appear to be scarcely more than a passive observer of your body’s activities. We would then need to abandon what Strawson (1962) calls our “reactive attitudes”, the moral attitudes and feelings (e.g., gratitude, resentment) so central to our interpersonal lives.

Action theory: It is widely believed that psychological explanation hinges on the possibility of mental causation. If your mind and its states, such as your beliefs and desires, were causally isolated from your bodily behavior, then what goes on in your mind could not explain what you do (Davidson 1963; Mele 1992; for dissent, see “noncausalists” such as Ginet 1990; Sehon 2005; Tanney 2013; and see the essays in D’Oro 2013). These observations about agency suggest a more basic conceptual point: if minds did not influence behavior, in what sense would anyone truly act ? Sounds would be made, but no one would mean anything by them. Bodies would move, but no one would thereby do anything (Malcolm 1968; Horgan 2007).

Although each of the above points could be contested, collectively they create pressure to address the problem of mental causation—or rather, problems: as will become clear, there is more than one way in which puzzles about the mind's causal efficacy can arise.

At least since Hume , philosophers have assumed that causal questions are largely empirical. We look to science to tell us, for example, the moon’s role in causing the tides, or smoking’s contribution to lung cancer: these are not considered philosophical questions. It might seem equally obvious that the mind’s causal role in producing behavior is also a matter for science to settle. So is it in fact the case that working scientists, and in particular, psychologists, find it necessary to appeal to distinctively mental phenomena to account for behavior? Is there evidence in neuroscience that mental states and processes figure in the production of actions?

Although most psychologists would without hesitation accept the causal interaction of minds and bodies, a small but growing number of empirical researchers have insisted that the evidence supports some version of epiphenomenalism, the thesis that mental states, while caused by physical happenings, exert no efficacy in return. Wegner, a psychologist, contends that accumulated empirical evidence overwhelmingly supports epiphenomenalism, at least with respect to conscious willing (Wegner 2002, 2004). He draws on influential work by Libet (1985, 2001, 2004) and others to argue that conscious intending is itself a product of nonconscious processes that do the real causal work, so that free will is "an illusion". If Wegner and his colleagues are right, these results could have ancillary implications for the physical efficacy of mental states generally. (Note that some dualists (e.g., Lowe 2006; Gibb 2013) have appealed to the same work by Libet to defend their own non-traditional models of psychophysical causation.)

Because this research has received extensive treatment in recent work on free will, we will not consider it further, but instead refer interested readers to the sources cited above and to Mele 2014 for critical discussion and references. Here we simply note that traditional and contemporary attempts to assess the efficacy of mental states have run up against philosophical difficulties as well, difficulties that tend to overshadow the experimental evidence accumulated thus far. In this sense, the efficacy of mind is quite unlike that of, say, the moon or smoking. This will, we hope, become clear in the discussion to follow.

2. The Problem of Interaction

Some historians (e.g., Matson 1966; King 2007) say the mind-body problem is relatively recent, the most important source being Descartes’s “real distinction” between mind and body. That said, you can find topics closely related to mental causation in, for example, Plato’s Phaedo and Aristotle’s De Anima , and it might turn out that many features of the contemporary debate are present in some form or other in pre-modern texts (Caston 1997). Skirting such historical questions, we begin with Descartes, who, for better or worse, set the agenda for modern discussions of mental causation. The cluster of causal problems arising from the Cartesian conception of mind is The Problem of Interaction .

According to Descartes, minds and bodies are distinct kinds of thing, or, in the technical terminology of the day, distinct kinds of substance . Bodies, he held, are spatially extended substances, incapable of feeling or thought; minds, in contrast, are unextended, thinking, feeling substances: souls. (We use “soul” with no theological implications to designate minds considered in the Cartesian way as immaterial substances.) Despite recognizing these deep differences, Descartes accepted the common belief that mind and body causally interact: “Everyone feels that he is a single person with both body and thought so related by nature that the thought can move the body and feel the things which happen to it” (in Cottingham et al. 1991, p. 228). But if minds and bodies are so radically different, it is not easy to see how they could interact. Descartes was well aware of the difficulty. Princess Elisabeth of Bohemia puts it forcefully to him in a 1643 letter, pressing Descartes to tell her

how the human soul can determine the movement of the animal spirits in the body so as to perform voluntary acts—being as it is merely a conscious substance. For the determination of movement seems always to come about from the moving body’s being propelled—to depend on the kind of impulse it gets from what sets it in motion, or again, on the nature and shape of this latter thing’s surface. Now the first two conditions involve contact, and the third involves that the impelling thing has extension; but you utterly exclude extension from your notion of soul, and contact seems to me incompatible with a thing’s being immaterial (in Anscombe and Geach 1954, pp. 274–5).

Elisabeth is expressing the prevailing mechanistic view as to how causation of bodies works: it must involve the cause’s impelling the body, where this requires contact between cause and effect. Since a soul could never come into contact with a body—souls have no spatial location—an immaterial soul could never impel, and so could never causally interact with, a body.

Elisabeth’s worries might seem quaint and outdated. Causal relations countenanced by contemporary physics can take several forms, not all of which are of the push-pull variety. Why shouldn’t soul–body interaction simply be included as another sort of “non-mechanistic” causation (Richardson 1982)? But Elisabeth’s objection is in fact just one version of a more general worry about soul–body interaction, a worry that rests on the following thesis about causation:

  • (CN) Any causal relation requires a nexus, some interface by means of which cause and effect are connected.

Elisabeth presumes that when an effect is bodily motion, the required nexus is spatial contact. But even if she is wrong about this (Garber 1983), (CN) nevertheless poses problems for the dualist: if contact is not the mind–body nexus, what is?

One line of thought appeals to the transference theory of causality. Here the idea is that identity—the persistence of something from cause to effect—provides the needed link. If something in a soul could become present in a body, this could bridge the immaterial and material. Descartes himself appears to accept such a theory, declaring in the Third Meditation that there could be nothing in an effect not present in its total efficient cause (Descartes 1642/1996, p. 28). But now the problem reasserts itself: if, as the substance dualist insists, bodies and minds are radically different, they have no properties in common. According to Descartes, a body’s properties are modes of extension, ways of being extended, while a soul’s properties are modes of something quite different, thought or consciousness. If causation involved transference, a Cartesian soul could not interact with a body (but see Hart 1988; Hoffman and Rosenkrantz 1991).

Does a dualist need to accept (CN), however? The notion of a causal nexus has come under criticism, often from philosophers working in the Humean tradition (Blackburn 1990). More generally, (CN) and kindred principles might be thought to rest on a conception of causality that is now obsolete, finding no place in modern physics (for further discussion, see the metaphysics of causation , §2). But the next three versions of the problem can arise even for those who reject the need for a causal nexus.

A second version of the Problem of Interaction is the "Pairing Problem" (Kim 1973, 2005; Sosa 1984; Foster 1991, ch. 6). Imagine two exactly similar minds M1 and M2 and the bodies B1 and B2 to which they are "attached", that is, the bodies with which they directly interact. In virtue of what is M1 causally paired with B1, and M2 with B2?

This is not the epistemological question of how we could know that these are the pairings (although this is troublesome, too). The question, rather, is metaphysical: in virtue of what are these the pairings? If minds were, like bodies, located in space, causal pairing could be achieved by the relative spatial locations of the substances (Bailey et al. 2011). Particular minds might be inside or "inhabit" particular bodies. But if minds are non-spatial souls, relative spatial location is unavailable to fill the pairing role. And since M1 and M2 are, by hypothesis, exactly similar, we cannot appeal to the different intrinsic properties that they might possess.

In reply, a dualist could appeal to “individualistic” powers (Unger 2006, pp. 242–59; Foster 1991, pp. 167–8). Powers are standardly thought of as powers to interact with some type of object. A key has the power to open this lock, but only by virtue of having the power to open any lock of this kind, the power to open any intrinsically comparable lock. Individualistic powers, in contrast, are powers possessed by an object to affect or be affected by a particular object. Think of a key with the power to open this lock, but without the power to open any intrinsically indiscernible lock. Likewise, a soul could have the power to interact with a particular body and no other. As the key example suggests, however, it is by no means obvious that powers could be individualistic in this sense (but see Audi 2011).

A third version of the Problem of Interaction appeals to conservation laws. The leading idea is simple: Soul–body interaction would have to change the amount of energy in the physical universe. When souls act, new energy would appear in, say, the brain. When souls are acted upon, some quantity of energy in the brain would vanish. But either scenario would contravene established conservation laws, which permit only the conversion and redistribution of energy (or mass–energy) within the physical universe, not its addition or subtraction.
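The leading idea can be put schematically. The following is my own reconstruction of the objection, not the entry's notation; it simply makes explicit where the conflict between dualist interaction and conservation is supposed to lie.

```latex
% Schematic reconstruction of the conservation objection (my formalization,
% not the entry's own). Let E(t) denote the total energy of the physical
% universe at time t.
\begin{align*}
\text{(1) Conservation:}\quad & E(t) \text{ is constant: energy may be converted or}\\
                              & \text{redistributed, but never added or subtracted.}\\[4pt]
\text{(2) Interaction:}\quad  & \text{if a soul acts on a brain at } t_0 \text{, new energy}\\
                              & \text{appears there: } E(t_0^{+}) \neq E(t_0^{-}).\\[4pt]
\text{(3) Conclusion:}\quad   & \text{(1) and (2) cannot both hold; so either souls do}\\
                              & \text{not act on brains, or conservation fails.}
\end{align*}
```

As the replies surveyed below indicate, premise (2) is the contested one: a soul might, for instance, merely redistribute the brain's energy without changing its quantity.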

This version of the problem has dogged dualism since the scientific revolution (Lowe 1992; Papineau 2000), and a number of contemporary philosophers present conservation as a major obstacle for dualists (Fodor 1981; Dennett 1991, p. 35; Heil 2012, p. 26; Papineau 2000). That said, turning the leading idea into a compelling argument has proven difficult. First, the conservation laws do not dictate what kinds of energy exist, only that they must operate conservatively. Hence, if sui generis mental energy existed, as long as it operated conservatively, this would be consistent with the conservation laws. Appealing to this fact, Hart (1988) advances a substance dualism and, combining it with a transference theory of causation (§2.1), argues that psychophysical causation consists in the transfer of such psychic energy. (For arguments against the existence of sui generis mental energy, see Papineau 2000). Secondly, what is needed is a conservation law weak enough to have been confirmed by physical science, but strong enough to preclude soul–body interaction. Averill and Keating (1981) consider a number of candidate “laws” and argue that none meets both criteria. Thirdly, it’s not clear in any case that a soul would have to add energy to (or receive it from) the brain in order to interact with it. Broad (1925, pp. 103–9) suggests a soul could act merely by redistributing the brain’s energy without changing its quantity. Furthermore, Lowe (2000) and Gibb (2013) both advance dualist models of psychophysical causation according to which the mental does not affect the brain either by affecting the amount of energy in it or by redistributing it. (For more recent discussion of these and other complexities, see Montero 2006; Koksvik 2007; Gibb 2010.)

A fourth version of the Problem of Interaction is related to the third, but, because it is more prominent in the contemporary literature, especially in some of the “property-based” problems we examine below, we will develop this last version at greater length. The first premise is:

The Completeness of the Physical: Every physical effect has a sufficient physical cause.

When you trace the causal history of any physical effect—that is, of anything physical that has a cause—you will never need to appeal to anything non-physical. The physical universe contains within itself the resources for a full causal explanation of any of its (caused) elements, and in this sense is “complete”. The point applies, then, to whatever might occur to or within our bodies. Any instance of bodily behavior has a sufficient physical cause, which itself has a sufficient physical cause, and so on. In tracing the causal history of what we do, we need never appeal to anything non-physical.

This principle appears frequently in the mental causation literature under a number of labels: most common are variations of Completeness of the Physical (Crane 1995, 2001; Papineau 1993, 2000; O’Connor and Churchill 2010) or Physical Closure (Crane 1992; Baker 1993; Melnyk 2003; Kim 2005). We’ll call it Completeness for short.

Labels aside, several versions of the premise appear in the literature, and they can differ in strength. Note that the principle as formulated says nothing about whether the non-physical can affect the physical; a strengthened version prohibits this. ( Closure is sometimes reserved for this stronger principle: LePore and Loewer 1987; Kim 1998, p. 40; Marcus 2005; compare Strong Causal Closure in Montero 2003.) An even more ambitious version blocks the non-physical from being cause or effect; such is suggested in Davidson’s work (see §5.1 and McLaughlin 1989, who uses Physical Comprehensiveness for this thesis.) As for weaker versions, Completeness could be limited to physical effects within the human body without affecting its relevance to the current topic. Note also that the principle is apparently committed to deterministic physical causation; a weakened version permits probabilistic causes. (For complications with such a weakening, see Montero 2003, and for other challenges with formulating Completeness , Lowe 2000; Gibb 2015.)

For simplicity, we stay with the principle as formulated at the outset. Why think that it’s true? Perhaps it is a conceptual truth: for an effect to be physical is, at least in part, for it to have a physical cause. This defense turns on the proper analysis of the concept physical , itself the subject of a contentious literature (see physicalism ). Here we simply note that the principle does not seem analytic; it appears to be a substantive, empirical claim about the causal structure of the universe. (For more on the conceptual defense, see Crane 1991; Papineau 1991, 1993, §1.9; Lowe 1996, p. 56.)

It’s natural, then, to look to science for a defense, and especially physics (or physiology). Appeals to “current physical theory” (Antony and Levine 1997, p. 100), “the development of the sciences” (LePore and Loewer 1987, p. 630), and “physics textbooks” (Melnyk 2003, p. 289) are common, but what exactly in physical science supports the premise? An appeal to the conservation laws (§2.3) might be thought to generate one such argument. Another argument is the no-gap argument. (See, for example, Melnyk 2003, pp. 288–90; Papineau 1993, pp. 31–32). Physics has been hugely successful in identifying the causes of various kinds of physical event. To do so physicists have only needed to appeal to physical events. Not once have they had to appeal to sui generis mental events. Without doubt the causal account that physics provides of physical events contains gaps. But the crucial point is that it is highly unlikely that physics will ever need to appeal to sui generis mental causes to fill these gaps—or so proponents of the no-gap argument claim. A similar no-gap argument can be presented at the level of neurophysiology. (See, for example, Melnyk 2003, p. 187).

We will look at challenges to Completeness in a moment, but note for now that the premise by itself does not preclude the efficacy of souls. Even if every physical effect has a sufficient physical cause, some physical effects might have non-physical causes as well. This latest version of the Problem of Interaction thus requires a second premise:

No Overdetermination : There is no systematic overdetermination of physical effects.

This principle enjoys wide support in the literature. It is said that postulating systematic overdetermination in this context is “absurd” (Kim 1993a, p. 281), one of the “nonstarters” in the mental causation debate (Kim 1998, p. 65). But why? Perhaps it just looks like bad engineering (Schiffer 1987, p. 148). Or maybe the problem is that it would involve an “intolerable coincidence” (Melnyk 2003, p. 291): every time you act, there are two independent causal processes—one from your brain, another from your soul—converging on the same effect.

With the two premises now in place, the Problem of Interaction in our final version is straightforward. Assume for reductio that our souls routinely cause behavior. By Completeness , such effects also have sufficient physical causes, so behavior is systematically overdetermined. But this contradicts No Overdetermination . The dualist’s options would then seem to be severely limited. One is to embrace epiphenomenalism, a doctrine on which the mental, while caused by the physical, exerts no “downward” causal influence in return. A more radical option, parallelism, depicts bodies and souls as running in tandem, with no causal influence in either direction.
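Laid out as a numbered reductio (our reconstruction of the argument just given, not a formulation from the literature):

```latex
\begin{enumerate}
  \item Every physical effect has a sufficient physical cause. % Completeness
  \item There is no systematic overdetermination of physical effects. % No Overdetermination
  \item Suppose, for reductio, that souls routinely cause bodily behavior.
  \item Bodily behavior consists of physical effects, so by (1) each such effect
        also has a sufficient physical cause.
  \item By (3) and (4), behavior is systematically overdetermined,
        contradicting (2).
  \item Hence (3) is false: souls do not routinely cause behavior.
\end{enumerate}
```

Setting the argument out this way makes the dualist's limited options visible: one must deny (1), deny (2), or accept the conclusion, which is where epiphenomenalism and parallelism come in.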

The two premises can, however, be challenged. Start with Completeness . Baker (1993), not herself a Cartesian dualist, argues that if the principle threatens to undermine our ordinary (and scientific) explanatory practices—many of which cite the mental—it’s Completeness that has to go. Entrenched explanatory practices trump any abstract metaphysical principles with which they might conflict (see also §§6.3, 7.5). Others argue that physical science, far from supporting the principle, may in fact undermine it. Hendry (2006) finds indications of “downward causation” in chemistry, while Stapp (2005) culls evidence from contemporary physics suggesting that there are, contrary to Completeness , causal gaps in the physical world, gaps filled in by the mental (see also Sturgeon 1998; Davies 2006). Emergentists in general deny the principle, either on scientific grounds or by appeal to our conscious experiences of agency (see emergent properties , esp. §4). And although the death of emergentism has been declared more than once on empirical grounds (McLaughlin 1992; Papineau 2000), the view continues to attract philosophers and scientists. (See Wilson 2021 and the contributions to Clayton and Davies 2006; Bedau and Humphreys 2008; Macdonald and Macdonald 2010; Paoletti and Orilia 2017; and Gibb, Hendry and Lancaster 2019.)

No Overdetermination has been targeted as well. Mills (1996), for example, defends mental–physical overdetermination as the most plausible route for the dualist to take. Overdetermination is plausible, the reasoning goes, if for any behavioral effect B , both a non-physical (mental) cause M and physical cause P satisfy the following counterfactual conditionals (among others):

  1. If M had occurred in the absence of P , B would still have occurred.
  2. If P had occurred in the absence of M , B would still have occurred.

If the dualist can reasonably claim that (1) and (2) are true, this will make a strong prima facie case for overdetermination. Along different lines, Lowe (2003) presents a model of dualist interaction on which, owing to systematic mind–body dependencies, overdetermination is not the intolerable coincidence worrying opponents of dualism. And more generally, the ban on systematic overdetermination has come under increased scrutiny in the context of the Exclusion Problem, to be discussed in §6.

Cartesian dualism has fallen out of favor among philosophers and cognitive scientists. There are, to be sure, non-Cartesian forms of substance dualism that might have the resources to confront the Problem of Interaction in its various guises (Hasker 1999; Lowe 2006). But the dominant view today would appear to be that if the mind is a substance at all, it is a physical substance—the brain, for instance. This sort of “substance monism” is in fact a consequence of the more general token identity theory: every concrete mental particular (token) is physical. We will assume token identity in what follows: minds, mental events, and any other mental “objects” are physical (see the mind/brain identity theory ).

What becomes of the Problem of Interaction on such a view? It would seem to dissolve. While causation between brain and body is complex, even to the point of being empirically inscrutable, it does not pose the same problems as soul–body interaction. There are no special philosophical problems with brain–body interaction, nor is there anything especially odd or worrisome about an event in your brain causing, say, your arm to go up. Any philosophical questions here belong to the metaphysics of causation generally and have no special application to mental causation.

Nevertheless, philosophical worries about mental causation persist. The theoretical and commonsense considerations that lead us to think the mind or mental events cause behavior should also lead us to think that they do so as mental, i.e., in virtue of their mental properties . Properties figure in causal relations (Kim 1973; Mackie 1974, ch. 10; Armstrong 1989, pp. 28–9; Ehring 1997). Drop a square paperweight into soft clay and it will produce an impression. The shape of the impression can be traced to the shape of the paperweight, the depth of the impression to the mass of the paperweight. Here shape and mass are “causally relevant” or “causally efficacious” properties. In particular, they are relevant to certain properties of the impression. By contrast, other properties of the paperweight, such as its color or value, appear to be irrelevant to producing this kind of impression. Or consider a soprano who sings a high note, thereby shattering a glass. The sound, we can suppose, has a meaning—a semantic property—but it is the sound’s acoustic properties that are operative in producing the shattering; the semantic properties play no causal role, at least not with respect to this effect (Dretske 1989).

By themselves, these observations pose no special problem for the philosopher of mind. While the notion of a causally relevant property calls for analysis (Horgan 1989; Dardis 1993; Braun 1995), there is no reason at the outset for a token-identity theorist to be especially concerned about the efficacy of mental properties. Gus smiles because of the way his food tastes, that phenomenal property; Lilian walks to school along a particular route because of what she believes, that representational property. Assuming the mind is something physical, why should a mind’s causing behavior in virtue of its mental properties be any more puzzling than a paperweight’s causing a square impression in virtue of its shape?

Recent philosophical work on mental properties has revealed that matters are not so simple, however. Mental properties are alleged to have, not just one, but up to four features that make their efficacy philosophically puzzling, no less problematic than mind–body interaction is for the Cartesian dualist. These features will be discussed in the following sections. Each feature makes it appear as though mental properties, or some important family of them, are irrelevant to the production of behavior. The threat is a form of epiphenomenalism: even if minds and mental events are causes, they are not causes as (or qua ) mental.

This “new epiphenomenalism” (Campbell 1984, ch. 7) immediately confronts a particularly strong version of property dualism , one insisting that mental properties are sui generis , perhaps dependent on, but in no way reducible to the dispositional and structural properties recognized by the physical sciences. Some property dualists accord this status only to a certain class of mental property, namely qualia , the “what it’s like” features of conscious experience. Other property dualists, including some emergentists , are willing to extend the thesis to all mental properties.

Suppose that this robust form of property dualism is true. Can mental substances or events cause what they do qua mental, in virtue of their mental properties? The arguments against soul–body interaction, now couched in terms of properties, could enter again here. For example, if you were worried about the mind–body nexus for souls (§2.1), it seems you should also wonder how non-physical properties can find any traction in the physical world. Similarly, Completeness (§2.4) seems to lose none of its attractiveness when formulated explicitly in terms of properties. You could add to the principle a clause stipulating that a “sufficient physical cause” is one that’s sufficient in virtue of its physical properties (see also §5.4). Bring in No Overdetermination , and the efficacy of mental properties is again threatened. The arguments here and the responses to them are structurally similar to those in §2, so we will not pursue this version of the property-based problem further. (Property dualism also faces the Exclusion Problem, to be discussed in §6.)

5. Problem II: Anomalous Monism

Another version of the property-based problem of mental causation can be traced to Davidson’s influential paper, “Mental Events” (Davidson 1970). There Davidson defends an account of the mind–body relation he calls “ anomalous monism ”, a view that at first appears to save mental causation, but in the end might deny efficacy to mental properties.

At the core of anomalous monism are three principles:

Principle of Causal Interaction : Some mental events interact causally with physical events.

Principle of the Nomological Character of Causality : Events related as cause and effect fall under strict laws.

Anomalism of the Mental : There are no strict laws on the basis of which mental events can be predicted and explained.

According to Davidson, the apparent tension among these principles gives rise to the mind–body problem. Most of us unquestioningly assent to the first principle. The second is more controversial, although Davidson provides little argument for it. Here we just note that it’s not as strong as it seems, for “strict” is not synonymous with “deterministic”. A strict law is exceptionless, but could be either deterministic or probabilistic.

The third principle is the most contested of the three. It rules out strict laws in psychology; in particular—and most importantly for present concerns—it rules out strict psychophysical laws, that is, laws connecting the mental and physical. According to Davidson, application conditions for mental predicates feature a rationality constraint absent from the application conditions for physical predicates. In ascribing beliefs to others, for instance, we employ a principle of charity that counsels us to make these believers as rational as possible. But this normative constraint has, as Davidson puts it, “no echo” in the physical realm. In this regard, mental and physical predicates are misaligned in a way that precludes strict psychophysical laws.

Now the latter two principles seem to rule out the first. If causation requires strict laws, and there are no strict psychophysical laws, how can the mental be causally efficacious? But Davidson notes there is a way to save the first principle: as long as every mental event is physical, the first principle is compatible with the other two. In this way, the three principles entail event monism . At the same time, Davidson’s view entails type dualism , for the anomalism of the mental (the third principle) precludes identities between mental and physical types. Most philosophers find it natural to say that types are properties, so Davidson is sometimes described as a property dualist, a convenient label for the time being (but see §5.5).

Davidson’s property dualism, and the principle that lies behind it, have led to a serious charge: anomalous monism robs mental properties of any causal significance.

Suppose Gus decides to illuminate the room and subsequently flips a switch, thereby turning on the light. In this case we have a cause that, if Davidson is right, could be given both a mental and a physical description, and an effect that has a physical description. If this means that the cause has a mental property (in virtue of which it satisfies a mental description) and a physical property (in virtue of which it satisfies a physical description), we are faced with a further question. Granting that the event with the mental property is the event with a physical property, why should we think that the mental property had anything at all to do with the event’s physical effect? Davidson’s latter two principles appear to block such relevance. If all causal relations are subsumed under strict laws, and if there are no strict psychophysical laws, then any instance of mind–body causation is subsumed only by physical laws. But then it looks as though only a mental event’s physical properties are relevant to what it causes. The mental properties (or mental types) are causally irrelevant (see, e.g., Stoutland 1980; Honderich 1982; Sosa 1984; a review of this literature is in McLaughlin 1989).

LePore and Loewer (1987) look to counterfactuals to answer this charge (see also Horgan 1989; LePore and Loewer 1989; Block 1990; Loewer 2007). The central idea is that anomalous monism permits physical effects to depend counterfactually on mental properties. And such dependence secures an important kind of causal relevance for the mental, the sort that LePore and Loewer call “bringing about”. On their view, a ’s being F brings about b ’s being G when the following conditions are met:

  1. a causes b .
  2. a is F and b is G .
  3. If a had not been F , b would not have been G .
  4. a ’s being F and b ’s being G are logically and metaphysically independent.

Now suppose a mental event, such as a decision to turn on the light, causes Gus to move his finger, thereby flipping the light switch. Here the crucial counterfactual is: If the cause had not been a decision to turn on the light, the effect would not have been a switch-flipping. This is plausible, as are similar counterfactuals in a wide range of cases. But are such counterfactuals compatible with anomalous monism? LePore and Loewer say Yes: while Davidson prohibits strict laws connecting mental and physical properties, he apparently leaves room for non-strict laws. Such laws are enough to ground or “support” counterfactuals. Consider, by analogy, the properties of being a match-striking and being a match-lighting . If there is a law connecting such properties, it is evidently non-strict: striking causes lighting only ceteris paribus . Nevertheless, we can assert with confidence, after a given lighting, that if the match had not been struck, it wouldn’t have lit. Non-strict psychophysical laws would similarly appear to ground counterfactuals connecting mental and behavioral properties.
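Compressed into counterfactual notation (our reconstruction; the box-arrow abbreviates the counterfactual conditional and is not LePore and Loewer's own symbolism), a 's being F brings about b 's being G just in case:

```latex
% "a's being F brings about b's being G" (BA), per the four conditions above
\[
\mathrm{BA}(Fa,\,Gb) \;\leftrightarrow\;
\begin{cases}
  \text{(1)} & a \text{ causes } b \\
  \text{(2)} & Fa \wedge Gb \\
  \text{(3)} & \neg Fa \;\Box\!\!\rightarrow\; \neg Gb \\
  \text{(4)} & Fa \text{ and } Gb \text{ are logically and metaphysically independent}
\end{cases}
\]
```

Condition (3) carries the philosophical weight: it is the counterfactual dependence that non-strict psychophysical laws are supposed to ground.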

This counterfactual defense is attractive for a number of reasons. It captures a sense in which mental properties make a difference to behavior, but in a way that’s apparently compatible with anomalous monism. It respects our causal intuitions about a wide range of cases. And it fits well with the more general counterfactual theory of causation , which many philosophers have found independently plausible. Moreover, Davidson himself seems sympathetic to the defense (Davidson 1993; but see §5.5).

In spite of these advantages, a worry is that the pertinent counterfactuals don’t after all ensure causal relevance, and in this sense don’t vindicate anomalous monism. This objection can take the form of direct counterexamples (Braun 1995; Garrett 1999), but here we look at a broader concern.

When a counterfactual is true, there should be something in the world that makes it true. Even granting that, if the cause had not had its mental property, the effect would not have had its behavioral property, in virtue of what is this true? This truthmaker, not the counterfactual itself, is what matters in determining whether a property is causally relevant. And the worry is that once we look at the truthmakers in the mental case, the threat of epiphenomenalism crops up again. Although the effect counterfactually depends on the mental property, this is only because the mental property depends on a physical property doing the real work. The mental property looks like a freeloader (Kim 1998, pp. 70–3, 2007; compare Crane 2008 on a similar issue).

LePore and Loewer discuss a version of this worry. Condition (3), an objector might say, is too crude to test for causal relevance, for the counterfactual holds only because removing F from a also removes some other property F* of a , and it’s the absence of F* that’s responsible for b ’s not being G . A better counterfactual test evaluates the effect’s status given that a is not F and all of a ’s other properties—or at least all that are potential causal rivals to F —are held fixed. If b is not G in that case, only then can we credit F with causal relevance. But mental properties fail this more refined test. Consider again Gus’s decision to turn on the light, and remove its mental property, this time holding fixed its physical properties. It seems clear that he would still flip the switch. After all, the physical properties of the cause figure in an exceptionless law according to Davidson. It looks as if the physical properties of your decision “screen off” the mental property, making the latter irrelevant.
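The refined test can be stated in the same notation (again a reconstruction, with Fix( a , F ) a label we introduce for the condition that a 's other properties, or at least F 's potential causal rivals, are held fixed):

```latex
% Refined counterfactual test for the causal relevance of F to b's being G
\[
F \text{ is causally relevant to } Gb \text{ only if: }\quad
\big(\neg Fa \wedge \mathrm{Fix}(a,F)\big) \;\Box\!\!\rightarrow\; \neg Gb
\]
```

On this test Gus's decision fails: remove its mental property while holding the physical properties fixed and, given the strict physical law, the switch-flipping still occurs, so the counterfactual is false.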

LePore and Loewer concede that mental properties are screened off by physical properties. But they argue that this more refined test is too demanding, for it would also mean that the physical properties of a mental cause are irrelevant. Note in particular that the decision’s mental properties screen off its physical properties: if the cause had lacked its physical properties yet had still been a decision to turn on the light, it would have caused Gus to flip the switch ( ceteris paribus : here a hedged law, which anomalous monism permits, is in play). Screening off thus goes both ways, and since few would want to deny causal relevance to the physical properties, we should not let screening off impugn the significance of mental properties either.

Antony (1991) replies that there is no symmetry here, at least not given anomalous monism. While the decision’s physical properties screen off its mental properties, the reverse doesn’t hold. Suppose again that the cause had lacked its physical properties but had still been a decision to turn on the light. On anomalous monism, Antony argues, there’s no saying what Gus’s decision would have caused, for mental properties, being anomalous, place no constraints on the causal structure of the world. (See also Leiter and Miller 1994.)

The freeloader problem arises in a variety of contexts in the mental causation literature, not just in discussions of anomalous monism. It will return under a number of guises in what follows.

Fodor (1989) apparently agrees that counterfactuals capture a kind of causal relevance, but he argues that LePore and Loewer have settled for too little. On Fodor’s view, mental properties can be relevant to behavior in a stronger sense in which they are sufficient for their effects and in this way “make a difference”. Fodor spells out sufficiency in terms of laws: a property makes a difference if “it’s a property in virtue of the instantiation of which the occurrence of one event is nomologically sufficient for the occurrence of another” (Fodor 1989, p. 65, note omitted).

Might such an account save anomalous monism from the charge of epiphenomenalism? On the face of it, it cannot, for as we’ve noted, mental properties on Davidson’s view appear only in hedged laws, laws that include an implicit ceteris paribus rider. Consider a candidate psychological law:

  (L) If an agent, a , wants x , believes x is obtainable by doing y , and judges y best, all things considered, then a forms the intention to y and subsequently y ’s on the basis of this intention, ceteris paribus .

The ceteris paribus clause here would seem to block the mental properties in question from being causally sufficient for the behavioral effect. But perhaps not: according to (L), the mental properties are sufficient for the behavioral effect when the ceteris paribus conditions are satisfied. And this sort of causal sufficiency, Fodor argues, is all anyone could reasonably want for mental properties.
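Schematically (our notation, with C standing for the satisfaction of the ceteris paribus conditions), the hedged law (L) has the form:

```latex
% (L) with the ceteris paribus rider made explicit as a condition C
\[
\forall a\,\big[\, C \wedge \mathrm{Want}(a,x) \wedge
   \mathrm{Believe}(a,\,\mathrm{Obtains}(x,y)) \wedge \mathrm{JudgesBest}(a,y)
   \;\rightarrow\; \mathrm{Intends}(a,y) \wedge \mathrm{Does}(a,y) \,\big]
\]
```

When C is satisfied, the mental properties cited in the antecedent are nomologically sufficient for the behavioral effect, which is the sense of sufficiency Fodor has in mind.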

But can Davidson help himself to such an account? Davidson appears to think so (1993, p. 10), as does McLaughlin (1989), who also appeals to hedged laws. Fodor, however, doubts his account is compatible with anomalous monism; such doubts are developed by Antony (1991) and Kim (1993b). The question turns largely on Davidson’s reasons for thinking the mental is anomalous, and on whether these reasons permit him to appeal to hedged laws in the way the laws account requires.

Supposing anomalous monism is compatible with Fodor’s account, you might still wonder whether nomological sufficiency is enough for causal relevance. An account of causal relevance in terms of laws is natural given the tight connections between laws and properties (see laws of nature , §3). But those sympathetic to Fodor’s position might still ask (as Fodor himself does) what the causal mechanism is in mental–physical interactions. For example, it could turn out that the reason psychophysical laws such as (L) hold is that mental properties are themselves grounded in more basic, physical properties, and that only the latter do genuine causal work: mental properties again look like freeloaders (§5.3), merely piggybacking on the real bearers of causal powers (LePore and Loewer 1989; Block 1990; Leiter and Miller 1994; Marras 2003).

Davidson replies to his critics in “Thinking Causes” (Davidson 1993). In that paper he sometimes speaks favorably of “causally efficacious” properties, and he helps himself to both hedged laws and counterfactuals to secure the efficacy of mental properties. But his considered position appears less conciliatory. He clearly denies a crucial assumption of his critics, namely, that causes do their causing in virtue of their properties. When an event causes something, it doesn’t do so qua this or that: it just causes what it does, full stop. Were this so, none of the property-based problems discussed here could get off the ground (Crane 1995; Campbell 1997; Gibb 2006).

Such a response seems to miss the point (Kim 1993b; McLaughlin 1993; Sosa 1993). All parties in this dispute agree that mental events can cause physical events. The difficulty is to understand how they could do so in virtue of their mental (rather than their physical) properties, how they could have physical effects qua mental. The principle of the Nomological Character of Causality (§5.1) apparently requires that, when one event causes another, it does so solely in virtue of its physical properties.

But Davidson is part of a nominalist tradition that rejects properties, at least as his critics conceive of them. Davidson instead formulates anomalous monism in terms of predicates and descriptions. An event is mental if it answers to a mental predicate (that is, it can be picked out using a mental description), physical if it answers to a physical predicate (it can be referred to using a physical description). Davidson’s critics assume that if an event answers to both sorts of predicate, it includes a mental property and a physical property. But Davidson thinks of the mental–physical distinction as marking merely a difference in description, not as the expression of an ontological divide between kinds of property. For Davidson, then, it makes no more sense to ask whether an event had a particular effect in virtue of being mental or in virtue of being physical than it would to ask whether its effect stemmed from its being described in English or in German. (For further discussion, see Heil 2009.)

6. Problem III: Exclusion

While reflection on property dualism or anomalous monism can lead to our next property-based problem, another route is by way of the doctrine of non-reductive physicalism . Like the property dualist, the non-reductive physicalist holds that mental properties are not physical. But unlike the property dualist, the non-reductive physicalist insists on a strong dependence of the mental on the physical: mental properties are “realized” or “constituted” by physical properties. This strong tie between the mental and physical is the subject of a large contemporary literature, some of which we touch on below.

Non-reductive physicalism in its current form grew out of functionalism , according to which mental properties are functional properties. To be in pain, for example, is a matter of being in a state with a certain causal profile, a state that’s caused by tissue damage, and causes certain overt responses (moans, attempts to repair the damage) as well as other mental states (e.g., beliefs that one is in pain). But, argue functionalists, it is most unlikely that we could identify a single kind of physical state playing this role in every actual and possible case of pain. Human beings differ in endless tiny physiological ways: your neurological states, including states you go into when you are in pain, probably differ subtly from another person’s. Human beings’ neurological states, in turn, differ from those of a cat or a dog, and perhaps dramatically from states of an octopus. You might even imagine encountering aliens with vastly different biologies, but to which you would unhesitatingly ascribe pains.

Here we arrive at a core thesis of functionalism: states of mind are “ multiply realizable ”. The property of being in pain can be realized in a wide variety of physical (and perhaps non-physical) systems. A creature is in pain in virtue of being in a state with the right sort of causal profile, some sort of neurological state, say. But the property of being in pain cannot be identified with this neurological state, because creatures of other kinds can be in pain in virtue of being in vastly different physical conditions. Functionalists often put this point by saying that mental properties are “higher-level” properties, properties possessed by objects by virtue of their possession of appropriate “lower-level” properties, their realizers.

Now, however, we are again confronted with the threat of epiphenomenalism. If mental properties are not physical, how could they make a causal difference? Whenever any mental (functional) property M is instantiated, it will be realized by some particular physical property P . This physical property is unproblematically relevant to producing various behavioral effects. But then what causal work is left for M to do? It seems to be causally idle, “excluded” by the work of P .
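The exclusion reasoning just sketched can be set out as a schema (our reconstruction, echoing the premises of §2.4):

```latex
\begin{enumerate}
  \item $M$ is instantiated and is realized by some physical property $P$.
  \item $M \neq P$. % non-reductive physicalism: mental properties are not physical
  \item $P$ is causally sufficient for the behavioral effect $B$. % Completeness
  \item $B$ is not systematically overdetermined. % No Overdetermination
  \item So $M$ does no causal work with respect to $B$: it is excluded by $P$.
\end{enumerate}
```

Responses to the problem, surveyed below, typically target one of premises (2) through (4) or deny that (5) follows from them.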

This version of the problem of mental causation has appeared in various guises. Much of the contemporary literature is inspired by Malcolm 1968, especially as refined in Kim 1989, 1993c, 1998, 2005. Whatever its precise formulation (cp. Shapiro and Sober 2007; O’Connor and Churchill 2010; historical perspective is in Patterson 2005), the Exclusion Problem has clear affinities with the other problems we’ve looked at so far. Consider our claim that the realizing property P must play a role in producing a particular behavioral effect. This would seem to be justified either by an appeal to Completeness (§2.4) or to Davidson’s doctrine (§5.1) that causal relations must fall under strict (and so physical) laws. Moreover, the argument’s depiction of P and M as competing for causal relevance—one must exclude the other—would seem to require a principle such as No Overdetermination (§2.4). And the fundamental worry that P might exclude M looks exactly like the freeloader problem that badgers mainstream attempts to save anomalous monism (§§5.3–4).

In spite of these similarities, the Exclusion Problem is in one important respect unique: unlike the problems we’ve looked at so far, exclusion worries generalize to a wide range of phenomena outside of the mental. Any properties, mental or otherwise, that are multiply realizable in physical systems are threatened with causal irrelevance. (For discussion of this and related issues, see Kim 1998, pp. 77–87; Noordhof 1999; Bontly 2001; Gillett and Rives 2001; Block 2003; Walter 2008.)

Some philosophers (e.g., Fodor 1989; Baker 1993; Shapiro 2010) take this general nature of the problem to be an encouraging sign. We happily accept biological, or meteorological, or geological properties as causally significant despite their being distinct from their physical realizers. Why then imagine that exclusion threatens the efficacy of mental properties? Others turn this argument around, insisting that the alleged efficacy of biological and other “special science” properties is by no means sacrosanct (Antony 1995). Causal powers we attribute to them must respect what our best metaphysics tells us. And in any case, the central issue is not so much whether mental properties (and the rest) are causally relevant to the production of physical effects, but how they could be (Kim 1998, pp. 61–2, 78–9; Antony and Levine 1997, p. 96; McLaughlin 2006). Even if the Exclusion Problem, because it generalizes, does not tempt us to embrace epiphenomenalism, it presses on us a responsibility to explain how mental properties could play a causal role given that they appear to be screened off by their physical realizers.

The Exclusion Problem is the subject of a large and still-growing literature. In the next few sub-sections, we look at some of the main lines of response, dividing them into three broad categories.

The Exclusion Problem presents us with a picture on which higher-level mental properties compete with their lower-level physical realizers. Physical properties are unproblematically relevant in the production of behavior, and so mental properties must either find a way to do the work that their realizers are already doing or face exclusion. But some philosophers would insist that this picture is deeply misleading: mental properties enjoy causal relevance in their own right and are not threatened by exclusion from physical properties.

This “autonomy solution” (Jackson 1996, §2) can take a variety of forms. One version starts by observing that psychological explanations—and more generally, explanations in the special sciences—are in an important sense independent of physical explanations. Psychological explanations typically abstract away from details of lower-level implementation, appealing instead to their own distinctive kinds and laws. Explanations in the special sciences can thus proceed independently of those in the lower-level physical sciences. If the structure of the causal order reflects these explanatory practices, mental properties need not be threatened by exclusion. Mental and physical causes can peacefully coexist. (Variations on this theme appear in Dennett 1973; Baker 1993; Van Gulick 1993; Garrett 1998; Hardcastle 1998; Marcus 2001; Menzies 2003; Raymont 2003; Ross and Spurrett 2004; Woodward 2008; Zhong 2014; see also §7.5.)

This appeal to explanation can naturally lead to (though it does not entail) another autonomy solution, the dual explanandum strategy. The Exclusion Problem presents a mental (functional) property M and its physical realizer P as competing to be causally relevant to the same effect, namely a bit of behavior. But M might not be threatened with exclusion if M and P are causally relevant to different properties of the effect. Return for a moment to the paperweight example from §3. The shape of the paperweight is relevant, not to the impression simpliciter, but to the impression’s shape. In general, a causally relevant property is relevant to some particular property of the effect (Horgan 1989). Perhaps, then, M and P do not causally compete because they are parts of separate, autonomous causal lines to different properties of the effect.

Consider one way this might work. Behavioral properties, just like mental properties, appear to be multiply realizable. For example, there is more than one way to hail a cab, many different physical realizations of this kind of behavior. Now suppose a belief causes you to hail a cab. In accordance with Completeness, some physical property P of the belief is sufficient for your behavior. But strictly speaking, P is relevant only to the particular way in which you hailed the cab, the particular physical realization of your hailing. What, then, is responsible for your behavior’s higher-level property of simply being a cab-hailing? It’s natural to suppose that it’s a higher-level property of your belief, namely, some mental property, such as the belief’s representational content. (For proposals along these lines, see Yablo 1992; Thomasson 1998; Marras 1998; Crisp and Warfield 2001; Gibbons 2006; Schlosser 2009; see also §§7.3–4.)

A strength of autonomy solutions is that they secure a causal role for mental properties without running afoul of Completeness, as the physical realization of behavior is always matched with some physical properties of its cause. But do autonomy solutions respect No Overdetermination? Here matters are not as straightforward. Autonomy solutions present us with two properties, P and M, each sufficient for the behavioral effect. It might seem as if the dual explanandum strategy avoids this awkwardness, since P and M are relevant to different properties of the effect. But even here, overdetermination threatens, as the effect’s behavioral property is produced twice: directly by M, and indirectly by P, which produces the behavioral property’s physical realizer, which itself necessitates the behavioral property.

Proponents of autonomy solutions might grant these points but claim that such “overdetermination” is innocuous, far from the “intolerable coincidence” threatening Cartesian dualist accounts of mental causation (§2.4), for the two causal lines present are not independent. (The nature of overdetermination has itself become the subject of a literature inspired, in part, by the Exclusion Problem. See, e.g., Funkhouser 2002; Bennett 2003; Sider 2003; Walter 2008; Carey 2011; Bernstein 2016; Kroedel 2019.)

Autonomy solutions can make it appear that the causal powers of mental properties “float free” of their physical realizers, bringing to mind the doctrine of parallelism (for replies, see Thomasson 1998; Marcus 2001, §3.3). Some non-reductive physicalists have accordingly looked to tie the causal powers of mental properties more closely to those of their physical realizers. The idea is that mental properties are so intimately related to their realizers that the former “inherit” the causal powers of the latter. The relation between levels is not one of rivalry, such that the physical might exclude the mental, but one of cooperation. Nor, moreover, does there seem to be any threat of overdetermination, since the mental works through the physical. (Compare the metaphor of “transparency” in Jackson 1996.)

On some versions of the inheritance solution, what the higher-level mental property derives from its physical realizer is some weaker or “lower-grade” form of causal relevance. For example, Jackson and Pettit (1988, 1990) distinguish the robust “causal efficacy” of physical properties from the weaker “causal relevance” of higher-level properties. Causal relevance in this sense is an explanatory notion: as one might put it, behavior is produced at the physical level, but by being realized in the physical, mental properties inherit an explanatory relevance they wouldn’t have otherwise. An advantage of such a view is that it accords a derived form of relevance to mental properties, but in a way that respects both the priority of physical causation embodied in Completeness and the principle of No Overdetermination. (For similar views, see Kim 1984; Levine 2001, §1.5; Segal 2009. Those who appeal to the counterfactual dependence of behavior on the mental [§5.3] might also fall into this category. For an answer to the charge that counterfactual dependence is “causation lite”, see Loewer 2007; Menzies 2007.)

If such a weakening seems to amount to epiphenomenalism, you might look for an inheritance solution on which mental properties are efficacious in the same sense that their physical realizers are (compare the “homogeneity assumption” in Crane 1995). How can this be done without violating No Overdetermination? Well, suppose that a mental property is, in spite of being distinct from its physical realizer, immanent in this realizer; M, that is, is somehow nothing over and above P. In that case, any causal work done by P is, in a straightforward way, inherited by M. Overdetermination is avoided because M’s work is included in P’s.

The metaphysical details of such a picture matter. Otherwise, “immanence”, “nothing over and above”, and the like will turn into mere labels for that psychophysical relation, we know not what, that solves the Exclusion Problem. Accordingly, several promising lines of inquiry have been pursued. Mental and physical properties are said to be related by, for example, the determinable–determinate relation (Yablo 1992; critics include Ehring 1996; Worley 1997; Funkhouser 2006), constitution (Pereboom 2002; critics include Ney 2007; Heil 2011), metaphysical necessitation (Bennett 2003, 2008), physical explicability (Antony 1991), physical implementation (Marras 2003), and grounding (Kroedel and Schulz 2016).

You might ask why any of these relations should secure the desired solution. One thought is that if mental properties are immanent in their physical realizers, the causal powers of a mental property are included among those of its realizer. Consider again mental property M and one of its realizers in a given instance, P. Plausibly, M’s powers are included in P’s. Both properties, for example, have the power to cause a certain kind of behavior, but because of its greater “specificity”, P has, in addition to this, powers that M lacks. Now in general we don’t think that wholes causally compete with, or are excluded by, their parts. When Gus steps on Lilian’s toe, his foot’s causing Lilian discomfort doesn’t exclude Gus’s causing her discomfort. Both Gus and his foot coexist as causes, without competition and, we might add, without overdetermination. A similar point could be made about properties: if the causal powers of M are included in those bestowed by P, then P’s causal relevance to behavior, far from excluding M’s, includes it. (Approaches along these lines have been developed in Antony 1999; Shoemaker 2001, 2007; Wilson 1999, 2011; Clapp 2001; critical discussions include Heil 1999, 2011; McLaughlin 2007; Kim 2010; Ney 2010; Audi 2012.)

Autonomy and inheritance solutions grant at least this much to the Exclusion Problem: mental and physical properties are numerically distinct, however intimately they are otherwise related. But a third sort of strategy tries to undermine the argument at exactly this point: any mental property just is its physical realizer. If M = P, there’s no question of one’s excluding the other, nor is there any mystery of how M can work through P, for M and P are one and the same.

This sort of psychophysical property identity would seem to be blocked by the multiple realizability argument sketched earlier. But that argument, in spite of its wide appeal, has come under attack from several directions (see multiple realizability, §2). For example, some (Kim 1992; Lewis 1994; Jackson 1995; Heil 2003) take the argument to show, not that mental properties are distinct from their physical realizers, but that what we thought was one kind of mental property is actually many. Pains realized by different physical properties, in spite of having the same name (“pain”), are different, though similar, mental properties. There is no such property as pain simpliciter, only pain-for-this-physical-structure and pain-for-that-physical-structure. Once such “structure-specific” identities are allowed, we can say that M (now just, say, pain-for-human-beings) is identical with P, M’s “realizer” in human beings (replies include Fodor 1997; Block 1997; Marras 2003; Moore and Campbell 2010).

This solution comes at a price: it forces us to abandon the belief that pain is a single, natural kind. There is, however, a way to preserve this doctrine while pursuing a strategy that’s otherwise similar to the one just sketched. The essential idea is that “property” as we’ve used the term so far is ambiguous. A property could be what characterizes an object (event), or what unifies several objects as a “one across many”. Now suppose the characterizing properties are tropes: particularized properties, unique to each object. And suppose the unifying properties are something else—call these “types”. If the mental “properties” that are causally relevant to behavior are tropes, and the mental “properties” mentioned in the multiple realizability argument are types, there’s no reason to think that this argument rules out psychophysical property-identities in any way that leads to exclusion worries. The M-trope and the P-trope are one and the same trope falling under two types, mental and physical. This proposal allows for a single type pain shared by diverse creatures; it’s just that this type is not the same sort of entity (a trope) that’s efficacious in the production of behavior (Heil 1992; Robb 1997; Heil and Robb 2003; what appears to be a similar view is defended by Macdonald and Macdonald 1986, 1995a; see also Whittle 2007.)

One worry about this proposal is that it appears to raise the Exclusion Problem all over again, this time at the level of properties (tropes). If a single property is both mental and physical, Completeness and No Overdetermination force us to say that it’s efficacious only qua physical, not qua mental. (For this and other criticisms, see Noordhof 1998; Raymont 2001; Gibb 2004; Macdonald and Macdonald 2006; Alward 2008; Maurin 2008; see Robb 2013 for some replies.)

Functionalism, along with any non-reductive theory of mind, faces the problem just discussed. But even if exclusionary worries are finessed, functionalism faces an additional and possibly more fundamental problem.

As we noted earlier, functionalism characterizes states of mind causally. To be in a given mental state is to be in a state with the right sort of causal profile, a state bearing the right sorts of relation to other states. Think of functional states as nodes in a network of states, the identity of which depends on the relations they bear to other nodes, and think of the realizers as occupants of these nodes. All there is to a node is the potential causal relations it bears to other nodes (not so for the occupants, which have intrinsic properties). Suppose, then, that F and G are functional properties—nodes in this network—and that all there is to something’s being F is its being a G-causer. The resulting generalization, “Fs cause Gs”, is no doubt true, but it is vacuous, equivalent to the generalization that G-causers cause Gs.

This appears to strip functional properties of their causal efficacy. Why? One line of thought appeals to Hume’s celebrated doctrine that there can be no necessary connections between distinct existences. A mental property and its would-be effect are distinct, yet functionalism entails that they enjoy a necessary connection. On the Humean doctrine, such a connection could not be causal. Another, closely related, version of the problem requires that causal relations be subsumed by empirical laws. But there are no such laws available for functional properties if all of the relevant generalizations are analytic and vacuous. (The foregoing argument in either version threatens to generalize to all dispositional properties: see dispositions, §6. For the problem aimed at functionalism in particular, see Block 1990; Rupert 2006; functionalism, §5.2.)

This argument echoes the logical connection argument advanced in the 1950s and ’60s against causal accounts of action (e.g., Melden 1961, pp. 52–3): given that reasons (desires, intentions) are not logically distinct from the actions they rationalize, reasons could not cause actions. In response, Davidson (1963) noted that logical connections hold among predicates or descriptions of events, not among events themselves. A cause could be described in various ways, some of which will involve the effect: consider “the cause of the fire caused the fire”. This is hardly informative, but it’s not thereby false. And of course the statement, far from precluding a causal relation, explicitly asserts it. That said, if the claim is true, it should be possible to identify the cause of the fire independently of reference to the effect—as “the match’s igniting”, for instance. In defense of his own causal theory of action, Davidson argued that such a re-description of mental causes is always available, at least in principle (see §5.1).

But Davidson’s saving move appears not to be available for the functionalist, for in the case of functional states and properties, no such independent descriptions are available, as the nature of a functional property is exhausted by its place in the causal network.

The functionalist has a number of options available, some of them mirroring solutions to the Exclusion Problem (Rupert 2006 provides a critical survey). For example, a functionalist could settle for a weaker, explanatory role for functional properties, leaving causal efficacy to the realizers of functional states (§6.4; see, e.g., Segal 2009; compare Roth and Cummins 2014). Or a functionalist might identify states of mind with their realizers (§6.5); indeed, some of the early functionalists were identity theorists (Lewis 1966, 1994; Armstrong 1968/1993). This would permit the sort of re-description that the more mainstream version of functionalism apparently blocks. A third option is to look for non-vacuous, empirical generalizations subsuming functional properties (Antony and Levine 1997). Yet a fourth option rejects the Humean doctrine, permitting necessary connections between a causally efficacious property and its effect. Such a proposal would find a home in the more general “causal theory of properties” defended by Shoemaker (1980, 1998) and others.

7. Problem IV: Externalism

Our final version of the property-based problem is restricted to intentional mental properties, that is, properties in virtue of which some mental states—propositional attitudes, perceptual experiences, mental images, and so on—are about something, properties in virtue of which mental states have representational content. We assume here that externalism is true, so that the contents of representational states of mind depend, not merely on intrinsic features of those states, but on relations, in particular, on the causal, social, and historical relations agents bear to their surroundings. In the simplest case, Lilian is thinking about water (H2O) because she stands in the right sorts of causal relation to water. The key move here is to reject the idea that meaningful objects or states owe their meaning to their intrinsic make-up alone.

The causally problematic feature for externalism is this contextual or relational component of representational mental states. Suppose that our mental representations are physical structures in the brain. Now suppose with the externalist that the content of these representations is determined, not just by our intrinsic features, but by context as well. Lilian (or Lilian’s brain) represents a tree in the quad by going into state T. But T represents a tree in the quad, not by virtue of T’s (or, for that matter, Lilian’s) intrinsic makeup, but by virtue of T’s (and by extension Lilian’s) standing in the right kind of relation to the tree. The very same kind of state in a different context (in the brain of someone in different circumstances) might represent something very different—or nothing at all.

Now if the content of Lilian’s thought that there is a tree in the quad is “broad”, if the significance of her thought depends on factors outside Lilian’s body, then it is indeed hard to see how this content could figure in a causal account of her actions, including Lilian’s expressing her belief that there is a tree in the quad by uttering the sentence, “There is a tree in the quad”. This is bad news for any attempt to explain why we do what we do by reference to the contents of our thoughts.

Consider an analogy (Dretske 1998). Gus inserts a quarter into a vending machine. The coin has a range of intrinsic qualities common to quarters, but its being a quarter does not depend solely on these intrinsic qualities: a quarter’s intrinsic qualities would be shared by a decent counterfeit. The coin’s being a quarter depends on its having the right sort of history: it was produced in a United States mint. This is something the vending machine cares nothing about. The machine reacts only to the coin’s intrinsic features. You might put this by saying that the coin affects the machine, not qua quarter, but only qua possessor of a particular kind of intrinsic makeup. (Vending machines are built to take advantage of the contingent fact that objects with this intrinsic makeup are almost always quarters.)

The worry is that we apparently operate, in important respects, as vending machines do. We respond to incoming stimuli solely in virtue of our intrinsic makeup and the intrinsic character of the stimuli. But if our thoughts possess their content in virtue of our standing in complicated environmental–social–historical relations to our surroundings, it is hard to see how such contents could make a causal difference in our psychological economy, how they could figure in the production of behavior. Thoughts have contents, but these contents could have no direct influence on the operation of mental mechanisms (Stich 1978; Kim 1982; Fodor 1980, 1987, ch. 2, 1991; Jackson and Pettit 1988).

One general line of response notes that whenever we explain a bit of behavior by appeal to extrinsic content, there is a local, intrinsic property available as a “causal surrogate” to produce the behavior (Crane and Mellor 1990). Such a surrogate may be neurophysiological or, as on computationalist views, a complex of “formal” or “syntactic” properties of internal representations. Now by itself, this point seems just to highlight the problem: if intrinsic surrogates are always needed, all the more reason to reject the efficacy of content. Some have indeed drawn such a lesson, concluding either that content has no role to play in an explanatory psychology (Stich 1978, 1983), or perhaps that psychological explanations appealing to content were never causal to begin with (Owens 1993; see also the noncausalists cited in §1.1).

But this might be too hasty. Far from precluding the causal efficacy of content, the surrogates might in fact play a role in ensuring it. Note that while Lilian’s intrinsic properties don’t guarantee the contents of her beliefs, her intrinsic properties are, in her environment, reliably correlated with these contents—so reliably, in fact, that content, in spite of being extrinsic, enters into the counterfactuals or laws often thought to ground causal efficacy. It seems clear, after all, that if Lilian had not believed there was water in front of her, she would not have extended her hand. This counterfactual could be secured by the fact that Lilian’s believing “There’s water in front of me” covaries with some internal state of her brain, but the counterfactual, for all that, is still true. A similar point could be made using (hedged) laws connecting content with behavior. The terrain here in any case is similar to that explored earlier in §§5.3–4, though the extrinsic nature of content introduces its own complexities. (On the counterfactuals, see Mele 1992, ch. 2; Yablo 1997; on the laws, see Braun 1991; Fodor 1995.)

There’s a more direct way that the intrinsic surrogates might secure the efficacy of content: perhaps the surrogate properties are content, or rather a kind of content. Distinguish narrow from broad content. Think of narrow content as the content of a representational state of mind minus its “broad” components. Consider Lilian (or Lilian’s brain) and an intrinsically indiscernible brain in a vat wired to a supercomputer. Grant that Lilian and the envatted brain entertain intrinsically indiscernible thoughts with utterly different representational contents. Now imagine that we could abstract a common element from the contents of Lilian’s and the brain’s intrinsically indiscernible thoughts. This element is their narrow content. Because narrow content is something all intrinsic duplicates must have in common, the hope is that such content could be the very intrinsic properties that produce behavior.

The notion of narrow content might raise suspicion, however. Return to the vending machine. The quarter Gus inserts in the machine has a particular value owing to relations it bears to outside goings-on: it was minted in the Denver mint. A counterfeit placed in the machine could have the very same intrinsic makeup as the quarter, but it would lack the quarter’s value. It looks as though it is the quarter’s intrinsic makeup, not its value, that matters to the operation of the machine. Now imagine someone arguing that a quarter and an intrinsically indiscernible slug do in fact share a kind of value: narrow value. Because narrow value accompanies an object’s intrinsic qualities, we need not regard narrow value as epiphenomenal. But what could narrow value be? Whatever it is, could it in any way resemble value ordinarily conceived—broad value? Narrow value looks like a phony category posited ad hoc to accommodate an otherwise embarrassing difficulty. Nevertheless, some philosophers remain optimistic about the prospects of a viable internalist account of content, one that would allow fully fledged thoughts to have a role in the production of behavior. (For references and further discussion, see narrow mental content.)

Another, much different, attempt to preserve a causal role for content can be found in Dretske 1988, 1989, 1993. So far we’ve assumed that a behavioral event is distinct from the mental event that causes it. On Dretske’s view, however, behavior is a process that includes, as a component, its mental cause. When mental event a causes bodily movement b, the behavior in this case is not b itself, but the process of a’s causing b. When Lilian raises her hand because she wants to get the teacher’s attention and she believes that raising her hand will accomplish this end, her behavior is not her hand’s going up, but the process of this belief-desire pair’s causing her hand to go up.

Dretske grants that when mental event a initiates (“triggers”) a process ending in bodily movement b, a does so solely in virtue of its intrinsic makeup. Nevertheless, a’s relational, intentional properties have a causal role, for they can be relevant to the fact that a causes b. Reasons are “structuring causes” of behavior: it’s because of what a indicates that it was “recruited” during the learning process as a cause of b. (Indication here is a matter of reliable co-variation.) It’s because, for example, Lilian’s belief indicates what it does—raising one’s hand (in these circumstances) is a way to get the teacher’s attention—that it was (together with the relevant desire) recruited as a cause of her hand-raising. Relational, intentional mental properties thus become causally relevant to behavior, because they are relevant to structuring the very causal processes that, on Dretske’s view, constitute instances of behavior.

Dretske’s proposed solution quickly produced a number of responses (e.g., Smith 1990; Block 1990; Baker 1991; Horgan 1991; Kim 1991; Mele 1991). One question is whether relational, intentional properties in fact play a causal role in the structuring (or “wiring”) of causal processes in the brain. Even during the learning process, the states of Lilian’s brain would seem to be sensitive only to local, intrinsic features of one another, features that screen off external goings-on. Dretske might be able to avoid such screening-off by appealing to the counterfactual dependence of behavior-structuring on these goings-on. His view would then stand or fall with the success of counterfactual theories of causal relevance (§5.3). A second question is whether intentional states, even if they were relevant in the way Dretske says they are, deliver the kind of causal relevance we want. When Lilian raises her hand, the structuring of the relevant processes in her brain has already occurred. If intentional properties are relevant at all, then, they are apparently relevant only to what happened in the past during the learning process. But we normally regard mental properties as causally relevant to what’s going on here and now, the very time when Lilian (or anyone) acts (but cf. Allen 1995; Dretske replies to critics in his 1991, esp. pp. 210–7; for a more recent discussion see Hofmann and Schulte 2014).

Dretske’s proposal is a version of the dual explanandum strategy (§6.3). The idea is that physical and mental properties are causally responsible for different effects. For Dretske, the (triggering) physical properties are responsible for bodily motions, while the (structuring) mental properties are responsible for behavior.

Another version of this strategy begins with a point also made in §6.3, namely that to question a property’s causal relevance is really to question its relevance to some property of the effect. The form of our central causal question, that is, is whether a mental cause qua F causes a behavioral effect qua G. Now when F is an intentional mental property, what G is the object of our question? One possibility is that it is a behavioral property that, like the mental property, is itself “broad” (see, e.g., Enç 1995).

Consider a simple example: Suppose Lilian believes that a glass in front of her contains water, and this belief (together with her desires) causes her to reach for the glass. Her behavior is an instance of trying to get water, and it’s the instantiation of this property (and not, say, the property of being a certain kind of bodily motion) that we’re wondering about when we ask whether the intentional property of her belief is causally relevant. (If our interest lay solely in explaining a particular bodily motion, we would rest content with a non-psychological, purely physiological explanation.) But now the answer seems straightforward. For what makes Lilian’s behavior a trying for water is that it’s caused by a belief whose content concerns water. Once we realize that the behavioral property of the effect is itself broad, its connection to the intentional mental property seems clear.

This is not to say that the physical properties of Lilian’s belief do no work: it’s just that they are responsible for a different property of the effect, for instance, the property of being a forward arm-movement. The intentional properties of her belief are relevant to the effect qua (broad) behavior; the physical properties are relevant to the effect qua (narrow) bodily motion. And as we noted earlier (§6.3), such a solution can be employed in response to the Exclusion Problem as well. If a mental property and its physical realizer are relevant to different properties of the effect, they need not compete causally.

Because it promises to solve two outstanding problems of mental causation, this approach is potentially quite powerful. (For discussion, see Fodor 1991; Burge 1995.) One question to raise here, however, is whether the fact that some behavior can be described broadly makes the intentional mental property of its cause relevant. The undeniable conceptual connections between mental and behavioral descriptions might point to a kind of explanatory relevance, but it’s a further question whether causal connections grounding these explanations involve broad properties. Those motivated by the original epiphenomenalist arguments will worry that narrow, physical properties are really doing all the work here: the apparent relevance of the broad properties is an illusion created by the way we, in describing and explaining behavior, conceptualize both cause and effect (see Owens 1993). This point leads to a fourth, related response to the problem.

Some theorists would challenge the distinction—implicit in the foregoing discussion—between explanation and causation. Our concept of causality, they would insist, is bound up with the concept of explanation: causally relevant properties are those that figure in our best causal explanations (Segal and Sober 1991; Wilson 1992; Burge 1993; Raymont 2001; §6.3). We find out what causal relations amount to by starting with clear cases of causal explanation. Given that we (and the cognitive scientists) routinely explain physical events by citing mental causes (and mental events by invoking physical causes), questioning whether real causal relations answer to these explanations is to succumb to the kind of metaphysical hubris that gives metaphysics a bad name.

This appeal to explanatory practice has the potential to answer in one fell swoop all four of the property-based problems we’ve considered.

Doubtless our understanding of the notions of causality and causal relevance depends importantly on our grasp of causal explanations. But there are at least two areas of concern about the explanatory strategy (compare Kim 1998, pp. 60–7). First, you might wonder whether the strategy addresses the right question. Earlier, we pointed out that the central question of mental causation is not so much whether mental properties are causally relevant but how they could be, given some alleged feature of mental properties (in the case at issue here, the feature is their being relational properties). The explanatory strategy would at best seem to be addressing only the “whether” question, not the “how” question. Second, even when restricted to the “whether” question, the strategy rests on a conflation of what appears to be an epistemological notion (explanation) with metaphysical notions (causation and causal relevance). A full evaluation of the view would thus require a deeper look into how the two are related.

We have been treating the problem of mental causation as though it were a problem in applied metaphysics. Perhaps this approach is wrong-headed. Perhaps the problem really falls under the purview of the philosophy of science. What if we began with a look at actual scientific practice (as suggested in §§6.3, 7.5) and determined what exactly science requires for acceptable causal explanation? An examination of established special sciences reveals that the very features (multiple realizability, higher-level and “broad” properties, for instance) that metaphysically inclined philosophers regard as posing apparently insuperable difficulties for mental causation are routinely invoked in causal explanations in those sciences. This suggests that, rather than let a priori conceptions of causation (or properties, or causal powers) lead us to regard mental causation with suspicion, we should reason in the other direction: revise our conception of causation to fit our actual scientific beliefs and practices. If the metaphysicians were right about causation, no science would be possible beyond basic physics (biological properties, for instance, would lack causal efficacy).

This is one way to go. Another way is to take a step backward and ask which features of our conception of the mental, features we commonly take for granted, might be the source of our difficulties. Eliminativists aside, all parties evidently agree that “realism about the mental” requires that mental predicates figuring in causal accounts of behavior designate distinctively mental properties. If we aim to honor psychology (and the other special sciences), our job is to show how these properties could be causally relevant to physical goings-on. Suppose, in contrast, that you took the goal to be, not the preservation of mental properties, but the preservation of mental truths. In that case we would seek an account of the mind that provides plausible truthmakers for psychological and psycho-physical claims, including claims concerning mental causation.

One possibility is that truthmakers for psychological truths include irreducibly mental properties. This is not the only possibility, however. Another is that psychological assertions are made true by physical states and properties, states and properties answering to predicates belonging to physics and chemistry. A view of this kind (which is close to Davidson’s as spelled out in §5 and to the identity solutions discussed in §6.5) would endeavor to resolve the problem of mental causation, not by tinkering with the causal concept, but by rejecting the idea that mental and physical properties are distinct kinds of property. All parties agree that mental predicates and descriptions differ from physical predicates and descriptions. Application conditions for mental terms and physical terms diverge in ways that preclude definitional reduction of the one to the other. Perhaps it is a mistake, however, to move from this linguistic fact to a substantive ontological thesis: mental and physical predicates designate properties belonging to distinct families of properties.

Whether anything like this could be made to work is an open question. To the extent that you regard the current state of play as unsatisfying, however, it is perhaps a question worth pursuing.

  • Allen, C., 1995, “It Isn’t What You Think: A New Idea About Intentional Causation”, Noûs , 29: 115–26.
  • Alward, P., 2008, “Mopes, Dopes, and Tropes: A Critique of the Trope Solution to the Problem of Mental Causation”, Dialogue , 47: 53–64.
  • Anscombe, G. E. M. and P. T. Geach (trans. and eds.), 1954, Descartes: Philosophical Writings , Indianapolis: Bobbs–Merrill Company.
  • Antony, L. M., 1991, “The Causal Relevance of the Mental: More on the Mattering of Minds”, Mind & Language , 6: 295–327.
  • ––––, 1995, “I’m a Mother, I Worry”, Philosophical Issues , 6: 160–6.
  • ––––, 1999, “Multiple Realizability, Projectibility, and the Reality of Mental Properties”, Philosophical Topics , 26: 1–24.
  • Antony, L. M. and J. Levine, 1997, “Reduction With Autonomy”, Philosophical Perspectives , 11: 83–105.
  • Armstrong, D. M., 1968/1993, A Materialist Theory of the Mind , Revised Edition, London: Routledge.
  • ––––, 1978, A Theory of Universals: Universals and Scientific Realism, Volume II , Cambridge: Cambridge University Press.
  • ––––, 1989, Universals: An Opinionated Introduction , Boulder: Westview Press.
  • Audi, P., 2011, “Primitive Causal Relations and the Pairing Problem”, Ratio , 24: 1–16.
  • ––––, 2012, “Properties, Powers, and the Subset Account of Realization”, Philosophy and Phenomenological Research , 84: 654–74.
  • Averill, E. and B. Keating, 1981, “Does Interactionism Violate a Law of Classical Physics?”, Mind , 90: 102–7.
  • Bailey, A., J. Rasmussen, and L. Van Horn, 2011, “No pairing problem”, Philosophical Studies , 154: 349–60.
  • Baker, L. R., 1991, “Dretske on the Explanatory Role of Belief”, Philosophical Studies , 63: 99–111.
  • ––––, 1993, “Metaphysics and Mental Causation”, in Heil and Mele 1993, pp. 75–95.
  • Bedau, M. A. and P. Humphreys (eds.), 2008, Emergence: Contemporary Readings in Philosophy and Science , Cambridge, MA: MIT Press.
  • Bennett, K., 2003, “Why the Exclusion Problem Seems Intractable, and How, Just Maybe, to Tract It”, Noûs , 37: 471–97.
  • ––––, 2008, “Exclusion Again”, in Hohwy and Kallestrup, pp. 280–305.
  • Bernstein, S., 2016, “Overdetermination Underdetermined”, Erkenntnis , 81: 17–40.
  • Blackburn, S., 1990, “Hume and Thick Connexions”, Philosophy and Phenomenological Research , 50, Supplement: 237–50.
  • Block, N., 1990, “Can the Mind Change the World?”, in G. Boolos (ed.), Meaning and Method: Essays in Honor of Hilary Putnam , Cambridge: Cambridge University Press, pp. 137–70.
  • ––––, 1997, “Anti-Reductionism Slaps Back”, Philosophical Perspectives , 11: 107–32.
  • ––––, 2003, “Do Causal Powers Drain Away?”, Philosophy and Phenomenological Research , 67: 133–50.
  • Bontly, T. D., 2001, “The Supervenience Argument Generalizes”, Philosophical Studies , 109: 75–96.
  • Braun, D., 1991, “Content, Causation, and Cognitive Science”, Australasian Journal of Philosophy , 69: 375–89.
  • ––––, 1995, “Causally Relevant Properties”, Philosophical Perspectives , 9: 447–75.
  • Broad, C. D., 1925, The Mind and its Place in Nature , London: Routledge & Kegan Paul.
  • Burge, T., 1993, “Mind-Body Causation and Explanatory Practice”, in Heil and Mele 1993, pp. 97–120.
  • ––––, 1995, “Reply: Intentional Properties and Causation”, in Macdonald and Macdonald 1995b, pp. 226–35.
  • Campbell, K., 1984, Body and Mind , Second Edition, Notre Dame: University of Notre Dame Press.
  • Campbell, N., 1997, “The Standard Objection to Anomalous Monism”, Australasian Journal of Philosophy , 75: 373–82.
  • Carey, B., 2011, “Overdetermination and the Exclusion Problem”, Australasian Journal of Philosophy , 89: 251–62.
  • Caston, V., 1997, “Epiphenomenalisms, Ancient and Modern”, Philosophical Review , 106: 309–63.
  • Clapp, L., 2001, “Disjunctive Properties: Multiple Realizations”, Journal of Philosophy , 98: 111–36.
  • Clayton, P. and P. Davies (eds.), 2006, The Re-Emergence of Emergence: The Emergentist Hypothesis from Science to Religion , Oxford: Oxford University Press.
  • Cottingham, J., R. Stoothoff, D. Murdoch, and A. Kenny (trans. and eds.), 1991, The Philosophical Writings of Descartes, Volume III: The Correspondence , Cambridge: Cambridge University Press.
  • Crane, T., 1991, “Why Indeed? Papineau on Supervenience”, Analysis , 51: 32–7.
  • ––––, 1992, “Mental Causation and Mental Reality”, Proceedings of the Aristotelian Society , 92: 185–202.
  • ––––, 1995, “The Mental Causation Debate”, Proceedings of the Aristotelian Society , Supplementary Vol. 69: 211–36.
  • ––––, 1999, “Mind-Body Problem”, in R. A. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences , Cambridge, MA: MIT Press, pp. 546–8.
  • ––––, 2001, Elements of Mind: An Introduction to the Philosophy of Mind , Oxford: Oxford University Press.
  • ––––, 2008, “Causation and Determinable Properties: On the Efficacy of Colour, Shape, and Size”, in Hohwy and Kallestrup 2008, pp. 176–95.
  • Crane, T. and D. H. Mellor, 1990, “There is No Question of Physicalism”, Mind , 99: 185–206.
  • Crisp, T. M. and T. A. Warfield, 2001, “Kim’s Master Argument”, Noûs , 35: 304–16.
  • Dardis, A., 1993, “Sunburn: Independence Conditions on Causal Relevance”, Philosophy and Phenomenological Research , 53: 577–98.
  • Davidson, D., 1963, “Actions, Reasons, and Causes”, Journal of Philosophy , 60: 685–700. Reprinted in Davidson 1980, pp. 3–19.
  • ––––, 1970, “Mental Events”, in L. Foster and J. W. Swanson (eds.), Experience and Theory , Amherst, MA: University of Massachusetts Press, pp. 79–101. Reprinted in Davidson 1980, pp. 207–25.
  • ––––, 1980, Essays on Actions and Events , Oxford: Clarendon Press.
  • ––––, 1993, “Thinking Causes”, in Heil and Mele 1993, pp. 3–17.
  • Davies, P. C. W., 2006, “The Physics of Downward Causation”, in Clayton and Davies 2006, pp. 35–51.
  • Dennett, D. C., 1973, “Mechanism and Responsibility”, in T. Honderich (ed.), Essays on Freedom of Action , London: Routledge & Kegan Paul, pp. 159–84. Reprinted in D. C. Dennett, 1981, Brainstorms: Philosophical Essays on Mind and Psychology , Cambridge, MA: MIT Press, pp. 233–55.
  • ––––, 1991, Consciousness Explained , Boston: Little, Brown, and Co.
  • Descartes, R., 1642/1996, Meditations on First Philosophy, with Selections from the Objections and Replies , trans. and ed. J. Cottingham, Cambridge: Cambridge University Press.
  • D’Oro, G. (ed.), 2013, Reasons and Causes: Causalism and Non-Causalism in the Philosophy of Action , Basingstoke: Palgrave Macmillan.
  • Dretske, F., 1988, Explaining Behavior: Reasons in a World of Causes , Cambridge, MA: MIT Press.
  • ––––, 1989, “Reasons and Causes”, Philosophical Perspectives , 3: 1–15.
  • ––––, 1991, “Dretske’s Replies”, in McLaughlin 1991, pp. 180–221.
  • ––––, 1993, “Mental Events as Structuring Causes of Behavior”, in Heil and Mele 1993, pp. 121–36.
  • ––––, 1998, “Minds, Machines, and Money: What Really Explains Behavior”, in J. Bransen and S. E. Cuypers (eds.), Human Action, Deliberation and Causation , Dordrecht: Kluwer Academic Publishers, pp. 157–73.
  • Ehring, D., 1996, “Mental Causation, Determinables and Property Instances”, Noûs , 30: 461–80.
  • ––––, 1997, Causation and Persistence: A Theory of Causation , New York: Oxford University Press.
  • Enc, B., 1995, “Units of Behavior”, Philosophy of Science , 62: 523–42.
  • Fodor, J. A., 1980, “Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology”, Behavioral and Brain Sciences , 3: 63–73. Reprinted in J. A. Fodor, 1981, Representations: Philosophical Essays on the Foundations of Cognitive Science , Cambridge, MA: MIT Press, pp. 225–53.
  • ––––, 1981, “The Mind–Body Problem”, Scientific American , 244: 114–23.
  • ––––, 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind , Cambridge, MA: MIT Press.
  • ––––, 1989, “Making Mind Matter More”, Philosophical Topics , 17: 59–79. Reprinted in J. A. Fodor, 1990, A Theory of Content and Other Essays , Cambridge, MA: MIT Press, pp. 137–59.
  • ––––, 1991, “A Modal Argument for Narrow Content”, Journal of Philosophy , 88: 5–26.
  • ––––, 1995, The Elm and the Expert , Cambridge, MA: MIT Press.
  • ––––, 1997, “Special Sciences: Still Autonomous After All These Years”, Philosophical Perspectives , 11: 149–63.
  • Foster, J., 1991, The Immaterial Self: A Defence of the Cartesian Dualist Conception of the Mind , London: Routledge.
  • Funkhouser, E., 2002, “Three Varieties of Causal Overdetermination”, Pacific Philosophical Quarterly , 83: 335–51.
  • ––––, 2006, “The Determinable–Determinate Relation”, Noûs , 40: 548–69.
  • Garber, D., 1983, “Understanding Interaction: What Descartes Should Have Told Elisabeth”, Southern Journal of Philosophy , 21: 15–32.
  • Garrett, B. J., 1998, “Pluralism, Causation and Overdetermination”, Synthese , 116: 355–78.
  • ––––, 1999, “Davidson on Causal Relevance”, Ratio (new series) , 12: 14–33.
  • Gibb, S. C., 2004, “The Problem of Mental Causation and the Nature of Properties”, Australasian Journal of Philosophy , 82: 464–76.
  • ––––, 2006, “Why Davidson is not a Property Epiphenomenalist”, International Journal of Philosophical Studies , 14: 407–22.
  • ––––, 2010, “Closure Principles and the Laws of Conservation of Energy and Momentum”, Dialectica , 64: 363–84.
  • ––––, 2013, “Mental Causation and Double Prevention”, in S. C. Gibb, E. J. Lowe, and R. D. Ingthorsson (eds.), Mental Causation and Ontology , Oxford: Oxford University Press, pp. 193–214.
  • ––––, 2015, “The Causal Closure Principle”, Philosophical Quarterly , 65: 626–47.
  • Gibb, S. C., R. F. Hendry and T. Lancaster (eds.), 2019, The Routledge Handbook of Emergence , London: Routledge.
  • Gibbons, J., 2006, “Mental Causation without Downward Causation”, Philosophical Review , 115: 79–103.
  • Gillett, C. and B. Loewer (eds.), 2001, Physicalism and Its Discontents , Cambridge: Cambridge University Press.
  • Gillett, C. and B. Rives, 2001, “Does the Argument from Realization Generalize? Responses to Kim”, Southern Journal of Philosophy , 39: 79–98.
  • Ginet, C., 1990, On Action , Cambridge: Cambridge University Press.
  • Hardcastle, V. G., 1998, “On the Matter of Minds and Mental Causation”, Philosophy and Phenomenological Research , 58: 1–25.
  • Hart, W. D., 1988, The Engines of the Soul , Cambridge: Cambridge University Press.
  • Hasker, W., 1999, The Emergent Self , Ithaca: Cornell University Press.
  • Heil, J., 1992, The Nature of True Minds , Cambridge: Cambridge University Press.
  • ––––, 1999, “Multiple Realizability”, American Philosophical Quarterly , 36: 189–208.
  • ––––, 2003, “Multiply Realized Properties”, in Walter and Heckmann 2003, pp. 11–30.
  • ––––, 2009, “Anomalous Monism”, in H. Dyke (ed.), From Truth to Reality: New Essays in Metaphysics , London: Routledge, pp. 85–98.
  • ––––, 2011, “Powers and the Realization Relation”, The Monist , 94: 35–53.
  • ––––, 2012, Philosophy of Mind , Third Edition, London: Routledge.
  • Heil, J. and A. Mele (eds.), 1993, Mental Causation , Oxford: Clarendon Press.
  • Heil, J. and D. Robb, 2003, “Mental Properties”, American Philosophical Quarterly , 40: 175–96.
  • Hendry, R. F., 2006, “Is there Downward Causation in Chemistry?”, in D. Baird, E. Scerri, and L. McIntyre (eds.), Philosophy of Chemistry: Synthesis of a New Discipline; Boston Studies in the Philosophy and History of Science , 242: 173–89.
  • Hoffman, J., and G. Rosenkrantz, 1991, “Are Souls Unintelligible?”, Philosophical Perspectives , 5: 183–212.
  • Hofmann, F., and P. Schulte, 2014, “The Structuring Causes of Behavior: Has Dretske Saved Mental Causation?”, Acta Analytica , 29: 267–84.
  • Hohwy, J. and J. Kallestrup (eds.), 2008, Being Reduced: New Essays on Reduction, Explanation, and Causation , Oxford: Oxford University Press.
  • Honderich, T., 1982, “The Argument for Anomalous Monism”, Analysis , 42: 59–64.
  • Horgan, T., 1989, “Mental Quausation”, Philosophical Perspectives , 3: 47–76.
  • ––––, 1991, “Actions, Reasons, and the Explanatory Role of Content”, in McLaughlin 1991, pp. 73–101.
  • ––––, 2007, “Mental Causation and the Agent-Exclusion Problem”, Erkenntnis , 67: 183–200.
  • Jackson, F., 1995, “Essentialism, Mental Properties and Causation”, Proceedings of the Aristotelian Society , 95: 253–68.
  • ––––, 1996, “Mental Causation”, Mind , 105: 377–413.
  • Jackson, F. and P. Pettit, 1988, “Functionalism and Broad Content”, Mind , 97: 381–400.
  • ––––, 1990, “Program Explanation: A General Perspective”, Analysis , 50: 107–17.
  • Kim, J., 1973, “Causation, Nomic Subsumption, and the Concept of Event”, Journal of Philosophy , 70: 217–36. Reprinted in Kim 1993a, pp. 3–21.
  • ––––, 1982, “Psychophysical Supervenience”, Philosophical Studies , 41: 51–70. Reprinted in Kim 1993a, pp. 175–93.
  • ––––, 1984, “Epiphenomenal and Supervenient Causation”, Midwest Studies in Philosophy , 9: 257–70. Reprinted in Kim 1993a, pp. 92–108.
  • ––––, 1989, “Mechanism, Purpose, and Explanatory Exclusion”, Philosophical Perspectives , 3: 77–108. Reprinted in Kim 1993a, pp. 237–64.
  • ––––, 1991, “Dretske on How Reasons Explain Behavior”, in McLaughlin 1991, pp. 52–72. Reprinted in Kim 1993a, pp. 285–308.
  • ––––, 1992, “Multiple Realization and the Metaphysics of Reduction”, Philosophy and Phenomenological Research , 52:1–26. Reprinted in Kim 1993a, pp. 309–35.
  • ––––, 1993a, Supervenience and Mind: Selected Philosophical Essays , Cambridge: Cambridge University Press.
  • ––––, 1993b, “Can Supervenience and ‘Non-Strict Laws’ Save Anomalous Monism?”, in Heil and Mele 1993, pp. 19–26.
  • ––––, 1993c, “The Non-Reductivist’s Troubles with Mental Causation”, in Heil and Mele 1993, pp. 189–210. Reprinted in Kim 1993a, pp. 336–57.
  • ––––, 1998, Mind in a Physical World , Cambridge, MA: MIT Press.
  • ––––, 2005, Physicalism, or Something Near Enough , Princeton: Princeton University Press.
  • ––––, 2007, “Causation and Mental Causation”, in McLaughlin and Cohen 2007, pp. 227–42.
  • ––––, 2010, “Two Concepts of Realization, Mental Causation, and Physicalism”, in J. Kim, Essays in the Metaphysics of Mind , Oxford: Oxford University Press, pp. 263–81.
  • King, P., 2007, “Why Isn’t the Mind-Body Problem Medieval?”, in H. Lagerlund (ed.), Forming the Mind: Essays on the Internal Senses and the Mind/Body Problem from Avicenna to the Medical Enlightenment , Dordrecht: Springer, pp. 187–205.
  • Koksvik, O., 2007, “Conservation of Energy is Relevant to Physicalism”, Dialectica , 61: 573–82.
  • Kroedel, T., 2019, Mental Causation: A Counterfactual Theory , Cambridge: Cambridge University Press.
  • Kroedel, T. and M. Schulz, 2016, “Grounding mental causation”, Synthese , 193: 1909–23.
  • Leiter, B. and A. Miller, 1994, “Mind Doesn’t Matter Yet”, Australasian Journal of Philosophy , 72: 220–8.
  • LePore, E., and B. Loewer, 1987, “Mind Matters”, Journal of Philosophy , 84: 630–42.
  • ––––, 1989, “More on Making Mind Matter”, Philosophical Topics , 17: 175–91.
  • Levine, J., 2001, Purple Haze: The Puzzle of Consciousness , New York: Oxford University Press.
  • Lewis, D., 1966, “An Argument for the Identity Theory”, Journal of Philosophy , 63: 17–25.
  • ––––, 1994, “Reduction of Mind”, in S. Guttenplan (ed.), A Companion to the Philosophy of Mind , Oxford: Blackwell, pp. 412–31.
  • Libet B., 1985, “Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action”, Behavioral and Brain Sciences , 8: 529–39.
  • ––––, 2001, “Consciousness, Free Action and the Brain”, Journal of Consciousness Studies , 8: 59–65.
  • ––––, 2004, Mind Time: The Temporal Factor in Consciousness , Cambridge, MA: Harvard University Press.
  • Loewer, B., 2007, “Mental Causation, or Something Near Enough”, in McLaughlin and Cohen 2007, pp. 243–64.
  • Lowe, E. J., 1992, “The Problem of Psychophysical Causation”, Australasian Journal of Philosophy , 70: 263–76.
  • ––––, 1996, Subjects of Experience , Cambridge: Cambridge University Press.
  • ––––, 2000, “Causal Closure Principles and Emergentism”, Philosophy , 75: 571–85.
  • ––––, 2003, “Physical Causal Closure and the Invisibility of Mental Causation”, in Walter and Heckmann 2003, pp. 137–54.
  • ––––, 2006, “Non-Cartesian Substance Dualism and the Problem of Mental Causation”, Erkenntnis , 65: 5–23.
  • Macdonald, C., and G. Macdonald, 1986, “Mental Causes and Explanation of Action”, Philosophical Quarterly , 36: 145–58.
  • ––––, 1995a, “How to Be Psychologically Relevant”, in Macdonald and Macdonald 1995b, pp. 60–77.
  • –––– (eds.), 1995b, Philosophy of Psychology: Debates on Psychological Explanation , Vol. 1, Oxford: Blackwell.
  • ––––, 2006, “The Metaphysics of Mental Causation”, Journal of Philosophy , 103: 539–76.
  • –––– (eds.), 2010, Emergence in Mind , Oxford: Oxford University Press.
  • Mackie, J. L., 1974, The Cement of the Universe: A Study of Causation , Oxford: Clarendon Press.
  • ––––, 1979, “Mind, Brain, and Causation”, Midwest Studies in Philosophy , 4: 19–29.
  • McLaughlin, B. P., 1989, “Type Epiphenomenalism, Type Dualism, and the Causal Priority of the Physical”, Philosophical Perspectives , 3: 109–35.
  • –––– (ed.), 1991, Dretske and His Critics , Oxford: Blackwell.
  • ––––, 1992, “The Rise and Fall of British Emergentism”, in A. Beckermann, H. Flohr, and J. Kim (eds.), Emergence or Reduction? Essays on the Prospects of Nonreductive Physicalism , New York: de Gruyter, pp. 49–93.
  • ––––, 1993, “On Davidson’s Response to the Charge of Epiphenomenalism”, in Heil and Mele 1993, pp. 27–40.
  • ––––, 2006, “Is Role-Functionalism Committed to Epiphenomenalism?”, Journal of Consciousness Studies , 13: 39–66.
  • ––––, 2007, “Mental Causation and Shoemaker-Realization”, Erkenntnis , 67: 149–72.
  • McLaughlin, B. P. and J. Cohen (eds.), 2007, Contemporary Debates in Philosophy of Mind , Oxford: Blackwell.
  • Malcolm, N., 1968, “The Conceivability of Mechanism”, Philosophical Review , 77: 45–72.
  • Marcus, E., 2001, “Mental Causation: Unnaturalized but not Unnatural”, Philosophy and Phenomenological Research , 63: 57–83.
  • ––––, 2005, “Mental Causation in a Physical World”, Philosophical Studies , 122: 27–50.
  • Marras, A., 1998, “Kim’s Principle of Explanatory Exclusion”, Australasian Journal of Philosophy , 76: 439–51.
  • ––––, 2003, “Methodological and Ontological Aspects of the Mental Causation Problem”, in Walter and Heckmann 2003, pp. 243–64.
  • Matson, W. I., 1966, “Why Isn’t the Mind-Body Problem Ancient?”, in P. K. Feyerabend and G. Maxwell (eds.), Mind, Matter, and Method: Essays in Philosophy and Science in Honor of Herbert Feigl , Minneapolis: University of Minnesota Press, pp. 92–102.
  • Maurin, A., 2008, “Does Ontology Matter?”, in S. Gozzano and F. Orilia (eds.), Tropes, Universals and the Philosophy of Mind , Frankfurt: Ontos Verlag, pp. 31–55.
  • Melden, A. I., 1961, Free Action , London: Routledge & Kegan Paul.
  • Mele, A. R., 1991, “Dretske’s Intricate Behavior”, Philosophical Papers , 20: 1–10.
  • ––––, 1992, Springs of Action , New York: Oxford University Press.
  • ––––, 2014, Free: Why Science Hasn’t Disproved Free Will , New York: Oxford University Press.
  • Melnyk, A., 2003, A Physicalist Manifesto: Thoroughly Modern Materialism , Cambridge: Cambridge University Press. A modified version of the chapter cited in the text is in Walter and Heckmann 2003, pp. 155–72.
  • Menzies, P., 2003, “The Causal Efficacy of Mental States”, in Walter and Heckmann 2003, pp. 195–223.
  • ––––, 2007, “Mental Causation on the Program Model”, in G. Brennan, R. Goodin, F. Jackson, and M. Smith (eds.), Common Minds: Themes from the Philosophy of Philip Pettit , Oxford: Oxford University Press, pp. 28–54.
  • Mills, E., 1996, “Interactionism and Overdetermination”, American Philosophical Quarterly , 33: 105–17.
  • Montero, B., 2003, “Varieties of Causal Closure”, in Walter and Heckmann 2003, pp. 173–87.
  • ––––, 2006, “What Does the Conservation of Energy Have to Do with Physicalism?”, Dialectica , 60: 383–96.
  • Moore, D. and N. Campbell, 2010, “Functional Reduction and Mental Causation”, Acta Analytica , 25: 435–46.
  • Ney, A., 2007, “Can an Appeal to Constitution Solve the Exclusion Problem?”, Pacific Philosophical Quarterly , 88: 486–506.
  • ––––, 2010, “Convergence on the Problem of Mental Causation: Shoemaker’s Strategy for (Nonreductive) Physicalists”, Philosophical Issues , 20: 438–45.
  • Noordhof, P., 1998, “Do Tropes Resolve the Problem of Mental Causation?”, Philosophical Quarterly , 48: 221–6.
  • ––––, 1999, “Micro-Based Properties and the Supervenience Argument: A Response to Kim”, Proceedings of the Aristotelian Society , 99: 109–14.
  • O’Connor, T. and J. R. Churchill, 2010, “Is Non-reductive Physicalism Viable within a Causal Powers Metaphysic?”, in Macdonald and Macdonald 2010, pp. 43–60.
  • Oddie, G., 1982, “Armstrong on the Eleatic Principle and Abstract Entities”, Philosophical Studies , 41: 285–95.
  • Owens, J., 1993, “Content, Causation, and Psychophysical Supervenience”, Philosophy of Science , 60: 242–61.
  • Paoletti, M. P. and Orilia, F. (eds.), 2017, Philosophical and Scientific Perspectives on Downward Causation , London: Routledge.
  • Papineau, D., 1991, “The Reason Why: Response to Crane”, Analysis , 51: 37–40.
  • ––––, 1993, Philosophical Naturalism , Oxford: Blackwell.
  • ––––, 2000, “The Rise of Physicalism”, in M. W. F. Stone and J. Wolff (eds.), The Proper Ambition of Science , New York: Routledge, pp. 174–208. Versions of this paper also appear in Gillett and Loewer 2001, pp. 3–36, and in Ch. 1 and the appendix to D. Papineau, 2002, Thinking About Consciousness , Oxford: Oxford University Press.
  • Patterson, S., 2005, “Epiphenomenalism and Occasionalism: Problems of Mental Causation, Old and New”, History of Philosophy Quarterly , 22: 239–57.
  • Pereboom, D., 2002, “Robust Nonreductive Materialism”, Journal of Philosophy , 99: 499–531.
  • Raymont, P., 2001, “Are Mental Properties Causally Relevant?”, Dialogue , 40: 509–28.
  • ––––, 2003, “Kim on Overdetermination, Exclusion and Nonreductive Physicalism”, in Walter and Heckmann 2003, pp. 225–42.
  • Richardson, R. C., 1982, “The ‘Scandal’ of Cartesian Interactionism”, Mind , 91: 20–37.
  • Robb, D., 1997, “The Properties of Mental Causation”, Philosophical Quarterly , 47: 178–94.
  • ––––, 2013, “The Identity Theory as a Solution to the Exclusion Problem”, in S.C. Gibb, E.J. Lowe, and V. Ingthorsson (eds.), Mental Causation and Ontology , Oxford: Oxford University Press, pp. 215–32.
  • Ross, D. and D. Spurrett, 2004, “What to say to a skeptical metaphysician: A defense manual for cognitive and behavioral scientists”, Behavioral and Brain Sciences , 27: 603–27.
  • Roth, M. and R. Cummins, 2014, “Two tales of functional explanation”, Philosophical Psychology , 27: 773–88.
  • Rupert, R. D., 2006, “Functionalism, Mental Causation, and the Problem of Metaphysically Necessary Effects”, Noûs , 40: 256–83.
  • Russell, B., 1912, “On the Notion of Cause”, Proceedings of the Aristotelian Society , 13: 1–26.
  • Schiffer, S., 1987, Remnants of Meaning , Cambridge, MA: MIT Press.
  • Schlosser, M. E., 2009, “Non-Reductive Physicalism, Mental Causation and the Nature of Actions”, in A. Hieke and H. Leitgeb (eds.), Reduction: Between the Mind and the Brain , Frankfurt: Ontos Verlag, pp. 73–89.
  • Segal, G. M. A., 2009, “The Causal Inefficacy of Content”, Mind & Language , 24: 80–102.
  • Segal, G. M. A. and E. Sober, 1991, “The Causal Efficacy of Content”, Philosophical Studies , 63: 1–30.
  • Sehon, S., 2005, Teleological Realism: Mind, Agency, and Explanation , Cambridge, MA: MIT Press.
  • Shapiro, L. A., 2010, “Lessons from Causal Exclusion”, Philosophy and Phenomenological Research , 81: 594–604.
  • Shapiro, L. and E. Sober, 2007, “Epiphenomenalism: The Dos and the Don’ts”, in G. Wolters and P. Machamer (eds.), Studies in Causality: Historical and Contemporary , Pittsburgh: University of Pittsburgh Press, pp. 235–64.
  • Shoemaker, S., 1980, “Causality and Properties”, in P. van Inwagen (ed.), Time and Cause: Essays Presented to Richard Taylor , Dordrecht: D. Reidel Publishing, pp. 109–35. Reprinted in Shoemaker 2003, pp. 206–33.
  • ––––, 1998, “Causal and Metaphysical Necessity”, Pacific Philosophical Quarterly , 79: 59–77. Reprinted in Shoemaker 2003, pp. 407–26.
  • ––––, 2001, “Realization and Mental Causation”, in Gillett and Loewer 2001, pp. 74–98. Reprinted in Shoemaker 2003, pp. 427–51.
  • ––––, 2003, Identity, Cause, and Mind , Expanded Edition, Oxford: Clarendon Press.
  • ––––, 2007, Physical Realization , Oxford: Oxford University Press.
  • Sider, T., 2003, “What’s so Bad about Overdetermination?”, Philosophy and Phenomenological Research , 67: 719–26.
  • Smith, B. C., 1990, “Putting Dretske to Work”, in P. Hanson (ed.), Information, Language, and Cognition , Vancouver: University of British Columbia Press, pp. 125–40.
  • Sosa, E., 1984, “Mind-Body Interaction and Supervenient Causation”, Midwest Studies in Philosophy , 9: 271–81.
  • ––––, 1993, “Davidson’s Thinking Causes”, in Heil and Mele 1993, pp. 41–50.
  • Stapp, H., 2005, “Quantum Interactive Dualism: An Alternative to Materialism”, Journal of Consciousness Studies , 12: 43–58.
  • Stich, S. P., 1978, “Autonomous Psychology and the Belief-Desire Thesis”, Monist , 61: 573–91.
  • ––––, 1983, From Folk Psychology to Cognitive Science: The Case Against Belief , Cambridge, MA: MIT Press.
  • Stoutland, F., 1980, “Oblique Causation and Reasons for Action”, Synthese , 43: 351–67.
  • Strawson, P. F., 1962, “Freedom and Resentment”, Proceedings of the British Academy , 48: 1–25. Reprinted in J. M. Fischer and M. Ravizza (eds.), Perspectives on Moral Responsibility , Ithaca: Cornell University Press, pp. 45–66.
  • Sturgeon, S., 1998, “Physicalism and Overdetermination”, Mind , 107: 411–32.
  • Tanney, J., 2013, Rules, Reason, and Self-Knowledge , Cambridge, MA: Harvard University Press.
  • Thomasson, A., 1998, “A Nonreductivist Solution to Mental Causation”, Philosophical Studies , 89: 181–95.
  • Unger, P., 2006, All the Power in the World , Oxford: Oxford University Press.
  • Van Gulick, R., 1993, “Who’s in Charge Here? And Who’s Doing All the Work?”, in Heil and Mele 1993, pp. 233–56.
  • Walter, S., 2008, “The Supervenience Argument, Overdetermination, and Causal Drainage: Assessing Kim’s Master Argument”, Philosophical Psychology , 21: 673–96.
  • Walter, S. and H. Heckmann (eds.), 2003, Physicalism and Mental Causation: The Metaphysics of Mind and Action , Exeter: Imprint Academic.
  • Wegner, D. M., 2002, The Illusion of Conscious Will , Cambridge, MA: MIT Press.
  • ––––, 2004, “Précis of The Illusion of Conscious Will ”, Behavioral and Brain Sciences , 27: 649–59.
  • Whittle, A., 2007, “The Co-Instantiation Thesis”, Australasian Journal of Philosophy , 85: 61–79.
  • Wilson, J., 1999, “How Superduper Does a Physicalist Supervenience Need to Be?”, Philosophical Quarterly , 49: 33–52.
  • ––––, 2011, “Non-reductive Realization and the Powers-based Subset Strategy”, The Monist , 94: 121–54.
  • ––––, 2021, Metaphysical Emergence , Oxford: Oxford University Press.
  • Wilson, R. A., 1992, “Individualism, Causal Powers, and Explanation”, Philosophical Studies , 68: 103–39.
  • Woodward, J., 2008, “Mental Causation and Neural Mechanisms”, in Hohwy and Kallestrup 2008, pp. 218–62.
  • Worley, S., 1997, “Determination and Mental Causation”, Erkenntnis , 46: 281–304.
  • Yablo, S., 1992, “Mental Causation”, Philosophical Review , 101: 245–80.
  • ––––, 1997, “Wide Causation”, Philosophical Perspectives , 11: 251–81.
  • Zhong, L., 2014, “Sophisticated Exclusion and Sophisticated Causation”, Journal of Philosophy , 111: 341–60.


Acknowledgments

We are grateful to the editors for helpful advice on preparing this entry.

Copyright © 2023 by David Robb, John Heil, and Sophie Gibb


The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab , Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

The Mind-Body Problem 3.0

  • First Online: 03 December 2020


  • Marco J. Nathan

Part of the book series: Studies in Brain and Mind (SIBM, volume 17)


This essay identifies two shifts in the conceptual evolution of the mind-body problem since it was molded into its modern form. The “mind-body problem 1.0” corresponds to Descartes’ ontological question: what are minds and how are they related to bodies? The “mind-body problem 2.0” reflects the core issue underlying much discussion of brains and minds in the twentieth century: can mental states be reduced to neural states? While both issues are no longer central to scientific research, the philosophy of mind ain’t quite done yet. In an attempt to recast a classic discussion in a more contemporary guise, I present a “mind-body problem 3.0.” In a slogan, this can be expressed as the question: how should we pursue psychology in the age of neuroscience?


It is not trivial to find explicit statements of this assumption, partly because the mind-body problem is well-known and contemporary authors seldom bother to present it in full detail. Here are some representative quotes: “[T]he persuasive imagery of the Cartesian Theater [the idea of a centered locus of consciousness in the brain] keeps coming back to haunt us—laypeople and scientists alike—even after its ghostly dualism has been denounced and exorcized” (Dennett 1991 , p. 107). “The mind-body problem was posed in its modern form only in the seventeenth century, with the emergence of the conception of the physical world on which we are now all brought up” (Nagel 1995 , p. 97). “What exactly are the relations between the mental and the physical, and in particular how can there be causal relations between them? (…) This is the most famous problem that Descartes left us, and it is usually called the ‘mind-body problem”’ (Searle 2004 , p. 11).

Descartes’s conception of substance was strikingly nuanced (Rodriguez-Pereyra 2008 ).

To be sure, psychologists and philosophers had different agendas. To reflect this divergence, it is common to distinguish two strands of behaviorism (Fodor 1981). First, "philosophical" (also known as "logical" or "analytic") behaviorism is associated with a thesis about the nature of mind and the meaning of mental states. Second, "psychological" or "methodological" behaviorism emerged from an influential scientific methodology applied to psychology. For the sake of simplicity, I shall not distinguish between the two variants.

Again, I am unabashedly lumping together several variants of functionalism, such as Putnam's "psycho-functionalism" and Armstrong's "a priori functionalism" (Block 1978).

As Matteo Colombo has brought to my attention, the mind-body problem 1.0 could also be framed as a matter of reduction. On this reading, Descartes may be interpreted as providing a negative argument: minds cannot be reduced to bodies because they are altogether different substances. This is an effective strategy to bring Descartes into modern debates, finding some narrative continuity in the last four hundred years of philosophy of mind. Still, this operation should be understood from our contemporary perspective. From a historical standpoint, Descartes' target was not reduction. He was interested in ontological questions about the nature of minds and their interactions with bodies. Relatedly, eliminative materialists prefer to talk about "elimination" as opposed to "reduction." Yet, the former concept can be straightforwardly treated as a limiting case of the latter.

To be sure, Nagel’s own conception of reduction was subtler, and its proper interpretation remains a matter of controversy (Fazekas 2009 ; Klein 2009 ). Nevertheless, for present purposes I am less interested in Nagel’s actual views, and more in how his model of reduction was received and discussed within philosophy (Fodor 1974 ; Kitcher 2003 ).

An analogous, equally heated debate emerged in the philosophy of science. Putnam’s square-peg example was developed and extended to real-life scientific scenarios in biology (Kitcher 2003 ), psychology (Fodor 1974 ), and the social sciences (Garfinkel 1981 ). Post-positivist neo-reductionists disagreed. Authors such as Waters ( 1990 ), Sober ( 1999 , 2000 ), Rosenberg ( 2006 ), and Strevens ( 2008 ) stressed that, while micro-explanations are often unnecessarily complex or anti-economical, they do emphasize crucial details that are typically presupposed implicitly or taken for granted at the macro-level.

Chemero and Silberstein motivate their provocative claim as follows: "The two main debates in the philosophy of mind over the last few decades about the essence of mental states (they are physical, functional, phenomenal, etc.) and over mental content have run their course. Positions have hardened; objections are repeated; theoretical filigrees are attached. These relatively armchair discussions are being replaced by empirically oriented debates in philosophy of cognitive and neural sciences" (2008, p. 1).

"The scientific practices based on the two-level view ('functional/cognitive/computational' vs. 'neural/mechanistic/implementation') are being replaced by scientific practices based on the view that there are many levels of mechanistic organization. No one level has a monopoly on cognition proper. Instead, different levels are more or less cognitive depending on their specific properties. The different levels and the disciplines that study them are not autonomous from one another. Instead, the different disciplines contribute to the common enterprise of constructing multilevel mechanistic explanations of cognitive phenomena. In other words, there is no longer any meaningful distinction between cognitive psychology and the relevant portions of neuroscience—they are merging to form cognitive neuroscience" (Boone and Piccinini 2016b, p. 1510).

Armstrong, D. M. (1981). The nature of mind . Ithaca: Cornell University Press.


Baetu, T. M. (2015). The completeness of mechanistic explanation. Philosophy of Science, 82 , 775–786.


Batterman, R. W., & Rice, C. C. (2014). Minimal model explanations. Philosophy of Science, 81 , 349–376.

Bechtel, W., & Richardson, R. C. (2010). Discovering complexity: Decomposition and localization as strategies in scientific research (2nd ed.). Cambridge: MIT Press.


Bickle, J. (1998). Psychoneural reduction: The new wave . Cambridge, MA: MIT Press.

Bickle, J. (2003). Philosophy and neuroscience: A ruthlessly reductive account . Dordrecht: Kluwer.

Block, N. (1978). Troubles with functionalism. In C. Savage (Ed.), Perception and Cognition (pp. 261–325). Minneapolis: University of Minnesota Press.

Boone, W., & Piccinini, G. (2016a). Mechanistic abstraction. Philosophy of Science, 83 , 686–697.

Boone, W., & Piccinini, G. (2016b). The cognitive neuroscience revolution. Synthese, 193 , 1509–1534.

Burge, T. (2007). Foundations of mind . Oxford: Oxford University Press.

Burge, T. (2013). Modest dualism. In Cognition through understanding. Philosophical essays (Vol. 3, pp. 471–488). Oxford: Oxford University Press.


Chalmers, D. J. (1996). The conscious mind: In search for a fundamental theory . New York: Oxford University Press.

Chemero, A., & Silberstein, M. (2008). After the philosophy of mind: Replacing scholasticism with science. Philosophy of Science, 75 , 1–27.

Chirimuuta, M. (2014). Minimal models and canonical neural computations: The distinctness of computational explanation in neuroscience. Synthese, 191 , 127–153.

Chomsky, N. (2000). New horizons in the study of language and mind . Cambridge: Cambridge University Press.

Chomsky, N. (2002). On nature and language . Cambridge: Cambridge University Press.

Churchland, P. M. (1981). Eliminative materialism and the propositional attitudes. The Journal of Philosophy, 78 (2), 67–90.

Churchland, P. (1986). Neurophilosophy . Cambridge: MIT Press.

Craver, C. F. (2007). Explaining the brain: Mechanisms and the mosaic unity of neuroscience . New York: Oxford University Press.

Craver, C. F., & Kaplan, D. M. (2018). Are more details better? On the norms of completeness for mechanistic explanation. British Journal for the Philosophy of Science, 71 (1), 287–319.

Davidson, D. (1970). Mental events. In L. Foster & J. Swanson (Eds.), Experience and theory (pp. 79–101). London: Duckworth.

Del Pinal, G., & Nathan, M. J. (2013). There and up again: On the uses and misuses of neuroimaging in psychology. Cognitive Neuropsychology, 30 (4), 233–252.

Dennett, D. C. (1991). Consciousness explained . Boston: Little, Brown and Co.

Dupré, J. (2012). Processes of life: Essays in the philosophy of biology . New York: Oxford University Press.

Fazekas, P. (2009). Reconsidering the role of bridge laws in inter-theoretic relations. Erkenntnis, 71 , 303–322.

Fodor, J. A. (1968). Psychological explanation . Cambridge: MIT Press.

Fodor, J. A. (1974). Special sciences (or: The disunity of science as a working hypothesis). Synthese, 28 , 97–115.

Fodor, J. A. (1981). The mind-body problem. Scientific American, 244 , 114–123.

Garfinkel, A. (1981). Forms of explanation . New Haven: Yale University Press.

Griffiths, P., & Stotz, K. (2013). Genetics and philosophy: An introduction . Cambridge: Cambridge University Press.

Heil, J. (2013). Philosophy of mind: A contemporary introduction . New York: Routledge.

Hornsby, J. (1997). Simple mindedness: In defense of naive naturalism in the philosophy of mind . Cambridge: Harvard University Press.

Jackson, F. (1982). Epiphenomenal qualia. The Philosophical Quarterly, 32 , 127–136.

Jaworski, W. (2016). Structure and the metaphysics of mind: How hylomorphism solves the mind-body problem . Oxford: Oxford University Press.

Kim, J. (1999). Mind in a physical world . Cambridge: MIT Press.

Kim, J. (2011). Philosophy of mind . Boulder: Westview.

Kitcher, P. (2003). In Mendel’s mirror. Philosophical reflections on biology . New York: Oxford University Press.

Klein, C. (2009). Reduction without reductionism: A defense of Nagel on connectability. The Philosophical Quarterly, 59 (234), 39–53.

Koslicki, K. (2018). Form, matter, substance . Oxford: Oxford University Press.

Krickel, B., & Kohar, M. (this volume). Compare and contrast: How to assess the completeness of mechanistic explanation .

Levy, A. (2014). What was Hodgkin and Huxley’s achievement? British Journal for the Philosophy of Science, 65 , 469–492.

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information . New York: Freeman.

Nagel, E. (1961). The structure of science . New York: Harcourt Brace.

Nagel, T. (1995). Searle: Why we are not computers. In Other minds (pp. 96–110). New York: Oxford University Press.

Nathan, M. J. (2012). The varieties of molecular explanation. Philosophy of Science, 79 (2), 233–254.

Nathan, M. J. (under contract). Black boxes: How science turns ignorance into knowledge . New York: Oxford University Press.

Nathan, M. J., & Del Pinal, G. (2016). Mapping the mind: Bridge laws and the psycho-neural interface. Synthese, 193 (2), 637–657.

Place, U. T. (1956). Is consciousness a brain process? British Journal of Psychology, 47 , 44–50.

Putnam, H. (1965). Brains and behaviour. In R. Butler (Ed.), Analytical philosophy (Vol. 2, pp. 24–36). Oxford: Blackwell.

Putnam, H. (1967). Psychological predicates. In W. Capitan & D. Merrill (Eds.), Art, mind, and religion (pp. 37–48). Pittsburgh: University of Pittsburgh Press.

Putnam, H. (1975). Philosophy and our mental life. In Mind, language, and reality (pp. 291–303). New York: Cambridge University Press.

Rodriguez-Pereyra, G. (2008). Descartes’ substance dualism and his independence notion of substance. Journal of the History of Philosophy, 46 (1), 69–90.

Rosenberg, A. (2006). Darwinian reductionism: Or how to stop worrying and love molecular biology . Chicago: University of Chicago Press.

Ryle, G. (1949). The concept of mind . London: Hutchinson & Co.

Sarkar, S. (1998). Genetics and reductionism . Cambridge: Cambridge University Press.

Searle, J. R. (2004). Mind: A brief introduction . New York: Oxford University Press.

Smart, J. (1959). Sensations and brain processes. Philosophical Review, 68 , 141–156.

Sober, E. (1999). The multiple realizability argument against reductionism. Philosophy of Science, 66 , 542–564.

Sober, E. (2000). Philosophy of biology (2nd ed.). Boulder: Westview.

Strevens, M. (2008). Depth. An account of scientific explanation . Cambridge: Harvard University Press.

Waters, C. K. (1990). Why the anti-reductionist consensus won't survive: The case of classical Mendelian genetics. Proceedings of the Biennial Meeting of the Philosophy of Science Association , 125–139.

Yablo, S. (1992). Mental causation. Philosophical Review, 101 , 245–280.


Acknowledgments

The author is grateful to Bill Anderson, John Bickle, Fabrizio Calzavarini, Matteo Colombo, Guillermo Del Pinal, Carrie Figdor, Matteo Grasso, Philipp Haueis, Mika Smith, Marco Viola, and two reviewers for constructive comments on various versions of this essay, and to Stefano Mannone for designing the image. Earlier drafts were presented at the University of Milan, Mississippi State University, the University of Turin Neural Mechanisms Webinar Series, and the University of Denver. All audiences provided valuable feedback.

Author information

Authors and Affiliations

University of Denver, Denver, CO, USA

Marco J. Nathan



Editor information

Editors and Affiliations

Department of Letters, Philosophy, Communication, University of Bergamo, Italy

Fabrizio Calzavarini

Department of Philosophy and Education, University of Turin, Turin, Italy

Marco Viola


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Nathan, M.J. (2021). The Mind-Body Problem 3.0. In: Calzavarini, F., Viola, M. (eds) Neural Mechanisms. Studies in Brain and Mind, vol 17. Springer, Cham. https://doi.org/10.1007/978-3-030-54092-0_12


DOI: https://doi.org/10.1007/978-3-030-54092-0_12

Published: 03 December 2020

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-54091-3

Online ISBN: 978-3-030-54092-0

eBook Packages: Religion and Philosophy (R0)


Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation

This book, based on Jaegwon Kim's 1996 Townsend Lectures, presents the philosopher's current views on a variety of issues in the metaphysics of the mind—in particular, the mind-body problem, mental causation, and reductionism. Kim construes the mind-body problem as that of finding a place for the mind in a world that is fundamentally physical. Among other points, he redefines the roles of supervenience and emergence in the discussion of the mind-body problem. Arguing that various contemporary accounts of mental causation are inadequate, he offers his own partially reductionist solution on the basis of a novel model of reduction. Retaining the informal tone of the lecture format, the book is clear yet sophisticated.


Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation. By Jaegwon Kim. The MIT Press, 1998. https://doi.org/10.7551/mitpress/4629.001.0001. ISBN (electronic): 9780262277105.

Table of Contents

  • Preface
  • 1: The Mind-Body Problem
  • 2: The Many Problems of Mental Causation
  • 3: Mental Causation
  • 4: Reduction and Reductionism
  • Notes
  • References
  • Index





A Psychiatric Dialogue on the Mind-Body Problem

  • Kenneth S. Kendler , M.D.


Of all the human professions, psychiatry is most centrally concerned with the relationship of mind and brain. In many clinical interactions, psychiatrists need to consider both subjective mental experiences and objective aspects of brain function. This article attempts to summarize, in the form of a dialogue between a philosophically informed attending psychiatrist and three residents, the major philosophical positions on the mind-body problem. The positions reviewed include the following: substance dualism, property dualism, type identity, token identity, functionalism, eliminative materialism, and explanatory dualism. This essay seeks to provide a brief user-friendly introduction, from a psychiatric perspective, to current thinking about the mind-body problem.

Of all the human professions, psychiatry, in its day-to-day work, is most concerned with the relationship of mind and brain. In a typical clinical interaction, psychiatrists are centrally concerned with both subjective, mental, first-person constructs and objective, third-person brain states. In such clinical interventions, the working psychiatrist traverses many times the “mind-brain” divide. We have tended to view etiologic theories of psychiatric disorders as either brain based (organic or biological) or mind based (functional or psychological). Our therapies are divided into those that impact largely on the mind (“psycho” therapies) and on the brain (“somatic” therapies). The division of the United States government that funds most research in psychiatry is termed the National Institute of “Mental” Health. The manual of the American Psychiatric Association that is widely used for the diagnosis of psychiatric disorders is called the Diagnostic and Statistical Manual of “Mental” Disorders.

Therefore, as a discipline, psychiatry should be deeply interested in the mind-body problem. However, although this is an active area of concern within philosophy and some parts of the neuroscientific community, it has been years since a review of these issues has appeared in a major Anglophonic psychiatric journal (although relatively recent articles by Kandel [ 1 , 2 ] certainly touched on these issues). Almost certainly, part of the problem is terminology. Neither medical nor psychiatric training provides a good background for the conceptual and terminologic approach most frequently taken by those who write on the subject. In fact, training in biomedicine is likely to produce impatience with the philosophical discourse in this area.

The goal of this essay is to provide a selective primer for past and current perspectives on the mind-body problem. No attempt is made to be complete. Indeed, this article reflects several years of reading and musing by an active psychiatric researcher and clinician without formal philosophical training.

Important progress has been recently made in our understanding of the phenomenon of consciousness (3) . Some investigators have proposed general theories (e.g., Edelman and Tononi [4] and Damasio [5] ), while others have explored the implications of specific neurologic conditions (e.g., “blindsight” [6] and “split-brain” [7] ). Although this work is of substantial relevance to the mind-body problem, space constraints make it impossible to review all of this material here.

I take a time-honored approach in philosophy and review these issues as a dialogue on rounds between Teacher, a philosophically informed attending psychiatrist, and three residents: Doug, Mary, and Francine. These three have sympathies for the three major theories we will examine: dualism, materialism, and functionalism. Doug has just finished a detailed presentation about a patient, Mr. A, whom he admitted the previous night with major depression.

The Dialogue

Teacher: That was a nice case presentation, Doug. Can you summarize for us how you understand the causes of Mr. A’s depression?

Doug: Sure. I think that both psychoanalytic and cognitive theories can be usefully applied. Mr. A has a lot of unresolved anger and competitiveness toward his father and this resulted—

Mary: Doug, come on! That is so old-fashioned. Psychiatry is applied neuroscience now. We shouldn’t be talking about parent-child relationships or cognitive schemata but serotonergic dysfunction resulting in deficits in functional transmission at key mood centers in the limbic system.

Teacher: Mary, I’m glad you raised that point. Let’s pursue this discussion further. Could Doug’s view and your view of Mr. A’s depression both be correct? Could his unresolved anger at his father or his self-derogatory cognitive schemata be expressed through dysfunction in his serotonin system?

Doug: I am not sure, Teacher. My approach to psychiatry has always been to try to understand how patients feel, to try to make sense of their problems from their own perspectives. People don’t feel a dysfunctional serotonin receptor. They have conflicts, wishes, and fears. How can molecules and receptors have wishes or conflicts?

Mary: Wait a minute, Doug! Are you seriously claiming that there are aspects of mental functioning that cannot be due to brain processes? How else do you think we have thoughts or wishes or conflicts? These are all the result of synaptic firings in different parts of our brains.

Teacher: Let me push you a bit on that, Mary. What precisely do you think is the causal relationship between mind and brain?

Mary: I haven’t thought much about this since college! I guess I have always thought that mind and brain were just different words for the same thing, one experienced from the inside—mind—and the other experienced from the outside—brain.

Teacher: Mary, you are not being very precise with your use of language. A moment ago you said that mind is the result of brain, that is, that synaptic firings are the cause of thoughts and feelings. Just now, you said that mind and brain are the same thing. Which is it?

Mary: I’m not sure. Can you help me understand the distinction?

Teacher: I can try. It’s probably easiest to give examples of what philosophers would call identity relationships. Simply speaking, identity is self-sameness. The most straightforward and, some might say, trivial form of identity occurs when multiple names exist for the same entity. For example, “Samuel Clemens” is Mark Twain. Of more relevance to the mind-body problem is what have been called theoretical identities, identities revealed by scientists as they discover the way the world works. Theoretical identities take folk concepts and provide for them scientific explanations. Examples include the discoveries that temperature is the mean kinetic energy of molecules, that water is H2O, and that lightning is a cloud-to-earth electrical discharge.

Mary: I think I get it. It wouldn’t make any sense to say that molecular motion causes temperature or that cloud-to-earth electrical discharge causes lightning. Molecular energy just is temperature, and cloud-to-earth discharge of electricity just is lightning.

Teacher: Exactly! Now, let’s get back to the question. Do mind and brain have a causal or an identity relationship? Having looked at identity relationships, let’s explore the causal model. Am I correct, Mary, that you said that you think that abnormal serotonin function can cause symptoms of depression? Put more broadly, then, does brain cause mind? Does it only go in that direction?

Mary: Do you mean, can mind cause brain as well as brain cause mind?

Teacher: Precisely.

Doug: Wait. I’m lost. I’m still back on the problem of how what we call mind can possibly be the same as brain or even caused by brain. The mind and physical things just seem to be too different.

Francine: Me, too. I can’t help but feel that Mary is barking up the wrong tree trying to see mind and brain as the same kind of thing.

Teacher: I can pick up on both strands of this conversation, as they both lead us back to Descartes (1596–1650), the great scientist and polymath who started modern discourse on the mind-body problem. Descartes agreed with you, Doug. He said that the universe could be divided into two completely different kinds of “stuff,” material and mental. They differed in three key ways (8) . Material things are spatial; they have a location and dimensions. Mental things do not. Material things have properties, like shape and color, that mental things do not. Finally, material things are public; they can be observed by anyone, whereas mental events are inherently private. They can be directly observed only by the individual in whose mind they occur.

Doug: Yes. That is exactly what I mean. The physical world has an up and down. Things have mass. But do thoughts have a direction? Can you weigh them?

Mary: Doug, do you realize how antiscientific you are sounding? How can you expect psychiatry to be accepted by the rest of medicine if you talk about psychiatric disorders as due to some ethereal nonphysical thing called mind or spirit?

Doug: But maybe that is exactly what psychiatry should do: stand as a bastion of humanism against the overwhelming attacks of biological reductionism. Science is a wonderful and powerful tool, but it is not the answer to everything. Is science going to tell me why I find Mozart’s music so lovely or the poetry of Wordsworth so moving?

Teacher: Hold on, you two. How about if we agree for now to ignore the problem of how psychiatry should relate to the rest of medicine? Let’s get back to Descartes. He postulated what we would now call substance dualism, the theory that the universe contains two fundamentally different kinds of stuff: the mental and the material (or physical).

Mary: So, he rejected the idea of an identity relationship?

Teacher: Absolutely. But he had a big problem, and that is the problem of the apparent bidirectional causal relationship between mind and brain. Even in the 17th century, they knew that damage to the brain could produce changes in mental functioning, so it appeared that brain influences mind. Furthermore, we all know what would happen if a mother were told that her young child had died. One would see trembling, weeping, and agitation—all very physical events. So it would also appear that mind influences brain. Descartes never successfully solved this problem. He came up with the rather unsatisfactory idea that somehow these two fundamentally different kinds of stuff met up in the pineal gland and there influenced each other.

Doug: But if mind and brain are completely different sorts of things, how could one ever affect the other?

Teacher: Precisely, Doug. That is one of the main reasons why Descartes’s kind of dualism has not been very popular in recent years. It just seems too incredible.

Mary: So, are you saying that the identity relationship makes more sense because causal relationships are hard to imagine between things that are so different as mind and brain?

Teacher: That is only part of the problem, Mary. There is another that some consider even more serious. It is easy for us to understand what you might call brain-to-brain causation, that is, that different aspects of brain function influence other aspects. We are learning about these all the time, for example, our increasing insights into the biological basis of memory (9) . Many of us can also begin to see how brain events can cause mind events. This is easiest to understand in the perceptual field—let’s say vision. We know that applying small electrical currents to the primary and secondary sensory cortices produces the mental experience of perception.

Mary: Those kinds of experiments sure sound like brain-to-mind causation to me.

Teacher: Yes. I agree that seems like the easiest way to understand them. But I am trying to get at another point. Let’s set up a very simple thought experiment. Bill is sitting in a chair eating salty peanuts and reading the newspaper. He has poured himself a beer but is engrossed in an interesting story. In the middle of the story, he feels thirsty, stops reading, and reaches for the glass of beer.

Now, the key question is, what role did the subjective sense of thirst play in this little story? Was it in the causal pathway of events? I am going to have to go to the blackboard here and draw out two versions of this (Figure 1, upper part).

Both versions begin with a set of neurons in Bill’s hypothalamus noting that the sodium concentration is rising owing to all those salty peanuts! We will call that brain thirst. The interactionist model assumes that the hypothalamus sends signals to some executive control system (probably involving a network of several structures, including the frontal cortex). Somewhere in that process, brain thirst becomes mind thirst; that is, Bill has the subjective experience of thirst. In his executive control region, then, based on this mind thirst and his memory about the nearby glass of beer, Bill makes the mind-based decision, “I’m thirsty and that beer would sure taste good.” The decision (in mind) now being made, the executive control region, under the control of mind, sends signals to the motor strip and cerebellum saying, “Reach for that glass of beer.”

The main advantage of this little story is that it maps well onto our subjective experience. If you asked Bill what happened, he would say, “I was thirsty and wanted the beer.” He would be clear that it was a volitional act in his mental sphere that made him reach for the beer. But notice in Figure 1 that we are right back into the problem confronted by Descartes. This little story has the causal arrows going from brain to mind and back to brain. A lot of people have trouble with this.

An alternative approach to this problem is offered by the theory of eliminative materialism or, as it is sometimes called, epiphenomenalism.

Doug: Ugh. It is these kinds of big words that always turn me off to philosophy.

Teacher: Bear with me, Doug. It might be worth it. Philosophy certainly does have its terminology, and it can get pretty thick at times. But then again, so do medicine and psychiatry. The concept of eliminative materialism is very simple: that the sufficient cause for all material events is other material events. If we were to retell this story from the perspective of eliminative materialism, it would look a lot simpler. All the causal arrows flow between brain states, from the hypothalamus to the frontal cortex to the motor strip (Figure 1, lower part).

Doug: But what about Bill’s subjective experience of thirst and of deciding to reach for the glass of beer?

Teacher: The theory of eliminative materialism wouldn’t deny that Bill experienced that but would maintain that none of those mental states was in the causal loop. The hypothalamic signal might enter consciousness as the feeling of thirst, and the work of the frontal control center might enter consciousness as the sense of having made a decision to reach for the glass, but in fact the mental experiences are all epiphenomenal or, as some say, inert. Let me state this clearly. The eliminative materialist theory assumes a one-way causal relation between brain and mind states (brain → mind) and no causal efficacy for mind; that is, there is no mind → brain causality. According to this theory, mind is just froth on the wave or the steam coming out of a steam engine. Mind is a shadow theater that keeps us amused and thinking (incorrectly) that our consciousness really controls things.

Doug: That is a pretty grim view of the human condition. Why should we believe anything that radical?

Teacher: Doug, have you ever had the experience of touching something hot like a stove and withdrawing your hand and then experiencing the heat and pain?

Teacher: If you think about it, that experience is exactly what is predicted by the eliminative materialist theory. Your nervous system sensed the heat, sent a signal to move your hand, and then, by the way, decided to inform your consciousness of what had happened after the fact.

Doug: OK. I’ll accept that. But that is just a reflex arc, probably working in my spinal cord. Are you arguing that that is a general model of brain action?

Teacher: I’m only suggesting that it needs to be taken seriously as a theory. In famous experiments conducted in the early 1980s (10), the neurophysiologist Ben Libet asked students to spontaneously perform a simple motor task: lifting a finger. He found that although the students became aware of the impulse around 200 msec before performing the motor act, EEG recordings showed that the brain was planning the task 500 msec before it occurred. Now, there are a lot of questions about the interpretation of this study, but one way to see it, as predicted by eliminative materialism, is that the brain makes up its mind to do something, and then the decision enters consciousness. The mind thinks that it really made the decision, but it was actually dictated by the brain 300 msec before. My point with this story and the reference to Libet’s work is to raise the question of whether consciousness is as central in brain processes as many of us like to think.

Let me try one more approach to advocating the eliminative materialist perspective. Before the development of modern science, people had many folk conceptions that have since been proven incorrect, such as that thunder is caused by an angry god and that certain diseases are caused by witches. Perhaps the concepts of what has been called folk psychology are the same sort of thing, superstitious beliefs that arose when we didn’t know anything about how the brain worked.

Doug: You mean that believing that our actions are governed by our beliefs and desires is like believing in witches or tribal gods?

Mary: To put it in another way, magnetic resonance imaging (MRI) machines that show the brain basis of perception and cognition are like Ben Franklin’s discovery that lightning wasn’t caused by Zeus but could be explained as a form of electricity.

Teacher: Exactly. So, we have now examined some of the problems you get into when you want to try to work out a causal relationship between mind and brain.

Mary: Yeah. That identity theory is looking better and better. After all, it is so simple and elegant that it must be true. Mind is brain and brain is mind.

Teacher: Not so fast, Mary. There are some problems with this theory too.

Francine: Teacher, isn’t one of the biggest problems with identity theories that they assume that the mind is a thing rather than a process?

Teacher: Yes, Francine. Let’s get back to that point in a few minutes. I agree with Mary that identity theories are very appealing in their simplicity and potential power, but, unfortunately, they are not without their problems. Let’s review three of them. The first stems from what has been called Leibniz’s law, which specifies a critical feature of an identity relationship. This law simply says that if an identity relationship is true between A and B, then A and B must have all the same properties or characteristics. If there is a property possessed by A and not by B, then A and B cannot have an identity relationship.
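The logic of Leibniz’s law can be sketched in a few lines of code (a toy model; the objects and property tests here are invented purely for illustration): if A and B are genuinely one and the same thing, then every property test must return the same answer for both, and any test that distinguishes them rules the identity out.

```python
# Toy illustration of Leibniz's law: identity requires sharing all properties.
# If some property test distinguishes A from B, they cannot be identical.

def satisfies_leibniz(a, b, property_tests):
    """True only if every property test gives the same result for a and b."""
    return all(test(a) == test(b) for test in property_tests)

water = {"formula": "H2O", "boils_at_C": 100}
h2o = water                                    # two names, one object: genuine identity
salt = {"formula": "NaCl", "boils_at_C": 1413}  # a different object

tests = [lambda x: x["formula"], lambda x: x["boils_at_C"]]

print(satisfies_leibniz(water, h2o, tests))   # True: all properties shared
print(satisfies_leibniz(water, salt, tests))  # False: a property differs
```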

Mary: That makes sense, and it certainly works for the examples you gave. Whatever is true for water must be true for H2O, and the same goes for lightning and a cloud-to-earth electrical discharge.

Teacher: Yes, but what about mind and brain? As Doug said, the brain is physical and has direction, mass, and temperature, whereas the mind has wishes, intentions, and fears. Can two such different things as mind and brain really have an identity relationship?

Doug: Yes. That is exactly what I have been trying to say. That puts a big hole in your mind-equals-brain and brain-equals-mind ideas, Mary!

Mary: Maybe. But isn’t that a pretty narrow view of identity? Perhaps the problem of mind and brain is not like that of lightning and electricity. It might be that mind and brain are one thing, but when you experience it from the inside (as mind) and then from the outside (as brain), it is unrealistic to expect them to appear the same and to have all the same properties.

Teacher: A good point, Mary. What you have raised is the possibility of another kind of dualism that is less radical than the kind proposed by Descartes. One way to think about it is that there are two levels to what we might call identity. Things can be identical at the level of substance and/or at the level of property. Imagine that brain and mind are the same substance but have two fundamentally different sets of properties.

Mary: Please give me an example. I am having a hard time grasping this.

Teacher: Sure. If we take any object, it will have several distinct sets of properties: mass, volume, and color, for instance.

Mary: So mind and brain are two distinct properties of the same substance?

Teacher: Precisely. Not surprisingly, this is called property dualism and is considerably more popular today in philosophical circles than is the substance dualism originally promulgated by Descartes.

Doug: That is quite an appealing theory.

Teacher: I agree, Doug. Let me get on to the second major problem with the identity theory.

Doug: Wait, Teacher! At least tell us, is property dualism consistent with identity theories?

Teacher: A hard question. Most identity theorists suggest that brain and mind have a full theoretical identity just like lightning and cloud-to-earth electrical discharges. But you could argue that property dualism is consistent with an attenuated kind of identity. This might disappoint most identity theorists because it suggests that the relationship of brain to mind is different from that of other aspects of our world in which we are moving from folk knowledge to scientific theories.

Doug: I think I followed that.

Teacher: Let’s get back to the second major weakness of the classic identity theory. The commonsense or “strong” form of these identity theories is called a type identity theory. That is, if A and B have an identity relationship, that relationship is fundamental and the same everywhere and always.

Mary: OK. So where does this lead?

Teacher: Well, let’s go back to Mr. A’s depression. Let’s imagine that in the year 2050 we have a super-duper MRI scanner that can look into Mr. A’s brain and completely explain the brain changes that occur with his depression. We can then claim that the brain state for depression and the mind state for depression have an identity relationship.

Mary: OK. Makes sense so far.

Teacher: Now, if Mr. A develops the same kind of depression again in 20 years, would we expect to find the exact same brain state? Or what if Ms. B comes in with depression? Would her brain state be the same as that seen in Mr. A? Or even worse, imagine an alien race of intelligent sentient beings who might also be prone to depression and were able to explain to us how they felt. Is there any credible reason that we would expect the changes in their brains (if they have a brain anything like ours) to be the same as those seen in Mr. A? Philosophers call this the problem of multiple realizability, that is, the possibility that many different brain states might all realize the same mind state (e.g., depression).

Doug: Sounds to me like you are making a pretty strong case against what you call the type identity theory. Is there any other kind?

Teacher: Yes. It is called a token identity theory, and it postulates a weaker kind of identity relationship.

Mary: Weaker in what way?

Teacher: It makes no claims for a universal relationship. It only claims that for a given person at a given time there is an identity relationship between the brain state and the mind state.

Mary: That makes clinical sense. I have certainly seen depressed patients who had very similar clinical pictures and got the same treatment, but one got better and the other did not. This might imply that they actually had different brain states underlying their depression.

Teacher: Precisely. My guess is that if the identity theory is correct, we will find that there is a spectrum extending from primarily type identity to token identity relationships. This will, in part, reflect the plasticity of the central nervous system (CNS) and the level of individual differences. For example, for some subcortical processes, such as stimulation of spinal tracts producing pain, there may be no individual differences, and the type identity theory may apply. It is as if that brain function is hard wired—at least, for all humans. All bets would be off for Martians! On the other hand, for more complex neurobehavioral traits that are controlled by very plastic parts of the human CNS, there may be highly variable interindividual and even intraindividual, across-time differences, so the token identity model may be more appropriate.

Francine: What is the third problem?

Teacher: The third problem is sometimes referred to as the explanatory gap (11) or the “hard problem of consciousness” (12) . When most people think about an identity, they tend to view mind from the outside. For example, recent research with positron emission tomography and functional MRI has produced an increasing number of results supporting identity theories. We can see the effects of vision in the occipital cortex, hearing in the temporal cortex, and the like. We have even been able to see greater metabolic activity in a range of structures, including the temporal association cortex, correlated with the report of auditory hallucinations.

Mary: Yes. My point exactly. This is leading the way to a view of psychiatry as applied neuroscience.

Doug: I also have to agree that this evidence strongly supports some kind of close relationship between brain and some functions of mind.

Teacher: Well, the problem of the explanatory gap is that it is easy to establish a correlation or identity between brain activity and a mental function but much harder to get from brain activity to the actual experience. That is, I don’t have too much trouble going from the state of looking at a red square to the state of increased blood flow in the visual cortex reflecting increased neuronal firing. Modern neuroscience has been making an increasingly compelling case for viewing this as an identity relationship. However, I have a lot more trouble going from increased blood flow reflecting neuronal firing in the visual cortex to the actual experience of seeing red.

In the philosophical literature, this subjective “feel” of mind is often called “qualia.” In a famous essay titled “What Is It Like to Be a Bat?” (13) , the philosopher Nagel argued that the qualia problem is fundamental. We will never be able to understand what it feels like to be a bat.

With the qualia problem, we come up against the mind-body divide in a particularly stark and direct way. Even if we got down to the firing of the specific neurons in the cortex that we know are correlated with the perception of color, how does that neuronal firing actually produce the subjective sense of redness with which we are all familiar? Can you have an identity relationship between what seems to be clearly within the material world (neuronal firing) and the raw sensory feel of redness?

I should say that having spoken about this problem with a number of people, I have gotten a wide range of reactions. Some don’t see it as a problem at all, and others, like me, feel a sense of existential vertigo when trying to grapple with this question.

Doug: I think this comes close to describing my concerns. Neuronal firing and the sense of seeing red—they just cannot be the same thing.

Mary: Maybe we have finally gotten to the root of our differences. I have no problem with this. When you stimulate a muscle, it twitches. When you stimulate a liver cell, it makes bile. When you stimulate a cell in the visual cortex, you get the perception of red. What’s the difference?

Teacher: I am glad that the two of you responded so differently to this question. I would only say, Mary, that the major difference is that with stimulating a motor neuron and producing a muscle twitch, you have events that all take place in the material world. On the other hand, stimulation in the visual cortex produces a perception of the color red that is only seen by the person whose brain is being stimulated. In this sense, it is not the same thing at all. It is hard for some of us to see how the nerve cell firing and the experience of seeing red could be the same thing, at least in the way that lightning and an electrical discharge are the same thing.

Mary: It seems to me, Teacher, that you and these philosophers are just getting too precious. That’s the way the world is. We humans with our consciousness are not nearly as special as you think. When you poke a paramecium, it moves away. A few hundred million years later you have big-brained primates, but nothing has really changed. It’s all biology. I’m losing patience with all this identity-relationship talk. This sounds too much like abstract psychoanalytic theorizing to me.

Francine: I have been patient for a long time, Teacher. Can I have my say now?

Teacher: Sure, Francine.

Francine: Up until now, all of you have been approaching the mind-body problem the wrong way. Identity theories see the mind as a thing like a rock or a molecule. But the mind isn’t a thing; it’s a process.

Mary: Help me understand. What is the difference between a thing and a process?

Francine: Sure. There are two ways to answer the question, “What is it?” If I asked you about a steel girder, you would probably tell me what it is, that is, its composition and structure. You would treat it as a thing. However, if I asked you about a clock, you would probably say, “Something to tell time”; that is, you would tell me what it does. In answering what a clock does, you have given a functional (or process) description.

Mary: I think I get it. So how does this explain how brain relates to mind?

Francine: I think that states of mind are functional states of brain.

Doug: You’ve lost me! How does a functional state of brain differ from a physical state of brain?

Francine: Think about a computer. You can change its physical state by adding more random-access memory or getting a bigger hard drive, or you can change its functional state by loading different software programs.

Teacher: Let me interrupt for a moment. Francine is advocating what has been called functionalism, which is almost certainly the most popular current philosophical approach to the mind-body problem. Functionalism has strong historical roots in computer science, artificial intelligence, and the cognitive neurosciences.

Mary: Why has it been so popular?

Teacher: Well, maybe we can let Francine explain. But its advocates say that it avoids the worst problems of dualism and the identity theories.

Francine: Let me start with how this approach is superior to the type identity theories. When we think about mental states from a functional perspective, the problems of multiple realizability go away.

Francine: A functional theory would say that depression is a functional state, defined by the way it connects certain inputs to specific outputs.

Doug: Such as crying when you look at a picture of an old boyfriend or having a very sad facial expression when waking up in the morning?

Francine: Correct. Functionalism doesn’t try to say that depression is a particular physiological brain state. It defines it at a more abstract level as any brain state that plays this particular functional role, of causing someone to cry, to look sad, etc.

Mary: So functionalism would not be so concerned about whether the basic biology of depression in different humans or humans and aliens was the same as long as the state of depression in these organisms played the same functional role?

Francine: Yes. I find this especially attractive.

Mary: So functionally equivalent need not say anything about biologically equivalent?

Francine: Right. You can see how well functionalism lends itself to cognitive science and artificial intelligence. If brain states are functions connecting certain inputs with certain outputs (stub your toe → experience pain → swear, get red in the face, and dance around holding your toe and cursing), then this kind of state could be realized in a variety of different physical systems, including neurons or silicon chips.
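Francine’s point about realization in neurons or silicon chips can be made concrete with a toy sketch (all class and stimulus names here are invented for illustration): two systems with entirely different internals count as being in the same functional state because they map every tested input to the same output.

```python
# Toy illustration of multiple realizability: a functional state is defined
# only by its input-output role, not by what physically realizes it.
# All names here are invented for illustration.

class NeuronRealizer:
    """Stands in for a biological realization of the pain role."""
    def respond(self, stimulus):
        if stimulus == "stubbed toe":
            return ["swear", "go red in the face", "hop holding toe"]
        return []

class SiliconRealizer:
    """Stands in for a silicon-chip realization of the very same role."""
    def respond(self, stimulus):
        return (["swear", "go red in the face", "hop holding toe"]
                if stimulus == "stubbed toe" else [])

def same_functional_state(a, b, stimuli):
    """Functionally equivalent = same output for every tested input,
    regardless of internal constitution."""
    return all(a.respond(s) == b.respond(s) for s in stimuli)

print(same_functional_state(NeuronRealizer(), SiliconRealizer(),
                            ["stubbed toe", "gentle breeze"]))  # True
```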

Mary: I think I understand. But are functionalists materialists?

Francine: Can I try to answer this?

Teacher: Sure.

Francine: I think the answer is mostly yes but a little bit no. Functional states are realized in material systems, but they are not essentially material states.

Mary: Can you translate that into plain English?

Francine: OK. Let’s go back to clocks. Their functional role is to tell time.

But we could design a machine to tell time that used springs, pendulums, batteries, or even water. In each case, the clock is a material thing, but very different kinds of material things that were nonidentical on a physical level could all have the same function—of telling time.

Teacher: Let me try to clarify. I think Francine is right that functionalism avoids the unattractive feature of dualism, which postulates a nonmaterial mental substance. But it resembles dualism in that it postulates two levels of reality. That is, there is the physical apparatus—I like to think of a huge series of switches—and then there is the functional state of those switches. Let’s imagine a computer program that controls the railroad. You have thousands of switches in the form of transistors. Depending on whether one of those thousands of switches is on or off, you send a train to New York or Chicago. Recall that the fundamental physical nature of a transistor or any switch is not changed as a function of which position it is in. What is important are the rules of your program, that is, the functional significance of what that switch means.
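The Teacher’s railroad picture can be sketched in a few lines of code (a toy model; the switch name and destinations are invented): the physical state of a switch is just a bit, and what that bit means is fixed entirely by the program that reads it.

```python
# The same physical bit has different functional significance under
# different programs. Switch name and destinations are invented.

switch_state = {"switch_47": True}  # the bare physical fact: on or off

def routing_program(switches):
    """Under these rules, the bit routes a train."""
    return "New York" if switches["switch_47"] else "Chicago"

def signaling_program(switches):
    """Under different rules, the very same bit means a signal color."""
    return "green" if switches["switch_47"] else "red"

# Nothing about the switch itself differs between programs;
# only the rules that read it give its state a meaning.
print(routing_program(switch_state))    # New York
print(signaling_program(switch_state))  # green
```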

Mary: I think I see more clearly. The switch is physical, but the significance of its on-ness or off-ness is really a function of the rules of the system—the program, in this case.

Teacher: Yes. Philosophers would say that the functional status of the switch is realized in some physical structure.

Doug: I think that I only dimly see the difference between functionalism and identity theories.

Teacher: Doug, let me try one more approach, and this, for me, is perhaps the most important insight that functionalism has given me. Identity theorists want to equate a specific physiological aspect of brain function with specific mental events. A problem with this is that at a basic level, a lot of what goes on in the brain (ions crossing through membranes, second messenger systems being activated, neurotransmitters binding receptors) is nonspecific. If you looked at the biophysics of cell firing, it would probably look similar whether the neuron was involved in a pain pathway, the visual system, or a motor pathway. So, on one level, I would wager that the functionalists are right: that the specific mental consequences of a brain event cannot be fully specified at a purely physical level (e.g., as ions crossing membranes) but must also be a consequence of the functional organization of the brain. The same action potential could initiate the activity of neuronal arrays associated with a perception of pain, the color red, or the pitch of middle C, depending on where that neuron is located, that is, its functional position in the various brain pathways.

Doug: That helps a lot. Thanks.

Mary: If we accept functionalism, aren’t we at risk of sliding back into the whole functional-versus-organic mess? Am I not correct that functionalism might predict that some psychiatric illness is in the software, hence, functional, whereas others might be in the hardware? That has been a pretty sterile approach, hasn’t it? I still want to stick up for the identity theories.

Teacher: That is quite insightful, Mary. Maybe we will have time to look quickly at the implications of these theories for psychiatry. Let me briefly outline how the philosophical community has responded to functionalism. I will focus on two main objections. The first, and probably most profound, addresses the question of whether a software program is really a good model of the mind. This approach, known as the “Chinese room problem,” was developed by the Berkeley philosopher John Searle (14) . It goes like this. Assume that you are part of a program designed to simulate naturally spoken Chinese, about which you know absolutely nothing. You have an input function, a window in your room through which you receive Chinese characters. You then have a complex manual in which you look up instructions. You go to the piles of Chinese characters in this room and, carefully following the instruction manual, you assemble a response that you produce as the output function of your room. If the manual is good enough, it is possible that to someone outside the room it might appear that you know Chinese—but, of course, you do not.
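Searle’s scenario can be caricatured as a lookup table (a deliberately crude toy; the phrases in the “manual” are invented): the room maps input symbols to output symbols purely by rule, so it may look competent from the outside while nothing inside understands anything.

```python
# A caricature of the Chinese room: pure symbol manipulation by rule.
# The "manual" maps input strings to output strings; the system can look
# competent from outside, but no understanding exists anywhere inside.

manual = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天天气好": "是的，很晴朗。",  # "Nice weather today" -> "Yes, very sunny."
}

def room(input_symbols):
    """Follow the manual mechanically; meaning is never consulted."""
    return manual.get(input_symbols, "请再说一遍。")  # default: "Please say that again."

print(room("你好吗？"))  # 我很好，谢谢。
```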

Mary: So, what is the point?

Teacher: The point that Searle is making is that software is a bad model of the mind because it is only rules with no understanding (or, more technically, “syntax without semantics”). An aspect of mind that has to be taken into account in any model of the mind-body problem is that minds understand things. You know what the words “box,” “love,” and “sky” mean. Meaning is a key, basic aspect of some critical dimensions of mental functioning.

Francine: But there have been a number of strong rebuttals to this argument!

Teacher: I know, Francine, but if we are going to get done with these rounds, let me outline the second main problem with the functionalist approach: it defines mind-brain operations solely in terms of their functional status.

Doug: Yes. So, where is the problem?

Teacher: Well, for example, say I am faced with a color-discrimination task: having to learn which kind of fruit is bitter and poisonous (let’s say green) and which is sweet and nutritious (let’s say red). The state of my perception of color in this context is only meaningful to a functionalist because it enables me to predict taste and nutritional status.

In what is called the “inverted-spectrum problem,” if the wiring from your eye to your brain for color were somehow reversed so that you saw green where I saw red and vice versa, from a functional perspective, we would never know. You would learn just as quickly as I would that green fruit is bitter and to be avoided and red fruit is good and can be eaten safely. However, our subjective experiences would be exactly the opposite. You would have learned to associate the subjective sense of redness (which you would have learned to call greenness because that is what everyone else would call it) with fruit to be avoided and greenness with fruit to be eaten.
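The inverted-spectrum point can be put in code with a toy model (the function names and “qualia” labels are invented for illustration): two perceivers whose inner experiences are swapped nonetheless produce identical public behavior, so no functional test can tell them apart.

```python
# Toy inverted spectrum: inner "qualia" labels are swapped between two
# perceivers, yet every public input-output test comes out identical.

def normal_qualia(fruit_color):
    return {"red": "inner-RED", "green": "inner-GREEN"}[fruit_color]

def inverted_qualia(fruit_color):
    return {"red": "inner-GREEN", "green": "inner-RED"}[fruit_color]

def make_agent(qualia_fn):
    """Each agent learns which of *its own* inner states goes with safe
    fruit; the quale is in the loop, but behavior depends only on the
    learned stimulus-response mapping."""
    learned = {qualia_fn("red"): "eat", qualia_fn("green"): "avoid"}
    def act(fruit_color):
        return learned[qualia_fn(fruit_color)]
    return act

normal_agent = make_agent(normal_qualia)
inverted_agent = make_agent(inverted_qualia)

for color in ("red", "green"):
    assert normal_agent(color) == inverted_agent(color)    # same public behavior
    assert normal_qualia(color) != inverted_qualia(color)  # opposite inner states
print("functionally indistinguishable, subjectively inverted")
```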

Mary: I think I see.

Teacher: We need to move on to another of the most puzzling aspects of the mind-body problem.

Mary: How many more are there? My head is starting to swim.

Teacher: Hold on, Mary. Just a few more minutes, and we’ll be done. The first issue we need to talk about goes by a number of names; I will call it the problem of intentionality. If we reject dualistic models and accept one of the family of identity theories or functionalism, how do we explain that when I want to scratch my nose, amazingly, my arm and hand move and my fingers scratch?

Doug: Isn’t your big term “intentionality” another word for free will?

Teacher: Sort of, Doug, but I don’t want to get into all the ethical and religious implications of free will. I would argue that it is an absolutely compelling subjective impression of every human being I have spoken to that we have a will. We can wish to do things, and then our body executes those wishes. This phenomenon, which in the old dualistic theory might be called mind-brain causality, is pretty hard to explain using identity and functionalist theories.

Mary: The eliminative materialists have a solution: that the perception of having a will is false.

Doug: Isn’t there any philosophically defensible alternative to this rather grim view?

Mary: I’d be more interested in a scientifically defensible alternative. But I’m a little confused. If we accept the identity theory, aren’t we then saying that brain and mind are the same thing? So, if the brain wishes something—has intention, to use your words—then so does the mind.

Teacher: Technically you’re right, Mary. But here is the problem. How do carbon atoms, sodium ions, and cAMP have intentions or wishes?

Mary: Hmm. I’ll have to think about that.

Teacher: Although there are several different approaches to this problem, I want to focus here on only one: that of emergent properties and the closely associated issues of bottom-up versus top-down causation.

Francine: Can you define emergent properties for us?

Teacher: Sure. But first we have to review issues about levels of causation. Most of us accept that there are certain laws of subatomic particles that govern how atoms work and function. The rules for how atoms work can then explain chemical reactions; chemical reactions explain biochemical systems like DNA replication; and these in turn can explain the biology of life. I could keep going, but I think that you get the basic idea.

Mary: Yes. So what does this have to do with the mind-body problem?

Teacher: The concept of emergent properties is that at higher levels of complexity, new features of systems emerge that could not be predicted from the more basic levels. With these new features come new capabilities.

Francine: Can you give us some examples?

Teacher: Sure. One example that is often used is water and wetness. It makes no sense to say that one water molecule is wet. Wetness is an emergent property of water in its liquid form. Probably a better analogy is life itself. Imagine two test tubes full of all the constituents of life: oxygen, carbon, nitrogen, etc. In one of them, there are only chemicals—no living forms—and in the other, there are single-celled organisms. You would be hard pressed to deny that, although the physical constituents of the two tubes are the same, some new properties arise in the tube with life.

Doug: Couldn’t you say the same things about family or social systems, that they have emergent properties that were not predictable from the behavior of single individuals?

Teacher: Yes, Doug. Many would argue that. One critical concept of emergent properties is that all the laws of the lower level operate at the higher level, but new ones come online. So, the question this all leads to is whether we can view many aspects of mind, such as intentionality, consciousness, or qualia, as emergent properties of brain.

The theory of emergent properties can challenge traditional scientific ideas about the direction of causality. Traditional reductionist models of science see causation flowing unidirectionally through these hierarchical systems, from the bottom up. Changes in subatomic particles might influence atomic structure, which in turn would affect molecular structure, etc. But no change in a biological system would affect the laws of quantum mechanics. However, if we adopt this perspective on the mind-body problem, it is very difficult to see how volition could ever work.

Doug: I think I need an example here to understand what you’re driving at.

Teacher: Let’s look at evolution. Most of us accept that life is explicable on the basis of understood principles of chemistry. However, life is a classic emergent property. Evolution does not work directly on atoms, molecules, or cells. The unit of selection by which evolution works is the whole organism, which will or will not succeed in passing on its genes to the next generation. So your and my DNA are in fact influenced by natural selection acting on the whole organisms of our ancestors. Thus, in addition to the traditional bottom-up causality we usually think about—DNA produces RNA, which makes protein—DNA itself is shaped over evolutionary time by the self-organizing emergent properties of the whole organism that it creates. That is an example of top-down causality. But, critically, this hypothesis is nothing like dualism. Organisms are entirely material beings that operate by the rules of physics and chemistry.

Mary: So you see this as a possible model for the mind/brain?

Teacher: This is one of the main ways that people try to accommodate two seemingly contradictory positions: that dualism is not acceptable and is probably false, and that the mind/brain truly has causal powers, so that human volition is not a fantasy, as the eliminative materialists propose.

I need to see a patient in a few minutes. But before we end, we have to touch on two more issues. The first returns us to where we started, with dualism. As I said before, few working scientists today give much credence to classical Cartesian substance dualism, although property dualism does have some current adherents. However, there is a third form of dualism that may be highly relevant to modern neuroscience, especially psychiatry.

Francine: What is that, Teacher?

Teacher: It is called explanatory dualism and might be defined as follows: to have a complete understanding of humans, two different kinds of explanations are required. Lots of different names have been applied to these two kinds of explanations. The first can be called mental, psychological, or first person. The second can be called material, biological, or third person.

Doug: Aren’t those just different names for Descartes’ mental and material spheres?

Teacher: Yes. But with one critical difference. Descartes spoke of the existence of two fundamentally different kinds of “stuff.” Technically, he was talking about ontology, the discipline in philosophy that examines the fundamental basis of reality. Explanatory dualism, by contrast, deals with two different ways of knowing or understanding. This is a concern of the discipline of epistemology, or the problem of the nature of knowledge.

Mary: Can you explain that without all the big words?

Teacher: A fair question! Explanatory dualism makes no assumptions about the nature of the relationship between mind and brain. It just says that there are two different and complementary ways of explaining events in the mind/brain.

Doug: To accept explanatory dualism, do you have to accept Descartes’ substance dualism?

Teacher: No. In fact, explanatory dualism is consistent with identity theories or functionalism. Let’s assume that the token identity theory about Mr. A’s depression is true; that is, the serotonergic dysfunction in certain critical limbic regions in his brain is his depression. Explanatory dualism suggests that even if these brain and mind states have an identity relationship, to understand these states completely requires explanations both from the perspective of mind (perhaps the psychological issues that Doug first raised at the beginning of our discussion) and the perspective of brain. Neither approach provides a complete explanation.

Doug: That is very attractive. This makes it possible to believe that mind and brain are the same thing and yet not deny the unique status of mental experiences.

Teacher: Yes, if you accept explanatory dualism.

Mary: Isn’t this theory a bit unusual in that most events in the material world have only one explanation? We wouldn’t think that you would have one explanation for lightning and another for earth-to-cloud electrical discharges. Why should events in the brain be different?

Teacher: I agree, Mary. Explanatory dualism suggests that there is something unique about mind-brain events that does not apply to material events that do not occur in brains, that they can be validly explained from two perspectives, not just one.

Doug: What is appealing about explanatory dualism is that, despite all these discussions, it seems to describe what we do every day when we see psychiatric patients. It may be that mind really is brain, but I am still impressed with the basic fact that we have fundamentally different ways in which we can know brains versus minds. One is public and the other private. Getting back to Mr. A, the optimal treatment for him requires me to be able to view Mr. A’s depression from the perspectives of both brain and mind. We need to be able to view the depression as a product of Mr. A’s brain to consider whether his disorder might be due to a neurologic or endocrine disease and to evaluate the efficacy and understand the mode of action of antidepressant medications. But to provide good quality humanistic clinical care, and especially psychotherapy, I need to be able to use and develop my natural intuitive and empathic powers to understand his depression from the perspective of mind, thinking about his wishes, conflicts, anger, sadness, and the impact of life events in addition to autoreceptors, uptake pumps, and down-regulation.

Teacher: There is a lot that is very sensible in what you say, Doug. I would add that it is critically important for us to understand both the strengths and limitations of and the important differences between knowing our patients from the perspective of brain versus from the perspective of mind.

Doug: I agree.

Teacher: One last issue, and then I really have to go. When we look at major theories in psychiatry, like the dopamine hypothesis of schizophrenia or trying to tie Mr. A’s depression to dysfunction in the serotonin system, what assumptions are we making about the relationship between mind and brain?

Mary: It is pretty clearly materialistic at least in the sense that changes in brain explain the mental symptoms of these syndromes.

Teacher: So these theories embody the assumption of brain-to-mind causality?

Doug: I am not clear on this. Do these biological theories of psychiatric illness assume a causal or an identity relationship between mind and brain?

Teacher: Good question, Doug. If you listen to biological researchers closely, they actually use causal language quite commonly. They might say, “An excess of dopamine transmission in key limbic forebrain structures causes schizophrenia.”

Francine: Do they mean that, or do they actually mean that an excess of dopamine transmission is schizophrenia?

Teacher: I am not sure. My bet is that most biological psychiatrists prefer some kind of identity theory. I wonder if they use causal language because Cartesian assumptions about the separation of material and mental spheres are so deeply rooted in the way we think.

Mary: So they may not be very precise about the philosophical assumptions they are making?

Teacher: That’s my impression.

Mary: What about the multiple-realizability problem? Are they likely to assume type identity theories, which imply that a single mental state (e.g., auditory hallucinations) has an identity relationship with a single brain state, or token identity theories, which suggest that multiple brain states might produce the same mental state, such as hearing voices?

Teacher: My guess is that most researchers suggest that token identity models are most realistic. They would be more likely to call it “etiologic heterogeneity,” but I think it is the same concept in different garb.

Doug: After all, we know that hearing voices can arise from drugs of abuse, schizophrenia, affective illness, and dementia.

Mary: What about eliminative materialism? That would have pretty radical implications for the practice of psychiatry!

Teacher: Yes, it would, Mary. If you took that theory literally—that mental processes are without causal efficacy, like froth on the wave—then any psychiatric interventions that are purely mental in nature, like psychotherapy, could not possibly work.

Doug: We have a lot of evidence that psychological interventions work and can produce changes in biology. That would be strong evidence against eliminative materialism, wouldn’t it?

Teacher: That’s how I see it, Doug.

Francine: What about functionalism? Certainly theories of schizophrenia and affective illness have pointed toward defective information processing and mood control modules, respectively.

Teacher: This gets to a pretty basic point. Etiologic heterogeneity aside, are specific forms of mental illness “things” that have a defined material basis or abnormalities at a functional level, like an error in a module of software?

Francine: This gets at what you said before. Functionalism is different from identity theories in that it implies abnormalities are possible in psychiatric illness at two levels: the functional “software” level that affects mind or the material “hardware” level that affects brain.

Teacher: Yes. I am ambivalent about that implication. It suggests two different pathways to psychiatric illness. Is it helpful to ask whether Mr. A developed depression with a normal brain that was “misprogrammed” perhaps through faulty rearing or because there is a structural abnormality in his brain? I’m not sure. I continue to feel that functionalism makes sense if you think about computers and artificial intelligence, but when you deal with brains like we do, I have my doubts. But as I said, this is still the most popular theory about the mind-body problem among philosophers. I have to go. I look forward to seeing you on rounds tomorrow. This was fun.

Doug, Francine, and Mary: Good-bye, Teacher.

Conclusions

The goal of this introductory dialogue was to provide a helpful, user-friendly introduction to some of the current thinking on the mind-body problem, as seen from a psychiatric perspective. Many interesting topics were not considered (including, for example, philosophical behaviorism and the details of theories propounded by leading workers in the field, such as Searle and Dennett), and others were discussed only superficially. Those interested in pursuing this fascinating area might wish to consult the list of references and web sites below.

Recommendations

The Stanford Encyclopedia of Philosophy (15) has a number of entries relevant to the mind-body problem. For example, see “Epiphenomenalism,” “Identity Theory of Mind,” and “Multiple Realizability.” See also the Dictionary of Philosophy of Mind (16).

David Chalmers has compiled a very useful list of “Online Papers on Consciousness” as part of a larger web site titled “Contemporary Philosophy of Mind: An Annotated Bibliography” (17) .

Further Reading

Bechtel’s Philosophy of Mind: An Overview for Cognitive Science (18) is a good overview from the perspective of psychology, although rather technical in places.

A Companion to the Philosophy of Mind (19) contains a helpful but somewhat difficult introductory essay followed by short entries on nearly all topics of importance in the mind-body problem. Very useful.

Gennaro’s Mind and Brain: A Dialogue on the Mind-Body Problem (20) is a brief, easily understood introduction to the mind-body problem, also in the form of a dialogue.

Churchland’s Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind (21) is a good introduction to the mind-body problem. Although a strong advocate for eliminative materialism, Churchland fairly presents the other main perspectives. The chapter on neuroscience is dated.

Brook and Stainton’s Knowledge and Mind: A Philosophical Introduction (22) is a charming, accessible, and up-to-date introduction that includes sections on epistemology and the problem of free will.

Priest’s Theories of the Mind (23) is a quite useful, albeit somewhat more advanced, treatment of the mind-body problem. Priest takes a different approach from the other books listed here, summarizing the views of this problem by 17 major philosophers, from Plato to Wittgenstein.

Heil’s Philosophy of Mind: A Contemporary Introduction (24) is a recent introductory book, with an emphasis on the metaphysical aspects of the mind-body problem. A bit hard to follow in the later chapters.

Searle’s The Rediscovery of the Mind (25) is probably the most important book by this influential philosopher who has been very critical of functionalism. He writes clearly and with a minimum of philosophical jargon.

Nagel’s The View From Nowhere (26) is a brilliant book-length treatment of the key epistemic issue raised by the mind-body problem: that we see the world from a third-person perspective but ourselves from a first-person perspective.

Hannan’s Subjectivity and Reduction: An Introduction to the Mind-Body Problem (27) is a short and relatively clear introduction. The author makes no attempt to hide her views about the problem.

The Nature of Consciousness: Philosophical Debates (28) is probably the most up-to-date of the several available collections of key articles in this area, with an emphasis on problems related to consciousness.

Cunningham’s What Is a Mind? An Integrative Introduction to the Philosophy of Mind (29) is a particularly clear and up-to-date summary of the mind/body and related philosophical topics. It is one of the best available introductions.

If you want to probe the mind-body problem on your way to work, you might want to try Searle’s The Philosophy of Mind (30) audiotapes. Searle has his own specific “take” on this problem, but he is down-to-earth and rather accessible for the beginner.

Received Aug. 22, 2000; revision received Dec. 5, 2000; accepted Dec. 21, 2000. From the Virginia Institute for Psychiatric and Behavioral Genetics, the Department of Psychiatry and the Department of Human Genetics, Medical College of Virginia of Virginia Commonwealth University, Richmond. Address reprint requests to Dr. Kendler, Box 980126, Richmond, VA 23298-0126. Funded in part by the Rachel Brown Banks Endowment Fund. Jonathan Flint, M.D., and Becky Gander, M.A., provided comments on an earlier version of this article.

Figure 1. Two Views of the Causal Relationship Between Mind and Brain in the Experience of Thirst and Act of Reaching for a Beer

Note: In the view shown in the top part of the figure, which depicts a bidirectional causal relationship between mind and brain, the critical decision to reach for beer because of thirst is made consciously in the mind; the decision is conveyed to the motor cortex for implementation. In the view shown in the bottom part, that of eliminative materialism, all causal arrows flow within the brain. The mind is informed of brain processes but has no causal efficacy. Conscious decisions apparently made by the mind have, in fact, been previously made by the brain.

1. Kandel ER: A new intellectual framework for psychiatry. Am J Psychiatry 1998; 155:457-469

2. Kandel ER: Biology and the future of psychoanalysis: a new intellectual framework for psychiatry revisited. Am J Psychiatry 1999; 156:505-524

3. Seager W: Theories of Consciousness: An Introduction and Assessment, 1st ed. London, Routledge, 1999

4. Edelman GM, Tononi G: A Universe of Consciousness. New York, Basic Books, 2000

5. Damasio A: The Feeling of What Happens: Body and Emotion in the Making of Consciousness. San Diego, Harcourt Brace, 1999

6. Guzeldere G, Flanagan O, Hardcastle VG: The nature and function of consciousness: lessons from blindsight, in The New Cognitive Neurosciences, 2nd ed. Edited by Gazzaniga MS. Cambridge, Mass, MIT Press, 2000, pp 1277-1284

7. Baynes K, Gazzaniga MS: Consciousness, introspection, and the split-brain: the two minds/one body problem. Ibid, pp 1355-1364

8. Descartes R: Meditations on First Philosophy. Translated by Lafleur LJ. New York, Macmillan, 1985

9. Squire LR, Kandel ER: Memory: From Mind to Molecules. New York, Scientific American Library, 1999

10. Libet B: Unconscious cerebral initiative and the role of conscious will in voluntary action. Behav Brain Sci 1985; 8:529-566

11. Levine J: Materialism and qualia: the explanatory gap. Pacific Philosophical Quarterly 1983; 64:354-361

12. Chalmers D: Facing up to the problem of consciousness. J Consciousness Studies 1995; 2:200-219

13. Nagel T: What is it like to be a bat?, in Mortal Questions. New York, Cambridge University Press, 1979, pp 165-180

14. Searle JR: Minds, brains, and programs. Behav Brain Sci 1980; 3:417-424

15. Stanford University, Metaphysics Research Lab, Center for the Study of Language and Information: Stanford Encyclopedia of Philosophy. http://plato.stanford.edu

16. Eliasmith C (ed): Dictionary of Philosophy of Mind. http://www.artsci.wustl.edu/~philos/MindDict/index.html

17. Chalmers D: Online Papers on Consciousness. http://www.u.arizona.edu/~chalmers/online.html

18. Bechtel W: Philosophy of Mind: An Overview for Cognitive Science. Hillsdale, NJ, Lawrence Erlbaum Associates, 1988

19. Guttenplan S (ed): A Companion to the Philosophy of Mind. Cambridge, Mass, Blackwell, 1994

20. Gennaro RJ: Mind and Brain: A Dialogue on the Mind-Body Problem. Indianapolis, Hackett, 1996

21. Churchland PM: Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind, revised ed. Cambridge, Mass, MIT Press, 1988

22. Brook A, Stainton RJ: Knowledge and Mind: A Philosophical Introduction. Cambridge, Mass, MIT Press, 2000

23. Priest S: Theories of the Mind. New York, Houghton Mifflin, 1991

24. Heil J: Philosophy of Mind: A Contemporary Introduction. London, Routledge, 1998

25. Searle JR: The Rediscovery of the Mind. Cambridge, Mass, MIT Press, 1992

26. Nagel T: The View From Nowhere. New York, Oxford University Press, 1986

27. Hannan B: Subjectivity and Reduction: An Introduction to the Mind-Body Problem. Boulder, Colo, Westview Press, 1994

28. Block N, Flanagan O, Guzeldere G (eds): The Nature of Consciousness: Philosophical Debates. Cambridge, Mass, MIT Press, 1997

29. Cunningham S: What Is a Mind? An Integrative Introduction to the Philosophy of Mind. Indianapolis, Hackett, 2000

30. Searle JR: The Philosophy of Mind. Springfield, Va, Teaching Company, 1996 (audiotapes)


The Mind-Body Problem in the History of Psychology Essay


The mind-body problem is a very old one. In Western culture, its roots go back in history to the philosophical theories of Ancient Greece. The crux of the problem is evident from its name: what is the relationship between the mind and the body? How do they correlate and work together? The reasons for the problem’s emergence are not hard to imagine. There is a large qualitative gap between what we see as our physical bodies, and what we experience happening in our heads. And, once the question was asked, numerous thinkers attempted to answer it.

In this paper, we will briefly describe the history of the problem by discussing what various prominent philosophers and physicians thought about it. After that, we will discuss the views of the most important schools of psychological thought on this problem. We will also state what the author of this paper believes regarding this problem.

Ancient Greek philosophers were already trying to answer a question similar to the mind-body problem. However, for them it was formulated in different terms: they asked not about the mind but about the soul (ψυχή, psyche), that which made a thing alive. The soul was perceived as something extremely light and was associated with breath. For instance, Thales associated it with air, while Heraclitus thought of it as fire (Copleston, 1993).

More complicated theories emerged with Socrates, Plato, and Aristotle. Socrates’ opinions are known to us only through the writings of other philosophers; in Plato’s Phaedo, Socrates talks to his friends and convinces them that the human soul is immortal. Plato believed that the soul is an immortal substance that is, in fact, imprisoned in the body (Plato, n.d.). Aristotle paid somewhat more attention to the problem of the soul; his treatise On the Soul discusses the issue. Aristotle believed the soul to be the principle of all living things; even plants had souls. However, unlike Plato, he thought the soul to be dependent on the body (Aristotle, n.d.).

For the well-known Neo-Platonist Plotinus, who somewhat combined the teachings of Plato and Aristotle, the universe consisted of the One, the source of everything in the world; the Mind, containing all the eternal truths; and the Soul, a manifestation of the Mind in living beings (Copleston, 1993). Ancient Greek philosophy, therefore, already associated the soul and the mind.

Medieval philosophy, strongly aligned with Christian theology, paid much attention to the soul. The prominent medieval philosopher Thomas Aquinas tied the issue of the body and the soul to the issue of matter and form. For him, the body and the soul are united, much as the seal and the wax are; the mind does not interact with the body, because they are one. Nevertheless, the soul is immortal and capable of existing without the body (McInerny & O’Callaghan, 2014).

René Descartes, one of the most prominent representatives of 17th-century rationalism, strictly distinguished the soul from the body. His expression, “cogito ergo sum,” “I think, therefore I am,” is known to many. He believed that the soul is an immortal substance created by God, completely immaterial, and responsible for all the cognitive activity of a person. At the same time, he believed the soul to interact with the body by acting on the pineal gland in the brain (Lokhorst, 2013).

John Locke brought the notion of consciousness to prominence, defining a person as a conscious, thinking thing. He believed that at birth a person’s mind was a tabula rasa, which was then filled with numerous ideas coming from experience. The body, however, was also important, and it interacted with the mind (Uzgalis, 2012).

For David Hume, the mind’s contents came from perceptions, though a priori ideas were also possible; the mind was, in effect, a bundle of perceptions. However, the mind existed within the body, and without the body there was no mind. This dualism is incomplete, though; for instance, Hume asked what led people to the very conviction that the body exists (Flage, 1983).

For Immanuel Kant, the mind perceived the outer world through the prism of innate, a priori forms of sensibility, such as space and time. The body was a spatial object, but, given that space was a form of sensibility, any spatial object, including the body, was a phenomenon, and it could never be known whether this object really existed and what it was like; it remained an inaccessible thing-in-itself (Rohlf, 2010).

It is also important to consider the attitudes of influential physicians towards the mind-body problem. A prominent Ancient Greek physician, Galen, believed that the mind and the body were a single unit, not separate things. He was convinced that individual parts of the body were responsible for different functions and that the mind was among such functions (Hankinson, 1991).

Hermann von Helmholtz, an influential German physician and physicist of the 19th century, had a strictly materialistic view of the mind-body problem. He, along with his colleagues, believed that a living body was a machine that worked solely according to the laws of physics; for him, no distinct types of energy were responsible for the existence of life (Bowler & Morus, 2005, pp. 177-178).

Another important figure of the 19th century, the English biologist Thomas Huxley, had rather similar views. He was an epiphenomenalist; that is, he believed that the mind was produced by the brain, so all emotions, wishes, and feelings were reflections of the needs of the physical body (Huxley, 1874). The prominent Russian physician Ivan Pavlov, working at the turn of the 20th century, also leaned toward a monistic solution to the mind-body problem, believing that the mind and the body were identical and that the mind originated in higher nervous activity (Windholz, 1997).

The establishment of psychology as an academic discipline is associated with Wilhelm Wundt's creation of a psychological laboratory in Germany in 1879. Wundt founded the psychological school of voluntarism. Voluntarists believed that consciousness could be broken down into separate components without the whole being lost. For them, the mind interacted with the body, but the nature of this interaction was difficult to grasp (Hergenhahn & Henley, 2014).

Edward Titchener, a student of Wundt's who created the school of structuralism, believed that the mind comprised only the experience gathered throughout a person's life and that this experience was organized into a structure of some kind; he wished to describe its elements. Titchener and the structuralists also adopted the voluntarists' stance on the mind-body problem, holding to dualism, the view that the mind and the body are two different entities. However, they were not interested in studying the relationship between them (Wertheimer, 2012).

Another school of psychology, functionalism, was interested in studying the functions of the mind and consciousness. For functionalists, the mind was a set of functions, and mental states were functions performed by the body. Therefore, they did not hold to the traditional mind-body dualism (Hergenhahn & Henley, 2014).

The representatives of the school of behaviourism believed that a person's (or, indeed, any organism's) behaviour was a response to the physical stimuli received from outside. In fact, they rejected the need to study consciousness at all. The mind-body problem was not relevant to them, for they regarded consciousness as merely a physical process accompanying behaviour (Hergenhahn & Henley, 2014).

Psychoanalysis, on the other hand, is interested in studying the human unconscious in order to help the mentally ill. Psychoanalysts believe the unconscious mind to be the cause of behaviours and mental conditions. For them, bodies “cannot be reduced to mere collections of chemicals”; mind and body are mutually permeating entities (Brearley, 2002, pp. 442-443). However, the mind still remains the main area of psychoanalysts' interest.

The school of humanistic psychology emerged in the 1960s to combine the problems researched by the deterministic psychoanalysts and behaviourists while leaving space for free will and the spirit. For humanistic psychologists, mind, body, and spirit were aspects of a single person, and no split between mind and body existed (Schneider, Pierson, & Bugental, 2015, pp. 656-657).

The school of psychobiology combined the principles of behaviourism and psychoanalysis and developed the idea that the mind and psychological phenomena are grounded in physiological causes. The representatives of this school recognize the mutual dependence of mind and body and strive to determine the exact relationship between the two by using both psychological and biological variables in their studies (see, for instance, Kemeny (2003) or Stein (2009)).

Finally, the school of cognitive psychology, one of the newest schools of thought, studies the cognitive processes of the brain (such as attention, memory, concept formation, and reasoning) and draws on numerous academic disciplines, including neuroscience, information technology and artificial intelligence, linguistics, philosophy, and anthropology. Its view of the mind-body problem is similar to that of psychobiology: the mind is grounded in biological causes (Hergenhahn & Henley, 2014).

The author of this paper is inclined towards the point of view of the schools of psychobiology and cognitive psychology: the mind is grounded in the body. In fact, mind and body appear to be a single entity, even though it is not (yet) understood how exactly mental processes emerge from the body. Distinguishing between mind and body remains useful only because it denotes different aspects of this single entity.

As we have seen, the mind-body problem was studied as early as Ancient Greece. Ancient Greek philosophers relied on speculative reasoning alone to explain the nature of ψυχή, the soul, which was believed not only to be responsible for the mind but also to make a creature alive. The philosophers of the Middle Ages perceived the issue through the prism of Christian theology. Descartes and the rationalism of the 17th century proclaimed a strict mind-body dualism; still, the notions of consciousness and perception were made central by Locke and Hume.

Psychologists adopted different stances towards the problem. For voluntarists and structuralists, mind and body were two different entities that interacted, but the nature of this interaction did not interest them. Behaviourists and psychoanalysts held almost opposing views, granting primary importance either to the body and behaviour or to the mind and the unconscious, respectively. Humanistic psychologists believed that mind, body, and spirit were a single entity. Finally, psychobiology and cognitive psychology perceive the mind as emerging from the body while holding that the two are essentially one.

Aristotle. (n.d.). On the soul.

Bowler, P. J., & Morus, I. R. (2005). Making modern science: A historical survey. Chicago, IL: University of Chicago Press.

Brearley, M. (2002). Psychoanalysis and the body-mind problem. Ratio, 15(4), 429-443.

Copleston, F. (1993). A history of philosophy. Volume 1: Greece and Rome: From the pre-Socratics to Plotinus. New York, NY: Image Books, Doubleday.

Flage, D. E. (1983). Review of “Hume’s philosophy of mind.” Hume Studies, 9(1), 82-88.

Hankinson, R. J. (1991). Galen’s anatomy of the soul. Phronesis, 36(2), 197-233.

Hergenhahn, B. R., & Henley, T. B. (2014). An introduction to the history of psychology (7th ed.). Belmont, CA: Wadsworth.

Huxley, T. H. (1874). On the hypothesis that animals are automata, and its history.

Kemeny, M. E. (2003). The psychobiology of stress. Current Directions in Psychological Science, 12(4), 124-129.

Lokhorst, G.-J. (2013). Descartes and the pineal gland.

McInerny, R., & O’Callaghan, J. (2014). Saint Thomas Aquinas.

Plato. (n.d.). Phaedo.

Rohlf, M. (2010). Immanuel Kant.

Schneider, K. J., Pierson, J. F., & Bugental, J. F. T. (Eds.). (2015). The handbook of humanistic psychology: Theory, research, and practice (2nd ed.). London, UK: SAGE Publications.

Stein, D. J. (2009). The psychobiology of resilience. CNS Spectrums, 14(2, Suppl. 3), 41-47.

Uzgalis, W. (2012). John Locke.

Wertheimer, M. (2012). A brief history of psychology (5th ed.). New York, NY: Psychology Press, Taylor & Francis Group.

Windholz, G. (1997). Pavlov and the mind-body problem. Integrative Physiological and Behavioral Science, 32(2), 149-159.

IvyPanda. (2020, June 10). The Mind-Body Problem in the History of Psychology. https://ivypanda.com/essays/the-mind-body-problem-in-the-history-of-psychology/