Hypothesis Contrary To Fact

Imagine arguing about a reality that never happened, asserting cause and effect from a non-existent event, and presenting it as fact: that is the intriguing world of this logical fallacy. It is like building castles in the air and then claiming they can actually house people, a captivating but deceptive illusion that misleads by creating a false sense of understanding or control over a situation.

Definition of Hypothesis Contrary To Fact 

Hypothesis Contrary To Fact, also known as the "counterfactual fallacy" or "speculative fallacy," is a logical fallacy in which a hypothetical situation is presented as fact even though it is contrary to what is known or proven to be true. The arguer makes a claim about a past event that did not occur and then asserts a cause-and-effect relationship based on that non-existent event. The reasoning is fallacious because it is impossible to definitively know the outcome of an event that never happened. The fallacy can mislead or manipulate by creating an illusion of understanding or control over a situation, when in fact the hypothetical scenario and its supposed consequences are purely speculative. It is important to note that while hypothetical scenarios can be useful for exploring possibilities, they become fallacious when presented as factual or inevitable outcomes.

In Depth Explanation

The Hypothesis Contrary to Fact, also known as the Counterfactual Fallacy, is a logical error that occurs when an argument is built on a premise that is not true but is presented as if it were. The fallacy involves claiming what would have happened in the past if a certain event had or hadn't occurred, even though there is no way to verify the claim, because it rests on a hypothetical situation rather than a factual one.

A simple scenario illustrates this. Suppose you lose a game of chess and then say, "If I had moved my queen instead of my pawn, I would have won the game." This statement is a hypothesis contrary to fact. You are making a claim about an alternate reality that never happened, and there is no way to prove your claim true or false because we cannot go back in time to see what would have followed from a different move.

The logical structure of this fallacy typically involves two statements: one that sets up a hypothetical situation ("If I had moved my queen...") and one that makes a claim about what would have happened in that situation ("...I would have won the game"). The problem is that the first statement is not true (you didn't move your queen), so any claim based on it is inherently flawed.

This fallacy can be particularly misleading in abstract reasoning because it often sounds plausible: it is easy to imagine how things might have turned out differently if we had made different choices. However, this kind of reasoning is purely speculative and does not provide a solid basis for an argument. It can also damage rational discourse, because it can be used to deflect responsibility, justify poor decisions, or manipulate others. For example, a person might argue that they would have succeeded if not for some external factor, thereby shifting the blame for their failure onto something beyond their control. Alternatively, a person might use the fallacy to convince others to take a certain course of action based on what they claim would have happened in a hypothetical situation.

In conclusion, while it is natural to speculate about what might have been, it is important to recognize that these speculations are not facts and should not be treated as such in logical arguments. The Hypothesis Contrary to Fact can lead us astray in our thinking and decision-making, so it is crucial to be aware of it and to challenge it when we encounter it.

Real World Examples

1. Sports Scenario: Imagine a basketball fan saying, "If Michael Jordan had not retired in 1993, the Chicago Bulls would have won eight consecutive NBA championships instead of six." This statement is an example of a hypothesis contrary to fact. It assumes a hypothetical scenario where Jordan didn't retire and then predicts an outcome based on that assumption. However, there's no way to prove this hypothesis because it's impossible to know how the Bulls would have performed had Jordan not retired.

2. Historical Event: A common example is the assertion, "If the United States had not entered World War II, the Allies would have lost." This is a hypothesis contrary to fact because it's based on a hypothetical scenario that didn't occur. While it's possible to speculate, there's no way to definitively know what would have happened had the U.S. not entered the war.

3. Everyday Scenario: Suppose a student who failed an exam says, "If I had just studied one more hour, I would have passed the test." This is an example of a hypothesis contrary to fact. The student is assuming that an extra hour of study would have made the difference between passing and failing, but there's no way to prove this. It's possible that the student might still have failed even with an additional hour of study, or they might have passed even without it. This statement is based on a hypothetical scenario, not on what actually happened.

Countermeasures

Addressing the logical fallacy of Hypothesis Contrary To Fact can be achieved through a few clear and concise steps.

Firstly, it's important to encourage critical thinking. This involves questioning the basis of the hypothesis and examining the evidence that supports it. If the hypothesis is based on an event or circumstance that did not actually occur, it's crucial to point this out and discuss the implications of this.

Secondly, promoting evidence-based reasoning is key. This means focusing on what we know to be true and what can be proven, rather than what might have been. If a hypothesis is based on a counterfactual, it's essential to redirect the conversation towards the facts at hand.

Thirdly, fostering open-mindedness can help counteract this fallacy. This involves being open to alternative hypotheses and not being wedded to a particular outcome. It's important to be willing to change one's mind in the face of new evidence.

Lastly, it's beneficial to cultivate a culture of intellectual humility. This means acknowledging the limits of our knowledge and being open to the possibility that we might be wrong. If a hypothesis is based on a counterfactual, it's important to acknowledge this and be willing to revise our views accordingly.

In conclusion, countering the Hypothesis Contrary To Fact fallacy involves promoting critical thinking, evidence-based reasoning, open-mindedness, and intellectual humility. By fostering these qualities, we can help ensure that our hypotheses are grounded in fact, rather than in what might have been.

Thought Provoking Questions

1. Can you identify a time when you made a claim about a past event that didn't occur and asserted a cause and effect relationship based on that non-existent event? How did this impact your understanding or control over the situation?

2. Have you ever presented a hypothetical scenario as a factual or inevitable outcome? How did this affect your decision-making process and the decisions of those around you?

3. Can you recall a situation where you were misled by a 'Hypothesis Contrary To Fact' fallacy? How did this influence your perception of the situation and the actions you took?

4. How do you differentiate between useful hypothetical scenarios for exploring possibilities and those that are fallacious because they are presented as factual or inevitable outcomes? How has this skill affected your critical thinking and decision-making abilities?

Counterfactuals

Modal discourse concerns alternative ways things can be, e.g., what might be true, what isn’t true but could have been, what should be done. This entry focuses on counterfactual modality, which concerns what is not, but could or would have been. What if Martin Luther King had died when he was stabbed in 1958 (Byrne 2005: 1)? What if the Americas had never been colonized? What if I were to put that box over here and this one over there? These modes of thought and speech have been the subject of extensive study in philosophy, linguistics, psychology, artificial intelligence, history, and many other allied fields. These diverse investigations are united by the fact that counterfactual modality crops up at the center of foundational questions in these fields.

In philosophy, counterfactual modality has given rise to difficult semantic, epistemological, and metaphysical questions:

  • Semantic    How do we communicate and reason about possibilities which are remote from the way things actually are?
  • Epistemic    How can our experience in the actual world justify thought and talk about remote possibilities?
  • Metaphysical    Do these remote possibilities exist independently from the actual world, or are they grounded in things that actually exist?

These questions have attracted significant attention in recent decades, revealing a wealth of puzzles and insights. While other entries address the epistemic question (the epistemology of modality) and the metaphysical questions (possible worlds and the possibilism-actualism debate), this entry focuses on the semantic question. It will aim to refine this question, explain its central role in certain philosophical debates, and outline the main semantic analyses of counterfactuals.

Section 1 begins with a working definition of counterfactual conditionals (§1.1), and then surveys how counterfactuals feature in theories of agency, mental representation, and rationality (§1.2), and how they are used in metaphysical analysis and scientific explanation (§1.3). Section 1.4 then details several ways in which the logic and truth-conditions of counterfactuals are puzzling. This sets the stage for sections 2 and 3, which survey semantic analyses of counterfactuals that attempt to explain this puzzling behavior.

Section 2 focuses on two related analyses that were primarily developed to study the logic of counterfactuals: strict conditional analyses and similarity analyses. These analyses were not originally concerned with saying what the truth-conditions of particular counterfactuals are. Attempts to extend them to that domain, however, have attracted intense criticism. Section 3 surveys more recent analyses that offer more explicit models of when counterfactuals are true. These analyses include premise semantics ( §3.1 ), conditional probability analyses ( §3.2 ) and structural equations/causal models ( §3.3 ). They are more closely connected to work on counterfactuals in psychology, artificial intelligence, and the philosophy of science.

Sections 2 and 3 of this entry employ some basic tools from set theory and logical semantics. But these sections also provide intuitive characterizations alongside formal definitions, so familiarity with these tools is not a prerequisite. Readers interested in gaining more familiarity with these tools will find the entry basic set theory, as well as Gamut (1991) and Sider (2010), useful.

1. Counterfactuals and Philosophy

This section begins with some terminological issues (§1.1). It then provides two broad surveys of research that places counterfactuals at the center of key philosophical issues. Section 1.2 covers the role of counterfactuals in theories of rational agency, mental representation, and knowledge. Section 1.3 focuses on the central role of counterfactuals in metaphysics and the philosophy of science. Section 1.4 will then bring a bit of drama to the narrative by explaining how counterfactuals are deeply puzzling from the perspective of classical and modal logics alike.

1.1 What are Counterfactuals?

In philosophy and related fields, counterfactuals are taken to be sentences like:

This entry will follow this widely used terminology to avoid confusion. However, this usage also promotes a confusion worth dispelling. Counterfactuals are not really conditionals with contrary-to-fact antecedents. For example, (2) can be used as part of an argument that the antecedent is true (Anderson 1951):

On these grounds, it might be better to speak instead of subjunctive conditionals , and reserve the term counterfactual for subjunctive conditionals whose antecedent is assumed to be false in the discourse. [ 1 ] While slightly more enlightened, this use of the term does not match the use of counterfactuals in the sprawling philosophical and interdisciplinary literature surveyed here, and has its own drawbacks that will be discussed shortly. This entry will use counterfactual conditional and subjunctive conditional interchangeably, hoping to now have dispelled the suggestion that all counterfactuals, in that sense, have contrary-to-fact antecedents.

The terminology of indicative and subjunctive conditionals is also vexed, but it aims to get at a basic contrast between two different forms of conditionals that can differ in truth-value. (3) and (4) can differ in truth-value while holding fixed the world in which they are being evaluated. [2]

It is easy to imagine a world where (3) is true, and (4) false. Consider a world like ours where Kennedy was assassinated. Further suppose Oswald didn’t do it, but some lone fanatic did for deeply idiosyncratic reasons. Then (3) is true and (4) false. Another aspect of the contrast between indicative and subjunctive conditionals is illustrated in (5) and (6) .

Indicatives like (5) are infelicitous when their antecedent has been denied, unlike subjunctives like (6) and (7) (Stalnaker 1975; Veltman 1986).

The indicative and subjunctive conditionals above differ from each other only in particular details of their linguistic form. It is therefore plausible to explain their contrasting semantic behavior in terms of the semantics of those linguistic differences. Indicatives, like (3) and (5) , feature verbs in the simple past tense form, and no modal auxiliary in the consequent. Subjunctives, like (4) and (6) , feature verbs in the past perfect (or “pluperfect”) with a modal would in the consequent. Something in the neighborhood of these linguistic and semantic differences constitutes the distinction between indicative and subjunctive conditionals —summarized in Figure 1 . [ 3 ]

Figure 1: Rough Guide to Indicative and Subjunctive Conditionals

As with most neighborhoods, there are heated debates about the exact boundaries and the names—especially when future-oriented conditionals are included. These debates are surveyed in the supplement Indicative and Subjunctive Conditionals . The main entry will rely only on the agreed-upon paradigm examples like (3) and (4) . The labels indicative and subjunctive are also flawed since these two kinds of conditionals are not really distinguished on the basis of whether they have indicative or subjunctive mood in the antecedent or consequent. [ 4 ] But the terminology is sufficiently entrenched to permit this distortion of linguistic reality.

Much recent work has been devoted to explaining how the semantic differences between indicative and subjunctive conditionals can be derived from their linguistic differences—rather than treating them as semantically unrelated. Much of this work has been done in light of Kratzer’s ( 1986, 2012 ) general approach to modality according to which all conditionals are treated as two-place modal operators. This approach is also discussed in the supplement Indicative and Subjunctive Conditionals . [ 5 ] This entry will focus on the basic logic and truth-conditions of subjunctive conditionals as a whole, and will use the following notation for them (following Stalnaker 1968 ). [ 6 ]

  • Subjunctive Conditionals (Notation)  \(\phi>\psi\) symbolizes if it had been the case that \(\phi\) then it would have been the case that \(\psi\)

This project and notation have an important limitation that should be highlighted: they combine the meaning of the modal would and if…then… into a single connective “\(>\)”. This makes it difficult to adequately represent subjunctive conditionals like:

Conditionals like (8a) have figured in debates about the semantics of counterfactuals and have been modeled either as a related connective (D. Lewis 1973b: §1.5) or as a normal would-subjunctive conditional embedded under might (Stalnaker 1980, 1984: Ch.7). But the more complex examples (8b)–(8d) highlight the need for a more refined compositional analysis, like those surveyed in Indicative and Subjunctive Conditionals. So, while this notation will be used in §1.4 and throughout §2 and §3, it should be regarded as an analytic convenience rather than a defensible assumption.

1.2 Agency, Mind, and Rationality

Counterfactuals have played prominent and interconnected roles in theories of rational agency. They have figured prominently in views of what agency and free will amount to, and played important roles in particular theories of mental representation, rational decision making, and knowledge. This section will outline these uses of counterfactuals and begin to paint a broader picture of how counterfactuals connect to central philosophical questions.

A defining feature of agents is that they make choices. Suppose a citizen votes, and in doing so chooses to vote for X rather than Y . It is hard to see how this act can be a choice without a corresponding counterfactual being true:

The idea that choice entails the ability to do otherwise has been taken by many philosophers to underwrite our practice of holding agents responsible for their choices. But understanding the precise meaning of the counterfactual could have claim in (9) requires navigating the classic problem of free will: if we live in a universe where the current state of the universe is determined (or near enough) by the prior state of the universe and the physical laws, then it seems like every action of every agent, including their “choices”, is predetermined. So interpreting this intuitively plausible counterfactual (9) leads quite quickly to a deep philosophical dilemma. One can maintain, with some Incompatibilists, that (9) is a false claim about what’s physically possible, and revisit the understanding of agency, choice, and responsibility above—the entry incompatibilist theories of free will explores this further. [7] Alternatively, one can maintain that (9) is a true claim about some non-physical sense of possibility, and explain how that is appropriate to our understanding of choice and responsibility—the entry compatibilism explores this further. It is wrong to construe debates about free will as just debates about the meaning of counterfactuals. But the semantics of counterfactuals can have a substantive impact on delimiting the space of possible solutions, and perhaps even deciding between them. The same is true for research on counterfactual thinking in psychology.

Experiments in social psychology suggest that belief in free will is linked to increased counterfactual thinking (Alquist et al. 2015) . Further, they have shown that counterfactually reflecting on past events and choices is one significant way humans imbue life experiences with meaning and create a sense of self (Galinsky et al. 2005; Heintzelman et al. 2013; Kray et al. 2010) . Incompatibilists might be able to cite this result as an explanation for why so many people believe they have free will. It is a specific form of wishful thinking: it is interwoven with the practices of counterfactual reflection that give our lives meaning. Seto et al. (2015) support this idea by showing that variation in subjects’ belief in free will predicts how much meaning they derive from relevant instances of counterfactual reflection. This might even be used as part of a pragmatic argument for believing in free will: roughly, belief in free will is so practically important, and our knowledge of the world so incomplete, that it is rational to believe that it exists. [ 8 ]

Counterfactual reflection is not just used for the “sentimental” purposes discussed above, but as part of what Byrne (2005) calls rational imagination . This capacity is implicated in many philosophical definitions of rational agency. According to the standard model, agency involves intentional action—see entries agency and action . While choices are intentional actions, intentional actions are a more general class of actions which, on most views, are in part caused by intentions—see entry intention . One prominent understanding of intentions is that they are prospective (forward looking) mental states that play a crucial role in planning actions. Byrne (2005, 2016: 138) details psychological evidence showing that counterfactual thinking is central to forming rational intentions. People use counterfactual thinking after particular events to formulate plans that will improve the outcome of their actions in related scenarios. Examples include aviation pilots thinking after a near-accident “if I had understood the controller’s words accurately, I wouldn’t have initiated the inappropriate landing attempt”, and blackjack players thinking “If I’d gotten the 2, I would have beaten the dealer”. People who reason in this way show more persistence and improved performance in related tasks, while those who dwell on how things could have been worse, or do not counterfactually reflect at all, show less persistence and no improvement in performance. Finally, human rationality can become disordered when counterfactual thinking goes astray, e.g., in depression, anxiety, and schizophrenia (Byrne 2016: 140–143) .

This psychological research shows that rational human agents do learn from the past and plan for the future by engaging in counterfactual thinking. Many researchers in artificial intelligence have voiced similar ideas (Ginsberg 1985; Pearl 1995; Costello & McCarthy 1999). But this view is distinct from a stronger philosophical claim: that the nature of rational agency consists, in part, in the ability to perform counterfactual thinking. Some versions of causal decision theory make precisely this claim, and do so to capture similar patterns of rational behavior. Newcomb’s Problem (Nozick 1969) consists of a decision problem which challenges the standard way of articulating the idea that rational agents maximize expected utility, and, according to some philosophers (Stalnaker 1972 [1980]; Gibbard & Harper 1978), shows that causal or counterfactual reasoning must be included in rational decision procedures—see the entry causal decision theory for further details. In a similar vein, work on belief revision theory explores how a rational agent should revise their beliefs when they are inconsistent with something they have just learned—much like a counterfactual antecedent demands—and uses structures that formally parallel those used in the semantics of counterfactuals (Harper 1975; Gärdenfors 1978, 1982; Levi 1988). See formal representations of belief for further discussion of this literature.

The idea that counterfactual reasoning is central to rational agency has surfaced in another way in cognitive science and artificial intelligence, where encoding counterfactual-supporting relationships has emerged as a major theory of mental representation (Chater et al. 2010) . These disciplines also study how states of mind like belief, desire, and intention explain rational agency. But they are not satisfied with just showing that certain states of mind can explain certain choices and actions. They aim to explain how those particular states of mind lead to those choices and actions. They do so by characterizing those states of mind in terms of representations, and formulating particular algorithms for using those representations to learn, make choices and perform actions. [ 9 ] Many recent advances in cognitive science and artificial intelligence share a starting point with Bayesian epistemology: agents must learn and decide what to do despite being uncertain what exactly the world is like, and these processes can be modeled in the probability calculus. On a simple Bayesian approach, an agent represents the world with a probability distribution over binary facts or variables that represent what the world is like. But even for very simple domains the probability calculus does not provide computationally tractable representations and algorithms for implementing Bayesian intelligence. The tools of Bayesian networks , structural equations and causal models , developed by Spirtes, Glymour, and Scheines (1993, 2000) and Pearl (2000, 2009) address this limitation, and also afford simple algorithms for causal and counterfactual reasoning, among other cognitive processes. This framework represents an agent’s knowledge in a way that puts counterfactuals and causal connections at the center, and the tools it provides have been influential beyond cognitive science and AI. It has also been applied to topics covered later in this entry: the semantic analyses of counterfactuals ( §3.2 ) and metaphysical dependence, causation and scientific explanation ( §1.3 ). For this reason, it will be useful to describe its basics now, though still focusing on its applications to mental representation. What follows is a simplified version of the accessible introduction in Sloman (2005: Ch.4) . For a more thorough introduction, see Pearl (2009: Ch.1) .

In a Bayesian framework, probabilities are real numbers between 0 and 1 assigned to propositional variables A , B , C ,…. These probabilities reflect an agent’s subjective credence, e.g., \(P(A)=0.6\) reflects that they think A is slightly more likely than not to be true. [ 10 ] At the heart of Bayesian Networks are the concepts of conditional probability and two variables being probabilistically independent . \(P(B \mid A)\) is the credence in B conditional on A being true and is defined as follows:

  • Definition 1 (Conditional Probability)    \(\displaystyle P(B\mid A)\dequal \frac{P(A\land B)}{P(A)}\)

Conditional probabilities allow one to say when B is probabilistically independent of A : when an agent’s credence in B is the same as their credence in B conditional on A and conditional on \(\neg A\).

  • Definition 2 (Probabilistic Independence)    B is probabilistically independent of A just in case \(P(B)=P(B\mid A)=P(B\mid\neg A)\).
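
To see Definitions 1 and 2 at work, here is a minimal Python sketch. It is not from the entry itself: the joint distribution over two binary variables is invented purely for illustration.

    # Toy joint distribution over two binary variables A and B (invented numbers).
    # Keys are (a, b) value pairs; the probabilities sum to 1.
    joint = {
        (True, True): 0.3,
        (True, False): 0.3,
        (False, True): 0.2,
        (False, False): 0.2,
    }

    def p(event):
        """Probability of the set of outcomes satisfying `event`."""
        return sum(pr for outcome, pr in joint.items() if event(outcome))

    def p_cond(b_event, a_event):
        """Definition 1: P(B | A) = P(A and B) / P(A)."""
        return p(lambda o: a_event(o) and b_event(o)) / p(a_event)

    A = lambda o: o[0]
    B = lambda o: o[1]
    not_A = lambda o: not o[0]

    # Definition 2: B is independent of A iff P(B) = P(B | A) = P(B | not-A).
    tol = 1e-12
    independent = (abs(p(B) - p_cond(B, A)) < tol
                   and abs(p(B) - p_cond(B, not_A)) < tol)
    print(p(B), p_cond(B, A), p_cond(B, not_A), independent)
    # With these invented numbers all three probabilities are 0.5, so True.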

Bayesian networks represent relations of probabilistic dependence. For example, an agent’s knowledge about a system containing eight variables could be represented by the directed acyclic graph and system of structural equations between those variables in Figure 2 .

Figure 2: Bayesian Network and Structural Equations. [An extended description of figure 2 is in the supplement.]

While the arrows mark relations of probabilistic dependence, the equations characterize the nature of the dependence, e.g., “\(H\dequal F\lor G\)” means that the value of H is determined by the value of \(F\lor G\) (but not vice versa). [ 11 ] This significantly reduces the number of values that must be stored. [ 12 ] But it also stores information that is useful to agents. It facilitates counterfactual reasoning—e.g., if C had been true then G would have been true—reasoning about actions—e.g., if we do A then C will be true—and explanatory reasoning—e.g., H is true in part because C is true (Pearl 2002) .
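
A minimal Python sketch can show how structural equations support this kind of reasoning. Only the equation \(H\dequal F\lor G\) appears in the entry; the equation for G and all the particular values below are invented assumptions, made for the sake of a small runnable example.

    # Structural equations: each endogenous variable is a function of others.
    equations = {
        "G": lambda v: v["C"],            # assumed for illustration: G := C
        "H": lambda v: v["F"] or v["G"],  # from the entry: H := F or G
    }
    ORDER = ["G", "H"]  # a dependency (topological) order for evaluation

    def solve(exogenous, do=None):
        """Evaluate the model; `do` overrides equations (an intervention)."""
        v = dict(exogenous)
        v.update(do or {})
        for var in ORDER:
            if do is None or var not in do:
                v[var] = equations[var](v)
        return v

    print(solve({"C": False, "F": False}))
    # {'C': False, 'F': False, 'G': False, 'H': False}

    # "If C had been true then G (and hence H) would have been true":
    print(solve({"C": False, "F": False}, do={"C": True}))
    # {'C': True, 'F': False, 'G': True, 'H': True}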

The usefulness of Bayesian networks is evidenced by their many applications in psychology (e.g., Glymour 2001; Sloman 2005) and artificial intelligence (e.g., Pearl 2002, 2009). They are among the key representations employed in autonomous vehicles (Thrun et al. 2006; Parisien & Thagard 2008), and have been applied to a wide range of cognitive phenomena:

  • Causal learning and reasoning in AI (Pearl 2009: Chs 1–4) and humans (Glymour 2001; Gopnik et al. 2004; Sloman 2005: Chs.6–12)
  • Counterfactual reasoning in AI (Pearl 2009: Ch.7) and humans (Sloman & Lagnado 2005; Sloman 2005; Rips 2010; Lucas & Kemp 2015)
  • Conceptual categorization and action planning (Sloman 2005: Chs.9,10)
  • Learning and cognitive development (Gopnik & Tenenbaum 2007)

As Sloman (2005: 177) highlights, this form of representation fits well with a guiding idea of embodied cognition: mental representations in biological agents are constrained by the fact that their primary function is to facilitate successful action despite uncertain information and bounded computational resources. Bayesian networks have also been claimed to address a deep and central issue in artificial intelligence called the frame problem (e.g., Glymour 2001: Ch. 3 ). For the purposes of this entry, it is striking how fruitful this approach to mental representation has been, since counterfactual dependence is at its core.

Counterfactual dependence has also featured prominently in theories of mental content, which explain how a mental representation like the concept dog comes to represent dogs. Informational theories take their inspiration from natural representations like tree rings, which represent, in some sense, how old the tree is (Dretske 2011) . While some accounts in this family are called “causal theories of mental content”, it is somewhat limiting to formulate the view as: X represents Y just in case Y causes X . Even for the tree rings, it is metaphysically controversial to claim that the tree rings are caused by the age of the tree, rather than thinking they have a common cause or are merely causally related via a number of laws and factors, e.g., rainfall, seasons, growth periods. For this and other reasons, Dretske (1981, 1988, 2002) formulates the relationship in terms of conditional probabilities:

  • Definition 3 (Dretske’s Probabilistic Theory of Information)    State s carries the information that a is F , given background conditions g , just in case \(P(a\text{ is }F\mid s, g)=1\).

On this view, the state of the tree rings carries the information that the tree is a certain age, since given the background conditions in our world the relevant conditional probability is 1. As argued by Loewer (1983: 76) and Cohen and Meskin (2006), this formulation raises problems about how to interpret the probabilities involved, problems that are avoided by a counterfactual formulation:

  • Definition 4 (Loewer’s Counterfactual Theory of Information)    State s carries the information that a is F , given background conditions g , just in case, given g , if s were to obtain, a would have to have been F .

Even this theory of information requires several elaborations to furnish a plausible account of mental content. For example, Dretske (1988, 2002) holds that a mental representation r represents that a is F just in case r has the function of indicating that a is F . The teleological (“function”) component is added to explain how a deer on a dark night can cause tokens of the concept dog without being part of the information carried by thoughts that token dog . Fodor (1987, 1990) pursues another, non-teleological solution, the asymmetric dependence theory. Counterfactuals feature here in another way:

  • “ a being F causes r ” is a law.
  • For any other cause c of r , c would not have caused r if a being F had not caused r . ( c ’s causing r asymmetrically depends on a being F causing r .)

This approach also appeals to laws, which are another key philosophical concept connected to counterfactuals—see §1.3 below.

Counterfactuals are not just used to analyze how a given mental state represents reality, but also when a mental state counts as knowledge. Numerous counterexamples, like Gettier cases, make the identification of knowledge with justified true belief problematic—for further details see the analysis of knowledge . But some build on this analysis by proposing further conditions to address these counterexamples. Two counterfactual conditions are prominent in this literature:

  • Sensitivity    If p were false, S would not believe that p .
  • Safety    If S were to believe that p , p would not be false.

Both concepts are ways of articulating the idea that S ’s beliefs must be formed in a way that is responsive to p being true. The semantics of counterfactuals have interacted with this project in a number of ways: in establishing their non-equivalence, refining them, and adjudicating putative counterexamples.

1.3 Metaphysical Analysis and Scientific Explanation

Counterfactuals have played an equally central role in metaphysics and the philosophy of science. They have featured in metaphysical theories of causation, supervenience, grounding, ontological dependence, and dispositions. They have also featured in issues at the intersection of metaphysics and philosophy of science like laws of nature and scientific explanation. This section will briefly overview these applications, largely linking to related entries that cover these applications in more depth. But this overview is more than just a list of how counterfactuals have been applied in these areas. It helps identify a cluster of inter-related concepts (and/or properties) that are fruitfully studied together rather than in isolation.

Many philosophers have proposed to analyze causal concepts in terms of counterfactuals (e.g., D. Lewis 1973a, Mackie 1974). The basic idea is that (10) can be understood in terms of something like (11) (see counterfactual theories of causation for further discussion).

This basic idea has been elaborated and developed in several ways. D. Lewis (1973a, c) refines it using his similarity semantics for counterfactuals—see §2.3. The resulting counterfactual analysis of causation faces a number of challenges—see counterfactual theories of causation for discussion and references. But this has simply inspired a new wave of counterfactual analyses that use different tools.

Hitchcock (2001, 2007) and Woodward (2003: Ch.5) develop counterfactual analyses of causation using the tools of Bayesian networks (or “causal models”) and structural equations described back in §1.2.3 . The rough idea of the analysis is as follows. Given a graph like the one in Figure 2 , X can be said to be a cause of Y just in case there is a path from X to Y and changing just the value of X changes the value of Y . According to Hitchcock (2001) and Woodward (2002, 2003) , this analysis of causation counts as a counterfactual analysis because the basic structural equations, e.g., \(C\dequal A\land B\), are best understood as primitive counterfactual claims, e.g., if A and B had been true, C would have been true. While not all theories of causation that employ structural equations are counterfactual theories, structural equations are central to many of the contemporary counterfactual theories of causation. [ 13 ] See counterfactual theories of causation for further developments and critical reactions to this account of causation.
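
That rough idea can be sketched in code. The snippet below is a toy first pass at the interventionist test, not Hitchcock's or Woodward's actual definitions, and it reuses the same invented two-equation model (G := C assumed, H := F ∨ G from the entry) as the sketch in §1.2.3.

    # Self-contained toy model: G := C (assumed), H := F or G (from the entry).
    def solve(exog, do=None):
        v = dict(exog)
        v.update(do or {})
        if do is None or "G" not in do:
            v["G"] = v["C"]
        if do is None or "H" not in do:
            v["H"] = v["F"] or v["G"]
        return v

    def is_cause(x, y, exog):
        """Rough interventionist test: does intervening on x change y?"""
        return solve(exog, do={x: True})[y] != solve(exog, do={x: False})[y]

    print(is_cause("C", "H", {"C": False, "F": False}))  # True: C changes H via G
    print(is_cause("C", "H", {"C": False, "F": True}))   # False: H is true either way

The second call hints at why the published analyses are more subtle than this crude test: when F is true, H no longer counterfactually depends on C, which is why those accounts consider, for example, holding off-path variables fixed at various values.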

Recently, Schaffer (2016) and Wilson (2018) have also used structural equations to articulate a counterfactual theory of metaphysical grounding. [14] Metaphysical grounding is a concept widely employed in metaphysics throughout its history, but has been the focus of intense attention only recently—see entry metaphysical grounding for further details. As Schaffer (2016) puts it, the fact that Koko the gorilla lives in California is not a fundamental fact because it is grounded in more basic facts about the physical world, perhaps facts about spacetime and certain physical fields. Statements articulating these grounding facts constitute distinct metaphysical explanations. So conceived, metaphysical grounding is among the most central concepts in metaphysics. The key proposals in Schaffer (2016) and Wilson (2018) are to use structural equations to model grounding relations, and not just causal relations, and in doing so capture parallels between causation and grounding. Indeed, they define grounding in terms of structural equations in the same way as the authors above defined causation in terms of structural equations. The key difference is that the equations articulate what grounds what. While this approach to grounding has its critics (e.g., Koslicki 2016), it is worth noting here since it places counterfactuals at the center of metaphysical explanations. [15]

Counterfactuals have been implicated in other key metaphysical debates. Work on dispositions is a prominent example. A glass’s fragility is a curious property: the glass has it in virtue of possibly shattering in certain conditions, even if those conditions are never manifested in the actual world, unlike say, the glass’s shape. This dispositional property is quite naturally understood in terms of a counterfactual claim:

Early analyses of this form were pursued by Ryle (1949) , Quine (1960) , and Goodman (1955) , and have remained a major position in the literature on dispositions. See dispositions for further discussion and references.

It is not just metaphysical explanation where counterfactuals have been central. They also feature prominently in accounts of scientific explanation and laws of nature. Strict empiricists have attempted to characterize scientific explanation without reliance on counterfactuals, despite the fact that they tend to creep in—for further background on this see scientific explanation . Scientific explanations appeal to laws of nature, and laws of nature are difficult to separate from counterfactuals. Laws of nature are crucially different from accidental generalizations, but how? One prominent idea is that they “support counterfactuals”. As Chisholm (1955: 97) observed, the counterfactual (14) follows from the corresponding law (13) but the counterfactual (16) does not follow from the corresponding accidental generalization (15) .

A number of prominent views have emerged from pursuing this connection. Woodward (2003) argues that the key feature of an explanation is that it answers what-if-things-had-been-different questions, and integrates this proposal with a structural equations approach to causation and counterfactuals. [ 16 ] Lange (1999, 2000, 2009) proposes an anti-reductionist account of laws according to which they are identified by their invariance under certain counterfactuals. Maudlin (2007: Ch.1) also proposes an anti-reductionist account of laws, but instead uses laws to define the truth-conditions of counterfactuals relevant to physical explanations. For more on these views see laws of nature .

1.4 Semantic Puzzles

It should now be clear that a wide variety of central philosophical topics rely crucially on counterfactuals. This highlights the need to understand their semantics: how can we systematically specify what the world must be like if a given counterfactual is true and capture patterns of valid inference involving them? It turns out to be rather difficult to answer this question using the tools of classical logic, or even modal logic. This section will explain why.

Logical semantics (Frege 1893; Tarski 1936; Carnap 1948) provided many useful analyses of English connectives like and and not using Boolean truth-functional connectives like \(\land\) and \(\neg\). Unfortunately, such an analysis is not possible for counterfactuals. In truth-functional semantics, the truth of a complex sentence is determined by the truth of its parts because a connective’s meaning is modeled as a truth-function—a function from one or more truth-values to another. Many counterfactuals have false antecedents and consequents, but some are true and others false. (17a) is false—given Joplin’s critiques of consumerism—and (17b) is true.

It may be useful to state the issue a bit more precisely.

In truth-functional semantics, the truth-value (True/False: 1/0) of a complex sentence is determined by the truth-values of its parts and particular truth-function expressed by the connective. This is illustrated by the truth-tables for negation \(\neg\), conjunction \(\land\), and the material conditional \(\supset\) in Figure 3 .

Figure 3: Negation (\(\neg\)), Conjunction (\(\land\)), Material Conditional (\(\supset\))

Truth-functional logic is inadequate for counterfactuals not just because the material conditional \(\supset\) does not capture the fact that some counterfactuals with false antecedents like (17a) are false. It is inadequate because there is, by definition, no truth-functional connective whatsoever that simultaneously combines two false sentences to make a true one like (17b) and combines two false ones to make a false one like (17a) . In contemporary philosophy, this is overwhelmingly seen as a failing of classical logic. But there was a time at which it fueled skepticism about whether counterfactuals really make true or false claims about the world at all. Quine ( 1960: §46, 1982: Ch.3 ) voices this skepticism and supports it by highlighting puzzling pairs like (18) and (19) :

Quine (1982: Ch.3) suggests that no state of the world could settle whether (19a) or (19b) is true. Similarly he contends that it is not the world, but sympathetically discerning the speaker’s imagination and purpose in speaking that matters for the truth of (18b) versus (18a) (Quine 1960: §46) . Rather than promoting skepticism about a semantic analysis of counterfactuals, Lewis (1973b: 67) took these examples as evidence that their truth-conditions are context-sensitive : the possibilities that are considered when evaluating the antecedent are constrained by the context in which the counterfactual is asserted, including the intentions and practical ends of the speaker. All contemporary accounts of counterfactuals incorporate some version of this idea. [ 17 ]
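
The definitional point about truth-functionality can even be checked mechanically. The purely illustrative Python sketch below enumerates all sixteen binary truth-functions and confirms that none maps the input pair (False, False) both to True, as (17b) would require, and to False, as (17a) would require.

    from itertools import product

    # Each binary truth-function is fixed by its outputs on the four input
    # pairs (T,T), (T,F), (F,T), (F,F); there are 2**4 = 16 of them.
    pairs = list(product([True, False], repeat=2))
    truth_functions = [dict(zip(pairs, outs))
                       for outs in product([True, False], repeat=4)]

    # A truth-functional "if" would have to map (False, False) to True for
    # (17b) and to False for (17a) simultaneously; no function can do both.
    viable = [f for f in truth_functions
              if f[(False, False)] == True and f[(False, False)] == False]
    print(len(viable))  # 0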

Perhaps the most influential semantic puzzle about counterfactuals was highlighted by Goodman (1947) , who noticed that adding more information to the antecedent can actually turn a true counterfactual into a false one. For example, (20a) could be true, while (20b) is false.

Lewis ( 1973c: 419; 1973b: 10 ) dramatized the problem by considering sequences such as (21) , where adding more information to the antecedent repeatedly flips the truth-value of the counterfactual.

The English discourse (21) is clearly consistent: it is nothing like saying I shirked my duty and I did not shirk my duty . This property of counterfactual antecedents is known by a technical name, non-monotonicity , and is one of the features all contemporary accounts are designed to capture. As will be discussed in §2.2 , even modal logic does not have the resources to capture semantically non-monotonic operators.

Goodman (1947) posed another influential problem. Examples (20a) and (20b) show that the truth-conditions of counterfactuals depend on assumed background facts like the presence of oxygen. However, a moment’s reflection reveals that specifying all of these background facts is quite difficult. The match must be dry, oxygen must be present, wind must be below a certain threshold, the friction between the striking surface and the match must be sufficient to produce heat, that heat must be sufficient to activate the chemical energy stored in the match head, etc. Further, counterfactuals like (20a) also rely for their truth on physical laws specific to our world, e.g., the conservation of energy. Goodman’s problem is this: it is difficult to adequately specify these background conditions and laws without further appealing to counterfactuals. This is clearest for laws. As discussed in §1.3 , some have aimed to distinguish laws from accidental generalizations by noting that only the former support counterfactuals. But if this is a defining feature of laws, and laws are part of the definition of when a counterfactual is true, circularity becomes a concern. Explicit analyses of laws in terms of counterfactuals, like Lange (2009) , would make an analysis of counterfactuals in terms of laws circular.

The potential circularity for background conditions takes a bit more explanation. Suppose one claims to have specified all of the background conditions relevant to the truth of (20a), as in (22a). Then it is tempting to say that (20a) is true because (22c) follows from (22a), (22b), and the physical laws.

But now suppose there is an agent seeing to it that a fire is not started, who will only strike the match if it is wet. In this case the counterfactual (20a) is intuitively false. However, unless one adds the counterfactual, if the match were struck, it would have to be wet, to the background conditions, (22c) still follows from (22a), (22b), and the physical laws. That would incorrectly predict the counterfactual to be true. In short, it seems that the background conditions must themselves consist of counterfactuals. Any analysis of counterfactuals that captures their sensitivity to background facts must either eliminate these appeals to counterfactuals, or show how this appeal is non-circular, e.g., as part of a recursive, non-reductive analysis.

To summarize, this section has identified three key theses about the semantics of counterfactuals and a central problem:

  • Counterfactuals are not truth-functional.
  • Counterfactuals have context-sensitive truth-conditions.
  • Counterfactual antecedents are interpreted non-monotonically.
  • Goodman’s Problem    The truth-conditions of counterfactuals depend on background facts and laws. It is challenging to specify these facts and laws in general, but particularly difficult to specify them in non-counterfactual terms.

These theses, along with Goodman’s Problem, were once grounds for skepticism about the coherence of counterfactual discourse. But with advances in semantics and pragmatics, they have instead become the central features of counterfactuals that contemporary analyses aim to capture.

2. The Logic of Counterfactuals

This section will survey two semantic analyses of counterfactuals: the strict conditional analysis and the similarity analysis. These conceptually related analyses also have a shared explanatory goal: to capture logically valid inferences involving counterfactuals, while treating them non-truth-functionally, leaving room for their context dependence, and addressing the non-monotonic interpretation of counterfactual antecedents. Crucially, these analyses abstract away from Goodman’s Problem because they are not primarily concerned with the truth-conditions of particular counterfactuals—just as classical logic does not take a stand on which atomic sentences are actually true. Instead, they say only enough about truth-conditions to settle matters of logic, e.g., if \(\phi\) and \(\phi>\psi\) are true, then \(\psi\) is true. Sections 2.5 and 2.6 will revisit questions about the truth-conditions of particular counterfactuals, Goodman’s Problem, and the philosophical projects surveyed in §1.

The following subsections will detail strict conditional and similarity analyses. But it is useful at the outset to consider simplified versions of these two analyses alongside each other. This will clarify their key differences and similarities. Both analyses are also stated in the framework of possible world semantics developed in Kripke (1963) for modal logics. The following subsection provides this background and an overview of the two analyses.

2.1 Introducing Strict and Similarity Analyses

The two key concepts in possible worlds semantics are possible worlds and accessibility spheres (or relations). Intuitively, a possible world w is simply a way the world could be or could have been. Formally, they are treated as primitive points in the set of all possible worlds W. But their crucial role comes in assigning truth-conditions to sentences: a sentence \(\phi\) can only be said to be true given a possible world w, but since w is genuinely possible, it cannot be the case that both \(\phi\) and \(\neg\phi\) are true at w. Accessibility spheres provide additional structure for reasoning about what’s possible: for each world w, \(R(w)\) is the set of worlds accessible from w. [18] This captures the intuitive idea that given a possible world w, a certain range of other worlds \(R(w)\) are possible, in a variety of senses. \(R_1(w)\) might specify what’s nomologically possible in w by including only worlds where w’s natural laws hold, while \(R_2(w)\) specifies what’s metaphysically possible in w.

These tools furnish truth-conditions for a formal language including non-truth-functional necessity (\({{\medsquare}}\)) and possibility (\({{\meddiamond}}\)) operators: [ 19 ]

  • Definition 6 (Kripkean Semantics)    \[ \begin{align} {\llbracket A\rrbracket}^{R}_v & =\set{w\mid v(w,A)=1} & \tag*{1.}\\ {\llbracket\neg\phi\rrbracket}^{R}_v & = W-{\llbracket\phi\rrbracket}^{R}_v & \tag*{2.}\\ {\llbracket\phi\land\psi\rrbracket}^{R}_v & ={\llbracket\phi\rrbracket}^{R}_v\cap{\llbracket\psi\rrbracket}^{R}_v & \tag*{3.}\\ {\llbracket\phi\lor\psi\rrbracket}^{R}_v & ={\llbracket\phi\rrbracket}^{R}_v\cup{\llbracket\psi\rrbracket}^{R}_v & \tag*{4.}\\ {\llbracket\phi\supset\psi\rrbracket}^{R}_v & =(W-{\llbracket\phi\rrbracket}^{R}_v)\cup{\llbracket\psi\rrbracket}^{R}_v & \tag*{5.}\\ {\llbracket\medsquare\phi\rrbracket}^{R}_v & =\set{w\mid R(w)\subseteq{\llbracket\phi\rrbracket}^{R}_v} & \tag*{6.}\\ {\llbracket\meddiamond\phi\rrbracket}^{R}_v & =\set{w\mid R(w)\cap{\llbracket\phi\rrbracket}^{R}_v\neq\emptyset} & \tag*{7.} \end{align} \]

In classical logic, the meaning of \(\phi\) is simply its truth-value. But in modal logic, it is the set of possible worlds where \(\phi\) is true: \({\llbracket}\phi{\rrbracket}\). So \(\phi\) is true in w , relative to v and R , just in case \(w\in{\llbracket}\phi{\rrbracket}^R_v\):

  • Definition 7 (Truth)    \(w,v,R\vDash \phi \iff w\in{\llbracket}\phi{\rrbracket}^R_v\)

Only clauses 6 and 7 rely crucially on this richer notion of meaning. \({{\medsquare}}\phi\) says that in all accessible worlds \(R(w)\), \(\phi\) is true. \({{\meddiamond}}\phi\) says that there are some accessible worlds where \(\phi\) is true. Logical concepts like consequence are also defined in terms of relations between sets of possible worlds. The intersection of the premises must be a subset of the conclusion (i.e., at every world where all of the premises are true, the conclusion is true):

  • Definition 8 (Logical Consequence)    \(\phi_1,{\ldots},\phi_n\vDash\psi \iff \forall R,v{{:\thinspace}}({\llbracket}\phi_1{\rrbracket}^R_v\cap\cdots\cap{\llbracket}\phi_n{\rrbracket}^R_v)\subseteq{\llbracket}\psi{\rrbracket}^R_v\)
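
Definitions 6–8 are concrete enough to prototype. In the Python sketch below, worlds are labels, the valuation maps each atomic sentence to the set of worlds where it is true, and R maps each world to its accessibility sphere; the three-world model is invented for illustration, and each clause of Definition 6 becomes a set operation.

    W = {"w0", "w1", "w2"}
    V = {"p": {"w0", "w1"}, "q": {"w1"}}                        # invented valuation
    R = {"w0": {"w0", "w1"}, "w1": {"w1"}, "w2": {"w0", "w2"}}  # invented spheres

    def atom(a):        return V[a]                          # clause 1
    def neg(p):         return W - p                         # clause 2
    def conj(p, q):     return p & q                         # clause 3
    def disj(p, q):     return p | q                         # clause 4
    def implies(p, q):  return (W - p) | q                   # clause 5: material conditional
    def box(p):         return {w for w in W if R[w] <= p}   # clause 6
    def diamond(p):     return {w for w in W if R[w] & p}    # clause 7

    # Definition 7: phi is true at w iff w is in the set phi denotes.
    print(box(atom("q")), diamond(atom("p")))
    # box(q) = {'w1'}: only from w1 is q true at every accessible world;
    # diamond(p) = all of W: a p-world is accessible from every world
    # (the printed ordering of set elements may vary).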

Given this framework, the strict analysis can be formulated very simply: \(\phi > \psi\) should be analyzed as \({{\medsquare}}(\phi\supset\psi)\). This says that all accessible \(\phi\)-worlds are \(\psi\)-worlds. This analysis can be depicted as in Figure 4 . [ 20 ]

Figure 4: Truth in \(w_0\) relative to R . [An extended description of figure 4 is in the supplement.]

The red circle delimits the worlds accessible from \(w_0\), the x -axis divides \(\phi\) and \(\neg\phi\)-worlds, and the y -axis \(\psi\) and \(\neg\psi\)-worlds. \({{\medsquare}}(\phi\supset\psi)\) says that there are no worlds in the blue shaded region.

It is crucial to highlight that this semantics does not capture the non-monotonic interpretation of counterfactual antecedents. For example, \({\llbracket}\mathsf{A}\land \mathsf{B}{\rrbracket}^R_v\) is a subset of \({\llbracket}\mathsf{A}{\rrbracket}^R_v\), and this means that any time \({{\medsquare}}(\mathsf{A\supset C})\) is true, so is \({{\medsquare}}(\mathsf{(A\land B)\supset C})\). After all, if all \(\mathsf{A}\)-worlds are in the red quadrant of Figure 4, so are all of the \(\mathsf{A\land B}\)-worlds, since the \(\mathsf{A\land B}\)-worlds are just a subset of the \(\mathsf{A}\)-worlds. A crucial point here is that on this semantics the domain of worlds quantified over by a counterfactual is constant across counterfactuals with different antecedents. As will be discussed in §2.2, advocates of strict conditional analyses aim to instead capture the non-monotonic behavior of antecedents pragmatically by incorporating it into a model of their context-sensitivity. The most important difference between strict analyses and similarity analyses is that similarity analyses capture this non-monotonicity semantically.

On the similarity analysis, \(\phi >\psi\) is true in \(w_0\), roughly, just in case all the \(\phi\)-worlds most similar to \(w_0\) are \(\psi\)-worlds. To model this notion of similarity, one needs more than a simple accessibility sphere. One way to capture it is with a nested system of spheres \(\mathcal{R}\) around a possible world \(w_0\) (D. Lewis 1973b: §1.3)—this is just a particular kind of set of accessibility spheres. As one goes out in the system, one gets to less and less similar worlds. This analysis can be depicted as in Figure 5. [21]

Figure 5: Truth in \(w_0\) relative to \(\mathcal{R}\). [An extended description of figure 5 is in the supplement.]

The most similar \(\phi\)-worlds are in the innermost gray region. So, this analysis excludes any worlds from being in the shaded innermost blue region. Comparing Figures 4 and 5, one difference stands out: the similarity analysis does not require that there be no \(\phi\land\neg\psi\)-worlds in any sphere, just in the innermost sphere. For example, world \(w_1\) does not prevent the counterfactual \(\phi >\psi\) from being true: it is not among the \(\phi\)-worlds most similar to \(w_0\). This is the key to semantically capturing the non-monotonic interpretation of antecedents. The truth of \(\mathsf{A > C}\) does not guarantee the truth of \(\mathsf{(A\land B)> C}\) precisely because the most similar \(\mathsf{A}\)-worlds may be in the innermost sphere, while the most similar \(\mathsf{A\land B}\)-worlds may be in an intermediate sphere, and include worlds like \(w_1\) where the consequent is false. In this sense, the domain of worlds quantified over by a similarity-based counterfactual varies across counterfactuals with different antecedents, though it does express a strict conditional over this varying domain. For this reason, D. Lewis (1973b) and many others call the similarity analysis a variably-strict analysis.
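
The variably-strict idea can be prototyped in the same style as the Kripkean sketch above: replace the single sphere with a nested list of spheres and evaluate the antecedent at the innermost sphere containing an antecedent-world. The four-world model below is invented, and the sketch makes the simplifying assumption that there always is an innermost antecedent-admitting sphere; it reproduces the failure of antecedent strengthening just described.

    W = {"w0", "w1", "w2", "w3"}
    V = {"a": {"w1", "w2"}, "b": {"w2"}, "c": {"w1"}}   # invented valuation

    # Nested spheres around w0, from most to least similar (invented).
    spheres = {"w0": [{"w0"}, {"w0", "w1"}, {"w0", "w1", "w2"}, set(W)]}

    def would(antecedent, consequent, w):
        """phi > psi at w: the phi-worlds in the innermost sphere containing
        any phi-world must all be psi-worlds (vacuously true otherwise)."""
        for sphere in spheres[w]:
            if sphere & antecedent:
                return (sphere & antecedent) <= consequent
        return True

    a, b, c = V["a"], V["b"], V["c"]
    print(would(a, c, "w0"))      # True: the most similar a-world, w1, is a c-world
    print(would(a & b, c, "w0"))  # False: the most similar (a and b)-world, w2, is not
    # Antecedent Strengthening fails: a > c is true while (a and b) > c is false.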

Since antecedent monotonicity is the key division between strict and similarity analyses, it is worthwhile being a bit more precise about what it is, and what its associated inference patterns are.

  • Definition 9 (Antecedent Monotonicity)    If \(\phi_1>\psi\) is true at some \(w,R,v\) and \({\llbracket}\phi_2{\rrbracket}^R_v\subseteq{\llbracket}\phi_1{\rrbracket}^R_v\), then \(\phi_2 >\psi\) is true at \(w,R,v\).

The crucial patterns associated with antecedent monotonicity are:

  • Antecedent Strengthening (AS)    \(\phi_1>\psi\vDash (\phi_1\land\phi_2)>\psi\)
  • Simplification of Disjunctive Antecedents (SDA)    \((\phi_1\lor\phi_2)>\psi\vDash (\phi_1>\psi)\land(\phi_2>\psi)\)
  • Transitivity    \(\phi_2>\phi_1,\phi_1 > \psi\vDash \phi_2>\psi\)
  • Contraposition    \(\phi>\psi\,\leftmodels\vDash \neg\psi>\neg\phi\)

AS and SDA clearly follow from antecedent monotonicity. By contrast, Transitivity and a plausible auxiliary assumption entail antecedent monotonicity, [ 22 ] and the same is true for Contraposition. [ 23 ] With these basics in place, it is possible to focus in on each of these analyses in more detail. In doing so, it will become clear that there are important differences even among variants of the similarity analysis and variants of the strict analysis. This entry will focus on what these analyses predict about valid inferences involving counterfactuals.

2.2 Strict Conditional Analyses

The strict conditional analysis has a long history, but its contemporary form was first articulated by Peirce: [ 24 ]

“If A is true then B is true”… is expressed by saying, “In any possible state of things, [ w ], either \([A]\) is not true [in w ], or \([B]\) is true [in w ]”. (Peirce 1896: 33)

C.I. Lewis (1912, 1914) defended the strict conditional analysis of subjunctives and developed an axiomatic system for studying their logic, but offered no semantics. A precise model-theoretic semantics for the strict conditional was first presented in Carnap (1956: Ch. 5). However, that account did not appeal to accessibility relations, and ranged only over logically possible worlds. Since counterfactuals concern more than logical necessity, it was only after Kripke (1963) introduced a semantics for modal logic featuring an accessibility relation that the modern form of the strict analysis was precisely formulated: [ 25 ]

  • \({\llbracket\phi>\psi\rrbracket}^R_v= {\llbracket\medsquare(\phi\supset\psi)\rrbracket}^R_v\)
  • I.e., all \(\phi\)-worlds in \(R(w)\) are \(\psi\)-worlds
  • \( \begin{align*} \llbracket\medsquare(\phi\supset\psi)\rrbracket^{R}_v & =\{w\mid R(w)\subseteq\llbracket\phi\supset\psi\rrbracket^{R}_v\} \\ & =\{w\mid (R(w)\cap\llbracket\phi\rrbracket^{R}_v)\subseteq\llbracket\psi\rrbracket^{R}_v\} \end{align*} \)
  • \(\phi\strictif\psi\mathbin{:=}\medsquare(\phi\supset\psi)\)

Just as the logic of \({{\medsquare}}\) will vary with constraints that can be placed on R , so too will the logic of strict conditionals. [ 26 ] For example, if one does not assume that \(w\in R(w)\) then modus ponens will not hold for the strict conditional: \(\psi\) will not follow from \(\phi\) and \({{\medsquare}}(\phi\supset\psi)\). But even without settling these constraints, some basic logical properties of the analysis can be established. The discussion to follow is by no means exhaustive. [ 27 ] Instead, it will highlight the logical patterns which are central to the debates between competing analyses.
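The basic strict analysis is simple enough to compute directly. The following minimal sketch evaluates \(\medsquare(\phi\supset\psi)\) in a toy Kripke model; the three worlds, the accessibility relation, and the valuation are invented purely for illustration. It also illustrates the failure of modus ponens without reflexivity, and the vacuous truth that drives pattern 3 below.

```python
# A minimal sketch of the basic strict analysis: phi > psi is true at w
# iff every phi-world accessible from w is a psi-world. The model below
# (worlds, R, and the valuation) is an invented illustration.

W = {0, 1, 2}
R = {0: {1, 2}, 1: {1}, 2: {2}}   # note: 0 is not in R(0), so R is not reflexive
V = {'p': {0, 1}, 'q': {1}}       # atomic valuation: worlds where each atom is true

def worlds(prop):
    return V[prop] if isinstance(prop, str) else prop  # allow raw sets of worlds

def strict(w, ante, cons):
    """All ante-worlds in R(w) are cons-worlds."""
    return (R[w] & worlds(ante)) <= worlds(cons)

# Modus ponens can fail without reflexivity: at world 0, p is true and
# p strict-implies q (the only accessible p-world, 1, is a q-world),
# yet q is false at 0 itself, since 0 does not access itself.
assert 0 in V['p'] and strict(0, 'p', 'q') and 0 not in V['q']

# Vacuous truth: if there are no accessible antecedent-worlds, any
# strict conditional with that antecedent holds.
assert strict(1, set(), 'q')      # antecedent true at no world
```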

The core idea of the basic strict analysis leads to the following validities.

  1. \(\vDash(\phi\land\neg\phi)\strictif\psi\)
  2. \(\vDash\psi\strictif(\phi\lor\neg\phi)\)
  3. \(\neg{{\meddiamond}}\phi\vDash\phi\strictif\psi\)
  4. \({{\medsquare}}\psi\vDash\phi\strictif\psi\)

In these validities, some see a plausible and attractive logic (C.I. Lewis 1912, 1914). Others see them as “so utterly devoid of rationality [as to be] a reductio ad absurdum of any view which involves them” (Nelson 1933: 271), earning them the title paradoxes of strict implication. Patterns 3 and 4 are more central to debates about counterfactuals, so they will be the focus here. Pattern 3 clearly follows from the core idea of the basic strict analysis: the premise guarantees that there are no accessible \(\phi\)-worlds, from which it vacuously follows that all accessible \(\phi\)-worlds are \(\psi\)-worlds. Much the same is true of pattern 4: if all the accessible worlds are \(\psi\)-worlds, then all the accessible \(\phi\)-worlds are \(\psi\)-worlds. Both 3 and 4 seem incorrect for English counterfactuals.

Contrary to pattern 3, the false (23b) does not intuitively follow from the true (23a). Similarly for pattern 4. Suppose one’s origin from a particular sperm and egg is an essential feature of oneself. Then (24a) is true.

And, yet, many would hesitate to infer (24b) on the basis of (24a). Each of these patterns follows from the core idea of the strict analysis. While these counterexamples may not constitute a conclusive objection, they do present a problem for the basic strict analysis. The second wave strict analyses surveyed in §2.2.1 are designed to solve it. They are also designed to address another suite of validities that are even more problematic.

The strict analysis is widely criticized for validating antecedent monotonic patterns. It is worth saying a bit more precisely, using Definition 9 and Figure 6 , why antecedent monotonicity holds for the strict conditional.

Figure 6: Strict Conditionals are Antecedent Monotonic. [An extended description of figure 6 is in the supplement.]

If \(\phi_1\strictif\psi\) is true, then the shaded blue region is empty, and the position of \(\phi_2\) reflects the fact that \({\llbracket}\phi_2{\rrbracket}^R_v\subseteq{\llbracket}\phi_1{\rrbracket}^R_v\)—recall that all worlds above the x -axis are \(\phi_1\)-worlds. Since the shaded blue region within \(\phi_2\) is also empty, all \(\phi_2\) worlds in \(R(w)\) are \(\psi\)-worlds. That is, \(\phi_2\strictif \psi\) is true.

Recall that Transitivity and Contraposition entail antecedent monotonicity, so it remains to show that both hold for the strict conditional. To see why Contraposition holds for the strict conditional, note again that if \(\phi\strictif\psi\) is true in \(w\), then all \(\phi\)-worlds in \(R(w)\) are \(\psi\)-worlds, as depicted in the left Venn diagram in Figure 7. Now suppose \(w'\) is a \(\neg\psi\)-world in \(R(w)\). As the diagram makes clear, \(w'\) has to be a \(\neg\phi\)-world, and so \(\neg\psi\strictif\neg\phi\) must be true in \(w\). Similarly, if \(\neg\psi\strictif\neg\phi\) is true in \(w\), then all \(\neg\psi\)-worlds in \(R(w)\) are \(\neg\phi\)-worlds, as depicted in the right Venn diagram in Figure 7. Now suppose \(w'\) is a \(\phi\)-world in \(R(w)\). As depicted, \(w'\) has to be a \(\psi\)-world, and so \(\phi\strictif\psi\) must be true in \(w\).

Figure 7: \(w\in {\llbracket\phi\strictif\psi\rrbracket}^R_v \iff w\in {\llbracket\neg\psi\strictif\neg\phi\rrbracket}^R_v\) (Contraposition). [An extended description of figure 7 is in the supplement.]

The validity of Transitivity for the strict conditional is also easy to see with a Venn diagram.

Figure 8: \(w\in{\llbracket\phi_2\strictif\phi_1\rrbracket}^R_v\cap{ \llbracket\phi_1\strictif\psi\rrbracket}^R_v\Leftrightarrow w\in{\llbracket\phi_2\strictif\psi\rrbracket}^R_v\) (Transitivity). [An extended description of figure 8 is in the supplement.]

The premises guarantee that all \(\phi_2\)-worlds in \(R(w)\) are \(\phi_1\)-worlds, and that all \(\phi_1\)-worlds in \(R(w)\) are \(\psi\)-worlds. That gives one the relationships depicted in Figure 8. To show that \(\phi_2\strictif\psi\) follows, suppose that \(w'\) is a \(\phi_2\)-world in \(R(w)\). As Figure 8 makes evident, \(w'\) must then be a \(\psi\)-world.
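Since these patterns hold in virtue of the analysis’s core set-theoretic structure, they can also be verified mechanically. The brute-force check below is a purely illustrative sketch: it enumerates every two-world model and confirms that Antecedent Strengthening, Transitivity, and Contraposition hold for the strict conditional.

```python
# Brute-force check, over all two-world models, that the strict conditional
# validates the antecedent monotonic patterns; illustrative only.
from itertools import product

W = [0, 1]
subsets = [frozenset(s) for s in ([], [0], [1], [0, 1])]

def strict(w, R, A, C):
    return (R[w] & A) <= C

for R0, R1 in product(subsets, repeat=2):          # every accessibility relation
    R = {0: R0, 1: R1}
    for A, B, C in product(subsets, repeat=3):     # every choice of propositions
        for w in W:
            if strict(w, R, A, C):                           # A strictif C ...
                assert strict(w, R, A & B, C)                # ... entails (A and B) strictif C
            if strict(w, R, B, A) and strict(w, R, A, C):    # Transitivity
                assert strict(w, R, B, C)
            notA, notC = frozenset(W) - A, frozenset(W) - C
            assert strict(w, R, A, C) == strict(w, R, notC, notA)  # Contraposition
print("All monotonic patterns hold for the strict conditional.")
```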

Antecedent monotonic patterns are an ineliminable part of a strict conditional logic. Examples of them often sound compelling. For example, the transitive inference (25) sounds perfectly reasonable, as does the antecedent strengthening inference (26) .

Similar examples for SDA are easy to find. However, counterexamples to each of the four patterns have been offered.

Counterexamples to Antecedent Strengthening were already discussed back in §1.4 . Against Transitivity, Stalnaker (1968: 48) points out that (27c) does not intuitively follow from (27a) and (27b) .

Contra Contraposition , D. Lewis (1973b: 35) presents (28) .

Suppose Boris wanted to go, but stayed away to avoid Olga. Then (28b) is false. Further suppose that Olga would have been even more excited to attend if Boris had. In that case (28a) is true. Against SDA, McKay & van Inwagen (1977: 354) offer:

(29b) does not intuitively follow from (29a) .

These counterexamples have been widely taken to be conclusive evidence against the strict analysis (e.g., D. Lewis 1973b; Stalnaker 1968), since these patterns follow from the core assumptions of that analysis. As a result, D. Lewis (1973b) and Stalnaker (1968) developed similarity analyses which build the non-monotonicity of antecedents into the semantics of counterfactuals—see §2.3 . However, there was a subsequent wave of strict analyses designed to systematically address these counterexamples. In fact, they do so by unifying two features of counterfactuals: the non-monotonic interpretation of their antecedents and their context-sensitivity.

Beginning with Daniels and Freeman (1980) and Warmbrōd (1981a,b) , there was a second wave of strict analyses developed explicitly to address the non-monotonic interpretation of counterfactual antecedents. Warmbrōd (1981a,b) , Lowe (1983, 1990) , and Lycan (2001) account for the counterexamples to antecedent monotonic patterns within a systematic theory of how counterfactuals are context-sensitive. More recently, Gillies (2007) has argued that a strict analysis along those lines is actually preferable to an account that builds the non-monotonicity of counterfactual antecedents into their semantics, i.e., similarity analyses. This section will outline the basic features of these second wave strict conditional analyses.

The key idea in Warmbrōd (1981a,b) is that the accessibility sphere in the basic strict analysis should be viewed as a parameter of the context. Roughly, the idea is that \(R(w)\) corresponds to background facts assumed by the participants of a discourse context. For example, if they are assuming propositions (modeled as sets of possible worlds) A, B, and C, then \(R(w)=A\cap B\cap C\). The other key idea is that trivial strict conditionals are not pragmatically useful in conversation. If a strict conditional \(\mathsf{A\strictif C}\) is asserted in a context with background facts \(R(w)\), and \(\mathsf{A}\) is inconsistent with \(R(w)\)—i.e., \({\llbracket}\mathsf{A}{\rrbracket}^R_v\cap R(w)={\emptyset}\)—then asserting \(\mathsf{A\strictif C}\) does not provide any information. If there are no \(\mathsf{A}\)-worlds in \(R(w)\), then, trivially, all \(\mathsf{A}\)-worlds in \(R(w)\) are \(\mathsf{C}\)-worlds. Warmbrōd (1981a,b) proposes that conversationalists adopt a pragmatic rule of charitable interpretation to avoid trivialization:

On this view, \(R(w)\) may very well change over the course of a discourse as a result of conversationalists adhering to (P) . This part of the view is central to explaining away counterexamples to antecedent monotonic validities.

Consider again the example from Goodman 1947 that appeared to be a counterexample to Antecedent Strengthening .

Now note that if (30a) is going to come out true, the proposition O that there is oxygen in the room must be true in all worlds in the initial accessibility sphere \(R_0(w)\). However, if (30b) is interpreted against \(R_0(w)\), the antecedent will be inconsistent with \(R_0(w)\) and so express a trivial, uninformative proposition. Warmbrōd (1981a,b) proposes that in interpreting (30b) we are forced by (P) to adopt a new, modified accessibility sphere \(R_1(w)\) where O is no longer assumed. But if this is right, (30a) and (30b) don’t constitute a counterexample to Antecedent Strengthening because they are interpreted against different accessibility spheres. It’s like saying that All current U.S. presidents are intelligent doesn’t entail All current U.S. presidents are unintelligent because an utterance of the first sentence before Donald Trump was sworn in was true, while an utterance of the second afterwards was false. There is an equivocation of context, or so Warmbrōd (1981a,b) contends.
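The mechanics of this explanation can be made vivid with a small sketch. Since the rule (P) itself is not reproduced above, the consistency-restoring procedure below—drop assumed background propositions until the antecedent is consistent with \(R(w)\)—is a simplifying reconstruction rather than Warmbrōd’s own formulation, and the toy propositions are invented.

```python
# A rough sketch of Warmbrōd-style context shifting. The "accommodate"
# procedure is a simplifying reconstruction of the charity rule (P):
# it drops assumed propositions (most recent first) until the antecedent
# is consistent with the accessibility sphere.

WORLDS = frozenset(range(8))                 # toy logical space

def sphere(assumptions):
    """R(w) as the intersection of the assumed background propositions."""
    R = WORLDS
    for p in assumptions:
        R = R & p
    return R

def accommodate(assumptions, antecedent):
    """Retract assumptions until the antecedent is consistent with R(w)."""
    kept = list(assumptions)
    while kept and not (sphere(kept) & antecedent):
        kept.pop()                           # a charitable listener drops an assumption
    return kept

O = frozenset({0, 1, 2, 3})                  # "there is oxygen in the room"
S = frozenset({0, 1, 4, 5})                  # another background fact, e.g., "the match is dry"
no_O = WORLDS - O                            # antecedent of (30b), inconsistent with O

ctx = accommodate([S, O], no_O)              # interpreting (30b) forces O out
assert O not in ctx and S in ctx
```

On this reconstruction, (30a) and (30b) are evaluated against different spheres, just as the text describes: the sphere for (30b) no longer assumes O.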

Warmbrōd (1981a,b) outlines parallel explanations of the counterexamples presented to SDA, Contraposition, and Transitivity. This significantly complicates the issue of whether antecedent monotonicity is the key issue in understanding the semantics of counterfactuals. It appears that the non-monotonic interpretation of counterfactual antecedents can either be captured pragmatically in the way that accessibility spheres change in context (Warmbrōd 1981a,b), or it can be captured semantically, as we will see from similarity analyses in §2.3 . There are significant limitations to Warmbrōd’s (1981a,b) analysis: it does not capture nested conditionals, and does not actually predict how \(R(w)\) evolves to satisfy (P). Fintel (2001) and Gillies (2007) offer accounts that remove these limitations, and pose a challenge for traditional similarity analyses.

Fintel (2001) and Gillies (2007) propose analyses where counterfactuals have strict truth-conditions, but they also have a dynamic meaning which effectively changes \(R(w)\) non-monotonically. They argue that such a theory can better explain particular phenomena. Chief among them is reverse Sobel sequences. Recall the sequence of counterfactuals (21) presented by D. Lewis (1973b, 1973c: 419), and attributed to Howard Sobel. Reversing these sequences is not felicitous:

Fintel (2001) and Gillies (2007) observe that similarity analyses render sequences like (31) semantically consistent. Their theories predict this infelicity by providing a theory of how counterfactuals in context can change \(R(w)\). Unlike Fintel (2001) , Gillies (2007) does not rely essentially on a similarity ordering over possible worlds to compute these changes to \(R(w)\), and so clearly counts as a second wave strict analysis. [ 28 ] The debate over whether counterfactuals are best given a strict or similarity analysis is very much ongoing. Moss (2012) , Starr (2014) , and K. Lewis (2018) have proposed three different ways of explaining reverse Sobel sequences within a similarity analysis. But Willer (2015, 2017, 2018) has argued on the basis of other data that a dynamic second wave strict analysis is preferable. This argument takes one into a logical comparison of strict and similarity analyses, which will be taken up in §2.4 after the similarity analysis has been presented in more detail.

Recall the rough idea of the similarity analysis sketched in §2.1 : worlds can be ordered by their similarity to the actual world, and counterfactuals say that the most similar—or least different—worlds where the antecedent is true are worlds where the consequent is also true. This idea is commonly attributed to David Lewis and Robert Stalnaker, but the actual history is a bit more nuanced. Although publication dates do not tell the full story, the approach was developed roughly contemporaneously by Stalnaker (1968) , Stalnaker and Thomason (1970) , D. Lewis (1973b) , Nute (1975b) , and Sprigge (1970) . [ 29 ] And, there is an even earlier statement of the view:

When we allow for the possibility of the antecedent’s being true in the case of a counterfactual, we are hypothetically substituting a different world for the actual one. It has to be supposed that this hypothetical world is as much like the actual one as possible so that we will have grounds for saying that the consequent would be realized in such a world. (Todd 1964: 107)

Recall the major difference between this proposal and the basic strict analysis : the similarity analysis uses a graded notion of similarity instead of an absolute notion of accessibility. It also allows most similar worlds to vary between counterfactuals with different antecedents. These differences invalidate antecedent monotonic inference patterns. This section will introduce similarity analyses in a bit more formal detail and describe the differences between analyses within this family.

The similarity analysis has come in many varieties and formulations, including the system of spheres approach informally described in §2.1 . That formulation is easiest for comparison to strict analyses. But there is a different formulation that is more intuitive and better facilitates comparison among different similarity analyses. This formulation appeals to a (set) selection function f , which takes a world w , a proposition p , and returns the set of p -worlds most similar to w : \(f(w,p)\). [ 30 ] \(\phi>\psi\) is then said to be true when the most f -similar \(\phi\)-worlds to w are \(\psi\)-worlds, i.e., every world in \(f(w,{\llbracket}\phi{\rrbracket}^f_v)\) is in \({\llbracket}\psi{\rrbracket}^f_v\). The basics of this approach can be summed up thus.

  • Most similar according to the selection function f
  • f takes a proposition p and a world w and returns the p -worlds most similar to w
  • \(\llbracket\phi > \psi\rrbracket^{f}_v=\{w\mid f(w,\llbracket\phi\rrbracket^{f}_v)\subseteq\llbracket\psi\rrbracket^{f}_v\}\)
  • Making “limit assumption”: \(\phi\)-worlds do not get indefinitely more and more similar to w

As noted, this formulation makes the limit assumption: \(\phi\)-worlds do not get indefinitely more and more similar to w . While D. Lewis (1973b) rejected this assumption, adopting it will simplify exposition. It is discussed at length in the supplement Formal Constraints on Similarity . The logic of counterfactuals generated by a similarity analysis will depend on the constraints imposed on f . Different theorists have defended different constraints. Table 1 lists them, where \(p,q\subseteq W\) and \(w\in W\):

Table 1: Candidate Constraints on Selection Functions

Modulo the limit assumption, Table 2 provides an overview of which analyses have adopted which constraints.

Table 2: Similarity Analyses, modulo Limit Assumption

Success simply enforces that \(f(w,p)\) is indeed a set of p -worlds. Recall that \(f(w,p)\) is supposed to be the set of most similar p -worlds to w . The other constraints correspond to certain logical validities, as detailed in the supplement Formal Constraints on Similarity . This means that Pollock (1976) endorses the weakest logic for counterfactuals and Stalnaker (1968) the strongest. It is worth seeing how, independently of constraints (b)–(d), this semantics invalidates an antecedent monotonicity pattern like Antecedent Strengthening .

Consider an instance of Antecedent Strengthening involving \(\mathsf{A > C}\) and \(\mathsf{(A\land B)>C}\), and where the space of worlds is that given in Table 3 .

Table 3: A space of worlds W , and truth-values at each world

Now evaluate \(\mathsf{A > C}\) and \(\mathsf{(A\land B)>C}\) in \(w_{5}\) using a selection function \(f_1\) with the following features:

\(f_1(w_{5},{\llbracket}\mathsf{A}{\rrbracket}^{f_1}_v)=\{w_{2}\}\)

\(f_1(w_{5},{\llbracket}\mathsf{A}\land\mathsf{B}{\rrbracket}^{f_1}_v)=\{w_{1}\}\)

Since \(\mathsf{C}\) is true in \(w_{2}\), \(\mathsf{A > C}\) is true in \(w_{5}\) according to \(f_1\). But, since \(\mathsf{C}\) is false in \(w_{1}\), \(\mathsf{(A\land B) > C}\) is false in \(w_{5}\) according to \(f_1\). No constraints are needed here other than success . While \(f_1\) satisfies uniqueness , the counterexample works just as well if, say, \(f_1(w_{5},{\llbracket}\mathsf{A}{\rrbracket}^{f_1}_v)=\{w_{2},w_0\}\). Accordingly, all similarity analyses allow for the non-monotonic interpretation of counterfactual antecedents.
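This counterexample can be verified mechanically. In the sketch below, the valuation is an assumed stand-in for Table 3, preserving only the features the argument relies on (\(\mathsf{C}\) true at \(w_2\) and false at \(w_1\)); the selection function \(f_1\) is the one given in the text.

```python
# Table 3 is not reproduced above, so this valuation is an assumed stand-in
# with the features the text relies on: C is true at w2, false at w1, and
# A (resp. A-and-B) is true at w2 (resp. w1). f1 is taken from the text.

V = {                                           # world: (A, B, C) truth-values (assumed)
    'w1': (True, True, False),
    'w2': (True, False, True),
    'w5': (False, False, False),
}
A       = {w for w, (a, b, c) in V.items() if a}
A_and_B = {w for w, (a, b, c) in V.items() if a and b}
C       = {w for w, (a, b, c) in V.items() if c}

f1 = {('w5', frozenset(A)): {'w2'},             # most similar A-worlds to w5
      ('w5', frozenset(A_and_B)): {'w1'}}       # most similar (A and B)-worlds to w5

def counterfactual(w, ante, cons):
    """phi > psi is true at w iff f(w, [[phi]]) is a subset of [[psi]]."""
    return f1[(w, frozenset(ante))] <= cons

assert counterfactual('w5', A, C)               # A > C is true at w5 ...
assert not counterfactual('w5', A_and_B, C)     # ... but (A and B) > C is false
```

Note that \(f_1\) satisfies success: each selected set is a subset of the corresponding antecedent proposition.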

While Stalnaker (1968) and D. Lewis (1973b) remain the most popular similarity analyses, there are substantial logical issues which separate similarity analyses. These issues, and the constraints underlying them, are detailed in the supplement Formal Constraints on Similarity . Table 4 summarizes which validities go with which constraints.

Table 4: Selection Constraints & Associated Validities

A few comments are in order here, though. Strong centering is sufficient but not necessary for Modus Ponens; weak centering would do: \(w\in f(w,p)\) if \(w\in p\). LT and LAS follow from SSE, and allow similarity theorists to say why some instances of Transitivity and Antecedent Strengthening are intuitively compelling.

The issue of whether a second wave strict analysis ( §2.2.1 ) or a similarity analysis provides a better logic of counterfactuals is very much an open and subtle issue. As sections 2.2.1 and 2.3 detailed, both analyses have their own way of capturing the non-monotonic interpretation of antecedents. Both analyses also have their own way of capturing instances of monotonic inferences that do sound good. Perhaps this issue is destined for a stalemate. [ 31 ] But before declaring it such, it is important to investigate two patterns that are potentially more decisive: Simplification of Disjunctive Antecedents , and a pattern not yet discussed called Import-Export .

Both SDA and Import-Export are valid on strict analyses and invalid on standard similarity analyses. Crucially, the counterexamples to them that have been offered by similarity theorists are significantly less compelling than those offered to patterns like Antecedent Strengthening . Import-Export relates counterfactuals like (33a) and (33b) .

It is hard to imagine one being true without the other. The basic strict analysis agrees: it renders them equivalent.

  • Import-Export \((\phi_1\land\phi_2)>\psi\,\leftmodels\vDash \phi_1>(\phi_2>\psi)\)

But it is not valid on a similarity analysis . [ 32 ] While Import-Export is generally regarded as a plausible principle, some have challenged it. Kaufmann (2005: 213) presents an example involving indicative conditionals which can be adapted to subjunctives. Consider a case where there is a wet match which will light if tossed in the campfire, but not if it is struck. It has not been lit. Consider now:

One might then deny (34a) . This match would not have lit if it had been struck, and if it had lit it would have to have been thrown into the campfire. (34b) , on the other hand, seems like a straightforward logical truth. However, it is worth noting that this intuition about (34a) is very fragile. The slight variation of (34a) in (35) is easy to hear as true.

This subtle issue may be moot, however. Starr (2014) shows that a dynamic semantic implementation of the similarity analysis can validate Import-Export , so it may not be important for settling between strict and similarity analyses.

As for the Simplification of Disjunctive Antecedents (SDA) , Fine (1975) , Nute (1975b) , Loewer (1976) , and Warmbrōd (1981a) each object to the similarity analysis predicting that this pattern is invalid. Counterexamples like (29) from McKay & van Inwagen (1977: 354) have a suspicious feature.

Starr (2014: 1049) and Warmbrōd (1981a: 284) observe that (29a) seems to be another way of saying that Spain would never have fought for the Allies. While Warmbrōd (1981a: 284) uses this to pragmatically explain-away this counterexample to his strict analysis, Starr (2014: 1049) makes a further critical point: it sounds inconsistent to say (29a) after asserting that Spain could have fought for the Allies.

Starr (2014: 1049) argues that this makes it inconsistent for a similarity theorist to regard this as a counterexample to SDA . On a similarity analysis of the could claim, it follows that there are no worlds in which Spain fought for the Allies most similar to the actual world: \(f(w_@,{\llbracket}\mathsf{Allies}{\rrbracket})={\emptyset}\). But if that’s the case, then (29b) is vacuously true on a similarity analysis, and so a similarity theorist cannot consistently claim that this is a case where the premise is true and conclusion false. It is, however, too soon for the strict theorist to declare victory. Nute (1980a) , Alonso-Ovalle (2009) , and Starr (2014: 1049) each develop similarity analyses where disjunction is given a non-Boolean interpretation to validate SDA without validating the other antecedent monotonic patterns. But even this is not the end of the SDA debate.

Nute (1980b: 33) considers a similar antecedent simplification pattern involving negated conjunctions:

  • Simplification of Negated Conjunctive Antecedents (SNCA) \(\neg(\phi_1\land \phi_2)>\psi\vDash (\neg\phi_1>\psi)\land(\neg\phi_2>\psi)\)

Nute (1980b: 33) presents (37) in favor of SNCA.

Note that \(\mathsf{\neg(N\land A)}\) and \(\mathsf{\neg N\lor\neg A}\) are Boolean equivalents. However, non-Boolean analyses like Nute (1980a) , Alonso-Ovalle (2009) , and Starr (2014: 1049) designed to capture SDA break this equivalence, and so fail to predict that SNCA is valid. Willer (2015, 2017) develops a dynamic strict analysis which validates both SDA and SNCA. Fine (2012a,b) advocates for a departure from possible worlds semantics altogether in order to capture both SDA and SNCA. However, these accounts also face counterexamples. Fine (2012a,b) and Willer (2015, 2017) render \((\neg\phi_1\lor\neg\phi_2)>\psi\) and \(\neg(\phi_1\land\phi_2)>\psi\) equivalent, while Champollion, Ciardelli, and Zhang (2016) present a powerful counterexample to this equivalence.

Champollion, Ciardelli, and Zhang (2016) consider a light which is on when switches A and B are both up, or both down. Currently, both switches are up, and the light is on. Consider (38a) and (38b) whose antecedents are Boolean equivalents:

While (38a) is intuitively true, (38b) is not. [ 33 ] This is not a counterexample to SNCA , since the premise of that pattern is false. But such a counterexample is not hard to think up. [ 34 ]

Suppose the baker’s apprentice completely failed at baking our cake. It was burnt to a crisp, and the thin, lumpy frosting came out puke green. The baker planned to redecorate it to make it at least look delicious, but did not have time. We may explain our extreme dissatisfaction by asserting (39a) . But the baker should not infer (39b) and assume that his redecoration plan would have worked.

Willer (2017: §4.2) suggests that such a counterexample trades on interpreting \(\mathsf{\neg(B\land U)>H}\) as \(\mathsf{(\neg B\land\neg U)>H}\), and provides an independent explanation of this on the basis of how negation and conjunction interact. If this is right, then an analysis which validates SDA and SNCA without rendering \(\neg(\phi_1\land\phi_2)>\psi\) and \((\neg\phi_1\lor\neg\phi_2)>\psi\) equivalent is what’s needed. Ciardelli, Zhang, and Champollion (forthcoming) develop just such an analysis. As Ciardelli, Zhang, and Champollion (forthcoming: §6.4) explain, SDA and SNCA turn out to be valid for very different reasons. Champollion, Ciardelli, and Zhang (2016) and Ciardelli, Zhang, and Champollion (forthcoming) also argue that the falsity of (38b) cannot be predicted on a similarity analysis. This example must be added to a long list of examples which have been presented not as counterexamples to the logic of the similarity analysis, but to what it predicts (or fails to predict) about the truth of particular counterfactuals in particular contexts. This will be the topic of §2.5 , where it will also be explained why the strict analysis faces similar challenges.

Where does this leave the logical debate between strict and similarity analyses of counterfactuals? Even Import-Export and SDA fail to clearly identify one analysis as superior. It is possible to capture SDA on either analysis. Existing similarity analyses that validate SDA , however, also invalidate SNCA (Alonso-Ovalle 2009; Starr 2014) . By contrast, existing strict analyses that validate SDA also validate SNCA (Willer 2015, 2017) . However, this is far from decisive. The validity of SNCA is still being investigated, and it is far from clear that it is impossible to have a similarity analysis that validates both SDA and SNCA, or a strict analysis that validates only SDA (perhaps using a non-Boolean semantics for disjunction). So even SNCA may fail to be the conclusive pattern needed to separate these analyses.

2.5 Truth-Conditions Revisited

In their own ways, Stalnaker (1968, 1984) and D. Lewis (1973b) are candid that the similarity analysis is not a complete analysis of counterfactuals. As should be clear from §2.3 , the formal constraints they place on similarity are quite minimal and only serve to settle matters of logic. There are, in general, very many possible selection functions—and corresponding conceptions of similarity—for any given counterfactual. To explain how a given counterfactual like (40) expresses a true proposition, a similarity analysis must specify which particular conception of similarity informs it.

Of course, the strict analysis is in the same position. It cannot predict the truth of (40) without specifying a particular accessibility relation. In turn, the same question arises: on what basis do ordinary speakers determine some worlds to be accessible and others not? This section will overview attempts to answer these questions, and the many counterexamples those attempts have invited. These counterexamples have been a central motivation for pursuing alternative semantic analyses, which will be covered in §3 . While this section follows the focus of the literature on the similarity analysis ( §2.5.1 ), §2.5.2 will briefly detail how parallel criticisms apply to strict analyses.

What determines which worlds are counted as most similar when evaluating a counterfactual? Stalnaker (1968) explicitly sets this issue aside, but D. Lewis (1973b: 92) makes a clear proposal:

  • Lewis’ (1973b: 92) Proposal Our familiar, intuitive concept of comparative overall similarity, just applied to possible worlds, is employed in assessing counterfactuals.

Just as counterfactuals are context-dependent and vague, so is our intuitive notion of overall similarity. In comparing cost of living, New York and San Francisco may count as similar, but not in comparing topography. And yet, Lewis’ (1973b: 92) Proposal has faced a barrage of counterexamples. Lewis and Stalnaker parted ways in their responses to these counterexamples, though both grant that Lewis’ (1973b: 92) Proposal was not viable. Stalnaker (1984: Ch.7) proposes the projection strategy : similarity is determined by the way we “project our epistemic policies onto the world”. D. Lewis (1979) proposes a new system of weights that amounts to a kind of curve-fitting: we must first look to which counterfactuals are intuitively true, and then find ways of weighting respects of similarity—however complex—that support the truth of those counterfactuals. Since Lewis’ (1973b: 92) Proposal and Lewis’ (1979) system of weights are more developed, and have received extensive critical attention, they will be the focus of this section. [ 35 ] It will begin with the objections to Lewis’ (1973b: 92) Proposal that motivated Lewis’ (1979) system of weights, and then turn to objections to that approach.

Fine (1975: 452) presents the future similarity objection to Lewis’ (1973b: 92) Proposal . (41) is plausibly a true statement about world history.

Suppose, optimistically, that there never will be a nuclear holocaust. Then, for every \(\mathsf{B\land H}\)-world, there will be a more similar \(\mathsf{B\land\neg H}\)-world, one where a small difference prevents the holocaust, such as a malfunction in the electrical detonation system. In short, a world where Nixon presses the button and a malfunction prevents a nuclear holocaust is more like our own than one where there is a nuclear holocaust that changes the face of the planet. But then Lewis’ (1973b: 92) Proposal incorrectly predicts that (41) is false.

Tichý (1976: 271) offers a similar counterexample. Given (42a) – (42c) , (42d) sounds false.

Lewis’ (1973b: 92) Proposal does not seem to predict the falsity of (42d) . After all, Jones is wearing his hat in the actual world, so isn’t a world where it’s not raining and he’s wearing his hat more similar to the actual one than one where it’s not raining and he isn’t wearing his hat?

Lewis (1979: 472) responds to these examples by proposing a ranked system of weights that gives what he calls the standard resolution of similarity , which may be further modulated in context:

  1. Avoid big, widespread, diverse violations of law. (“big miracles”)
  2. Maximize the time period over which the worlds match exactly in matters of fact.
  3. Avoid even small, localized, simple violations of law. (“little miracles”)
  4. It is of little or no importance to secure approximate similarity of particular fact, even in matters that concern us greatly.

While weight 2 gives high importance to keeping particular facts fixed up to the change required by the counterfactual, weight 4 makes clear that particular facts after that point need not be kept fixed. In the case of (42d) the fact that Jones is wearing his hat need not be kept fixed. It was a post-rain fact, so when one counterfactually supposes that it had not been raining, there is no reason to assume that Jones is still wearing his hat. Similarly with example (41). A world where Nixon pushes the button, a small miracle short-circuits the equipment, and the nuclear holocaust is prevented will count as less similar than one where there is no small miracle and a nuclear holocaust results. A small-miracle and no-holocaust world is similar to our own only in one insignificant respect (particular matters of fact) and dissimilar in one important respect (the small miracle).

It is clear, however, that Lewis’ (1979) System of Weights is insufficiently general. Particular matters of fact often are held fixed.

Example (43) crucially holds fixed the outcome of a highly contingent particular fact: the coin outcome. Cases of this kind are discussed extensively by Edgington (2004) . Example (44) shows that a chancy outcome is not an essential feature of these cases. Noting the existence of recalcitrant cases, Lewis (1979: 472) simply says he wishes he knew why they came out differently. Additional counterexamples to Lewis’ (1979) System of Weights have been proposed by Bowie (1979) , Kment (2006) , and Wasserman (2006) . [ 36 ] Kment (2006: 458) proposes a new similarity metric to handle these examples, one which is sensitive to the way particular facts are explained, and is integrated into a general account of metaphysical modality in Kment (2014) . Ippolito (2016) proposes a new theory of how context determines similarity for counterfactuals which aims to make the correct predictions about many of the above cases.

Another response to these counterexamples has been to develop alternative semantic analyses of counterfactuals such as premise semantics (Kratzer 1989, 2012; Veltman 2005) and causal models (Schulz 2007, 2011; Briggs 2012; Kaufmann 2013) . These accounts start from the observation that the counterexamples can be easily explained in a model where matters of fact depend on each other. In (42) , when we counterfactually retract the fact that it rained, we don’t keep the fact that the man was wearing his hat because that fact depended on it raining. Hence, (42d) is false. In (43) , when we counterfactually retract the fact that you didn’t bet on heads, we keep the fact that the coin came up heads because it is independent of the fact that you didn’t bet on heads. These accounts offer models of how laws, and law-like generalizations, make facts dependent on each other, and argue that once this is done, there is no work left for similarity to do in the semantics of counterfactuals. While these accounts are the focus of §3 , it is worth presenting one of the additional counterexamples to the similarity analysis that has emerged from this literature.

Recall (38) from §2.4 . Champollion, Ciardelli, and Zhang (2016) and Ciardelli, Zhang, and Champollion (forthcoming) argue on the basis of this example that any similarity analysis will make incorrect predictions about the truth-conditions of counterfactuals. In this example a light is on either when Switch A and B are both up, or they are both down. Otherwise the light is off. Suppose both switches are up and the light is on.

Intuitively, (38a) is true, as are \(\mathsf{\neg A >\neg L}\) and \(\mathsf{\neg B >\neg L}\), but (38b) is false. Champollion, Ciardelli, and Zhang (2016: 321) argue that a similarity analysis cannot predict \(\mathsf{\neg A >\neg L}\) and \(\mathsf{\neg B >\neg L}\) to be true, while (38b) is false. In order for \(\mathsf{\neg A >\neg L}\) to be true, the particular fact that Switch B is up must count towards similarity. Similarly, for \(\mathsf{\neg B >\neg L}\) to be true, the particular fact that Switch A is up must count towards similarity. But then it follows that (38b) is true on a similarity analysis: the most similar worlds where A and B are not both up have to either be worlds where Switch B is down but Switch A is still up, or Switch A is down and Switch B is still up. In those worlds, the light would be off, so the similarity analysis incorrectly predicts (38b) to be true. Champollion, Ciardelli, and Zhang (2016) instead pursue a semantics in terms of causal models where counterfactually making \(\neg \mathsf{(A\land B)}\) true and making \(\mathsf{\neg A\lor\neg B}\) true come apart.
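The structure of this argument can be reproduced in a few lines. The sketch below models worlds by switch positions and, as an illustrative assumption, measures similarity by the number of switches flipped relative to the actual world; on that metric the intuitively true counterfactuals come out true, but so does the intuitively false (38b), just as the argument predicts.

```python
# A toy check of the Champollion/Ciardelli/Zhang argument: worlds are switch
# configurations, the light is on iff the switches match, and similarity is
# measured by switch flips from the actual world (an illustrative assumption).
from itertools import product

worlds = list(product([True, False], repeat=2))     # (switch A up?, switch B up?)
actual = (True, True)

def light(w):
    return w[0] == w[1]                             # on iff both up or both down

def closest(prop):
    """Most similar prop-worlds: fewest switch flips from the actual world."""
    candidates = [w for w in worlds if prop(w)]
    flips = lambda w: sum(x != y for x, y in zip(w, actual))
    fewest = min(flips(w) for w in candidates)
    return [w for w in candidates if flips(w) == fewest]

# "If switch A were down, the light would be off": true on this metric.
assert all(not light(w) for w in closest(lambda w: not w[0]))
# Likewise for switch B.
assert all(not light(w) for w in closest(lambda w: not w[1]))
# But then the unintuitive (38b) -- "if the switches were not both up, the
# light would be off" -- also comes out true, as the text argues.
assert all(not light(w) for w in closest(lambda w: not (w[0] and w[1])))
```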

Do strict analyses avoid the troubles faced by similarity analyses when it comes to truth-conditions? This question is difficult to answer, and has not been explicitly discussed in the literature. Other than the theory of Warmbrōd (1981a,b) , strict theorists have not made proposals for the accessibility relation analogous to Lewis’ (1973b: 92) Proposal for similarity. And, Warmbrōd’s proposal about the pragmatics of the accessibility relation is this:

  • Warmbrōd’s (1981b: 280) Proposal In the normal case of interpreting a conditional with a nonabsurd antecedent p , the worlds accessible from w will be those that are as similar to w as the most similar p -worlds.

All subsequent second wave strict analyses have ended up in similar territory. The dynamic analyses developed by Fintel (2001) , Gillies (2007) , and Willer (2015, 2017, 2018) assign strict truth-conditions to counterfactuals, but have them induce changes in an evolving space of possible worlds. These changes must render the antecedent consistent with an evolving body of discourse. While Fintel (2001) and Willer (2018) explicitly appeal to a similarity ordering for this purpose, Gillies (2007) and Willer (2017) do not. Nevertheless, the formal structures used by Gillies (2007) and Willer (2017) for this purpose give rise to the same question: which facts stay and which facts go when rendering the counterfactual antecedent consistent? Accordingly, at present, it does not appear that the strict analysis avoids the kinds of concerns raised for the similarity analysis in §2.5.1 .

Recall Goodman’s Problem from §1.4 : the truth-conditions of counterfactuals intuitively depend on background facts and laws, but it is difficult to specify these facts and laws in a way that does not itself appeal to counterfactuals. Strict and similarity analyses make progress on the logic of conditionals without directly confronting this problem. But the discussion of § 2.5 makes salient a related problem. Lewis’ (1979) System of Weights amounts to reverse-engineering a similarity relation to fit the intuitive truth-conditions of counterfactuals. While Lewis’ (1979) approach avoids characterizing laws and facts in counterfactual terms, Bowie (1979: 496–497) argues that it does not explain why certain counterfactuals are true without appealing to counterfactuals. Suppose one asks why certain counterfactuals are true and the similarity theorist replies with Lewis’ (1979) recipe for similarity. If one then asks why those facts about similarity make counterfactuals true, the similarity theorist cannot reply that they are basic, self-evident truths about the similarity of worlds. Instead, they can only say that those respects of similarity count because they make the intuitively true counterfactuals come out true. Bowie’s (1979: 496–497) criticism is that this is at best uninformative, and at worst circular.

A related concern is voiced by Horwich (1987: 172) who asks “why we should have evolved such a baroque notion of counterfactual dependence”, namely that captured by Lewis’ (1979) System of Weights . The concern has two components: why would humans find it useful, and why would human psychology ground counterfactuals in this concept of similarity rather than our ready-at-hand intuitive concept of overall similarity? These questions gain force given the centrality of counterfactuals to human rationality and scientific explanation outlined in §1 . Psychological theories of counterfactual reasoning and representation have found tools other than similarity more fruitful ( §1.2 ). Similarly, work on scientific explanation has not assigned any central role to similarity ( §1.3 ), and as Hájek (2014: 250) puts it:

Science has no truck with a notion of similarity; nor does Lewis’ ( 1979 ) ordering of what matters to similarity have a basis in science.

Morreau (2010) has recently argued on formal grounds that similarity is poorly suited to the task assigned to it by the similarity analysis. The similarity analysis, especially as elaborated by D. Lewis (1979) , tries to weigh some similarities between worlds against their differences to arrive at a notion of overall comparative similarity between those worlds. Morreau (2010: 471) argues that:

[w]e cannot add up similarities or weigh them against differences. Nor can we combine them in any other way… No useful comparisons of overall similarity result. (Morreau 2010: §4)

Morreau (2010) articulates this argument formally via a reinterpretation of Arrow’s Theorem in social choice theory. Arrow’s Theorem shows that it is not possible to aggregate individuals’ preferences regarding some alternative outcomes into a coherent “collective preference” ordering over those outcomes, given minimal assumptions about their rationality and autonomy. As summarized in §6.3 of the entry on Arrow’s theorem, Morreau (2010) argues that the same applies to aggregating respects of similarity and difference: there is no way to add them up into a coherent notion of overall similarity.
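A toy example conveys the flavor of the formal point. If one aggregates respects of similarity by pairwise majority vote—an illustrative stand-in, not one of Morreau’s own constructions—three invented respects can rank candidate worlds so that overall similarity cycles, just as preferences do in Condorcet’s paradox.

```python
# A Condorcet-style illustration of Morreau's worry: aggregating respects of
# similarity by majority vote can yield a cyclic, hence useless, ordering of
# overall similarity. The rankings below are invented.

# Each respect ranks candidate worlds u, v, x from most to least similar.
respects = [('u', 'v', 'x'),
            ('v', 'x', 'u'),
            ('x', 'u', 'v')]

def majority_prefers(a, b):
    """a counts as overall more similar than b iff most respects rank it higher."""
    votes = sum(r.index(a) < r.index(b) for r in respects)
    return votes > len(respects) / 2

# Pairwise majority yields a cycle: u beats v, v beats x, x beats u.
assert majority_prefers('u', 'v')
assert majority_prefers('v', 'x')
assert majority_prefers('x', 'u')
```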

Strict and similarity analyses of counterfactuals showed that it was possible to address the semantic puzzles described in §1.4 with formally explicit logical models. This dispelled widespread skepticism of counterfactuals and established a major area of interdisciplinary research. Strict analyses have been revealed to provide a stronger, more classical, logic, but must be integrated with a pragmatic explanation of how counterfactual antecedents are interpreted non-monotonically. Similarity analyses provide a much weaker, more non-classical, logic, but capture the non-monotonic interpretation of counterfactual antecedents within their core semantic model. It is now a highly subtle and intensely debated question which analysis provides a better logic for counterfactuals, and which version of each kind of analysis is best. This intense scrutiny and development has also generated a wave of criticism focused on their treatment of truth-conditions, Goodman’s Problem , and integration with thinking about counterfactuals in psychology and the philosophy of science ( §2.5 , §2.6 ). None of these criticisms are absolutely conclusive, and these two analyses, particularly the similarity analysis, remain standard in philosophy and linguistics. However, the criticisms are serious enough to merit exploring alternative analyses. These alternative accounts take inspiration from a particular diagnosis of the counterexamples discussed in §2.5 : facts depend on each other, so counterfactually assuming p involves not just giving up not-p , but any facts which depended on not-p . The next section will examine analyses of this kind.

3. Semantic Theories of Counterfactual Dependence

Similarity and strict analyses nowhere refer to facts, or propositions, depending on each other. Indeed, D. Lewis (1979) was primarily concerned with explaining which true counterfactuals, given a similarity analysis, manifest a relation of counterfactual dependence. Other analyses have instead started with the idea that facts depend on each other, and then explain how these relations of dependence make counterfactuals true. As will become clear, none of these analyses endorse the naive idea that \(\mathsf{A > B}\) is true only when B counterfactually depends on A . The dependence can be more complex, indirect, or B could just be true and independent of A . Theories in this family differ crucially in how they model counterfactual dependence. In premise semantics ( §3.1 ) dependence is modeled in terms of how facts, which are modeled as parts of worlds, are distributed across a space of worlds that has been constrained by laws, or law-like generalizations. In probabilistic semantics ( §3.2 ), this dependence is modeled as some form of conditional probability. In Bayesian networks, structural equations, and causal models ( §3.3 ), it is modeled in terms of the Bayesian networks discussed at the beginning of §1.2.3 . Because theories of these three kinds are very much still in development and often involve even more sophisticated formal models than those covered in §2 , this section will have to be more cursory than §2 to ensure breadth and accessibility.

Veltman (1976) and Kratzer (1981b) approached counterfactuals from a perspective closer to Goodman (1947) : counterfactuals involve explicitly adjusting a body of premises, facts or propositions to be consistent with the counterfactual’s antecedent, and checking to see if the consequent follows from the revised premise set—in a sense of “follow” to be articulated carefully. Since facts or premises hang together, changing one requires changing others that depend on it. The function of counterfactuals is to allow us to probe these connections between facts. While D. Lewis (1981) proved that the Kratzer (1981b) analysis was a special case of similarity semantics, subsequent refinements of premise semantics in Kratzer (1989, 1990, 2002, 2012) and Veltman (2005) evidenced important differences. Kratzer (1989: 626) nicely captures the key difference:

[I]t is not that the similarity theory says anything false about [particular] examples… It just doesn’t say enough. It stays vague where our intuitions are relatively sharp. I think we should aim for a theory of counterfactuals that is able to make more concrete predictions with respect to particular examples.

From a logical point of view, premise semantics and similarity semantics do not diverge. They diverge in the concrete predictions made about the truth-conditions of counterfactuals in particular contexts without adding additional constraints to the theory like Lewis’ (1979) System of Weights .

How does premise semantics aim to improve on the predictions of similarity semantics? It re-divides the labor between context and the semantics of counterfactuals to more accurately capture the intuitive truth-conditions of counterfactuals, and intuitive characterizations of how context influences counterfactuals. In premise semantics, context provides facts and law-like relations among them, and the counterfactual semantics exploits this information. By contrast, the similarity analysis assumes that context somehow makes a similarity relation salient, and has to make further stipulations like Lewis’ (1979) System of Weights about how facts and laws enter into the truth-conditions of counterfactuals in particular contexts. This can be illustrated by considering how Tichý’s ( 1976 ) example (42) is analyzed in premise semantics. This illustration will use the Veltman (2005) analysis because it is simpler than Kratzer (1989, 2012) —that is not to say it is preferable. The added complexity in Kratzer (1989, 2012) provides more flexibility and a broader empirical range including quantification and modal expressions other than would -counterfactuals.

Recall Tichý’s ( 1976 ) example, with the intuitively false counterfactual (42d) :

Veltman (2005) models how the sentences leading up to the counterfactual (42d) determine the facts and laws relevant to its interpretation. The law-like generalization in (42a) is treated as a strict conditional which places a hard constraint on the space of worlds relevant to evaluating the counterfactual. [ 37 ] The particular facts introduced by (42c) provide a soft constraint on the worlds relevant to interpreting the counterfactual. Figure 9 illustrates this model of the context and its evolution, including a third atomic sentence \(\mathsf{H}\) for reasons that will become clear shortly.

Figure 9: Context for (42) , Facts in Bold, Laws Crossing out Worlds. The context is updated first with the law \(\medsquare(\mathsf{R\supset W})\) and then with the facts \(\mathsf{R\land W}\).

On this model a context provides a set of worlds compatible with the facts—in \(C_3\), \(\textit{Facts}_{C_3}={\{w_6,w_7\}}\)—and a set of worlds compatible with the laws—in \(C_3\), \(\textit{Universe}_{C_3}={\{w_0,w_1,w_2,w_3,w_6,w_7\}}\). This model of context is one essential component of the analysis, but so too is the way Veltman (2005) models worlds, situations, and dependencies between facts. These further components allow Veltman (2005) to offer a procedure for “retracting” the fact that \(\mathsf{R}\) holds from a world.

Veltman’s ( 2005 ) analysis of counterfactuals identifies possible worlds with atomic valuations (functions from atomic sentences to truth-values) like those depicted in Figure 9 . So \(w_6={\{{\langle \mathsf{R},1\rangle},{\langle \mathsf{W},1\rangle},{\langle \mathsf{H},0\rangle}\}}\). This makes it possible to offer a simple model of situations , which are parts of worlds: any subset of a world. [ 38 ] It is now easy to think about one fact (sentence having a truth-value) as determining another fact (sentence having a truth value). In context \(C_3\), \(\mathsf{R}\) being 1 determines that \(\mathsf{W}\) will be 1. Once you know that \(\mathsf{R}\) is assigned to 1, you know that \(\mathsf{W}\) is too. Veltman’s ( 2005 ) proposal is that speakers evaluate a counterfactual by retracting the fact that the antecedent is false from the worlds in the context, which gives you some situations, and then consider all those worlds that contain those situations, are compatible with the laws, and make the antecedent true. If the consequent is true in all of those worlds, then we can say that the counterfactual is true in (or supported by) the context. So, to evaluate \(\neg \mathsf{R>W}\), one first retracts the fact that \(\mathsf{R}\) is true, i.e., that \(\mathsf{R}\) is assigned to 1, then one finds all the worlds consistent with the laws that contain those situations and assign \(\mathsf{R}\) to 0. If all of those worlds are also \(\mathsf{W}\) worlds, then the counterfactual is true in (or supported by) the context. For Veltman (2005) , the characterization of this retraction process relies essentially on the idea of facts determining other facts.

According to Veltman (2005) , when you are “retracting” a fact from the facts in the context, you begin by considering each \(w\in \textit{Facts}_C\) and find the smallest situations in w which contain only undetermined facts—he calls such a situation a basis for w . This is a minimal situation which, given the laws constraining \(\textit{Universe}_C\), determines all the other facts about that world. For example, \(w_6\) has only one basis, namely \(s_0={\{{\langle \mathsf{R},1\rangle},{\langle \mathsf{H},0\rangle}\}}\), and \(w_7\) has only one basis, namely \(s_1={\{{\langle \mathsf{R},1\rangle},{\langle \mathsf{H},1\rangle}\}}\). Once you have the bases for a world, you can retract a fact by finding the smallest change to the basis that no longer forces that fact to be true. So retracting the fact that \(\mathsf{R}\) is true from \(s_0\) produces \(s'_0={\{{\langle \mathsf{H},0\rangle}\}}\), and retracting it from \(s_1\) produces \(s'_1={\{{\langle \mathsf{H},1\rangle}\}}\). The set consisting of these two situations is the premise set .

To evaluate \(\mathsf{\neg R>W}\), one finds the worlds from \(\textit{Universe}_{C_3}\) that contain some member of the premise set, \(s'_0\) or \(s'_1\): \({\{w_0,w_1,w_2,w_3\}}\)—these are the worlds consistent with the premise set and the laws. Are all of the \(\neg \mathsf{R}\)-worlds in \({\{w_0,w_1,w_2,w_3\}}\) also \(\mathsf{W}\)-worlds? No, \(w_2\) and \(w_3\) are not. Thus, \(\neg \mathsf{R>W}\) is not true in (or supported by) the context \(C_3\). This was the intuitively correct prediction about example (42). Of course, the similarity analysis supplemented with Lewis’ (1979) System of Weights also makes this prediction. But consider again example (43), which that analysis does not predict:

This example relies seamlessly on three pieces of background knowledge about how betting works:

If you don’t bet, you don’t win: \(\mathsf{\medsquare(\neg B\supset\neg W)}\)

If you bet and it comes up heads, you win: \(\mathsf{\medsquare((B\land H)\supset W)}\)

If you bet and it doesn’t come up heads, you don’t win: \(\mathsf{\medsquare((B\land\neg H)\supset\neg W)}\)

And it specifies facts: \(\mathsf{\neg B\land H}\). The resulting context is detailed in Figure 10 :

Figure 10: Context for (43)

Now, consider the counterfactual \(\mathsf{B>W}\). The first step is to retract the fact that \(\mathsf{B}\) is false from each world in \(\textit{Facts}_{C_{(43)}}\). That’s just \(w_2\). This world has two bases—minimal situations consisting of undetermined facts—\(s_0={\{{\langle \mathsf{B},0\rangle},{\langle \mathsf{H},1\rangle}\}}\) and \(s_1={\{{\langle \mathsf{H},1\rangle},{\langle \mathsf{W},0\rangle}\}}\). [ 39 ] The next step is to retract the fact that \(\mathsf{B}\) is false from both bases. For \(s_0\) this yields \(s'_0={\{{\langle \mathsf{H},1\rangle}\}}\), and for \(s_1\) this also yields \(s'_0\)—since the fact that you didn’t win, together with the fact that the coin came up heads, forces it to be false that you bet. The premise set is thus \({\{s'_0\}}\), and the worlds in \(\textit{Universe}_{C_{(43)}}\) that contain \(s'_0\) are \({\{w_2,w_7\}}\). Now, are all of the \(\mathsf{B}\)-worlds in this set also \(\mathsf{W}\)-worlds? Yes, \(w_7\) is the only \(\mathsf{B}\)-world, and it is also a \(\mathsf{W}\)-world. So Veltman (2005) correctly predicts that (43) is true in (supported by) its natural context.
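The evaluation procedure for (42) can be reproduced in a short program. Two simplifications are assumed and flagged in the comments: a basis is implemented as a minimal part of a world that singles it out within the law-abiding Universe, and retraction keeps the maximal parts of a basis that no longer force the retracted fact. Veltman’s (2005) official definitions are richer—the treatment of (43) relies on details in footnote [ 39 ] not reproduced here—so this sketch only reconstructs the (42) computation.

```python
# A runnable sketch of a Veltman-style evaluation of (42). Two simplifying
# assumptions: a basis is a minimal part of a world that singles it out
# within the law-abiding Universe, and retraction keeps the maximal parts of
# a basis that no longer force the retracted fact.
from itertools import product, combinations

ATOMS = ('R', 'W', 'H')

def world(bits):
    return frozenset(zip(ATOMS, bits))

ALL = [world(bits) for bits in product((1, 0), repeat=3)]
UNIVERSE = [w for w in ALL                       # law: R materially implies W
            if not (('R', 1) in w and ('W', 0) in w)]
FACTS = [w for w in UNIVERSE                     # facts: R and W
         if ('R', 1) in w and ('W', 1) in w]

def forces(situation, fact):
    """Given the laws, does every Universe-world containing situation contain fact?"""
    return all(fact in w for w in UNIVERSE if situation <= w)

def bases(w):
    """Minimal situations in w that determine every other fact of w."""
    found = []
    for n in range(len(w) + 1):
        for s in map(frozenset, combinations(w, n)):
            if sum(s <= u for u in UNIVERSE) == 1 and not any(b <= s for b in found):
                found.append(s)
    return found

def retract(basis, fact):
    """Maximal subsets of the basis that no longer force the retracted fact."""
    out = []
    for n in range(len(basis), -1, -1):
        for s in map(frozenset, combinations(basis, n)):
            if not forces(s, fact) and not any(s <= t for t in out):
                out.append(s)
    return out

# Evaluate "if it had not been raining, the man would be wearing his hat":
# retract the fact that R holds, then check W at the not-R worlds compatible
# with the laws and with some retracted situation.
premise_set = [s for w in FACTS for b in bases(w) for s in retract(b, ('R', 1))]
candidates = [w for w in UNIVERSE
              if ('R', 0) in w and any(s <= w for s in premise_set)]
assert not all(('W', 1) in w for w in candidates)   # (42d) is not supported
```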

It should now be clearer how premise semantics delivers on its promise to be more predictive than similarity semantics when it comes to counterfactuals in context, and how it affords a more natural characterization of how a context informs the interpretation of counterfactuals. This analysis was crucially based on the idea that some facts determine other facts, and that the process of retracting a fact is constrained by these relations. However, even premise semantics has encountered counterexamples.

Schulz (2007: 101) poses the following counterexample to Veltman (2005) .

Intuitively, (45d) is true in the context. Figure 11 details the context predicted for it by Veltman (2005) .

Figure 11: Context for (45d)

There are two bases for \(w_4\): \(s_0={\{{\langle \mathsf{A},1\rangle},{\langle \mathsf{L},0\rangle}\}}\)—the fact that Switch A is up and the fact that the light is off determine that Switch B is down—and \(s_1={\{{\langle \mathsf{A},1\rangle},{\langle \mathsf{B},0\rangle}\}}\)—the fact that Switch A is up and the fact that Switch B is down determine that the light is off. (No smaller situation would determine the facts of \(w_4\).) Retracting \(\mathsf{B}\)’s falsity from \(s_0\) leads to trouble. \(s_0\) forces \(\mathsf{B}\) to be false, but there are two ways of changing this. First, one can remove the fact that the light is off, yielding \(s'_0={\{{\langle \mathsf{A},1\rangle}\}}\). Second, one can eliminate the fact that Switch A is up, yielding \(s''_0={\{{\langle \mathsf{L},0\rangle}\}}\). Because of \(s''_0\), the premise set will contain \(w_2\), meaning it allows that in retracting the fact that Switch B is down one can give up the fact that Switch A is up. But then there is a \(\mathsf{B}\)-world where \(\mathsf{L}\) is false, and \(\mathsf{B>L}\) is incorrectly predicted to be false.

Intuitively, the analysis went wrong in allowing the removal of the fact that Switch A is up when retracting the fact that Switch B is down. Schulz (2007: §5.5) provides a more sophisticated version of this diagnosis: although the fact that Switch A is up and the fact that the light is off together determine that Switch B is down, only the fact that the light is off depends on the fact that Switch B is down. If one could articulate this intuitive concept of dependence, and instead only retract facts that depend on the fact you are retracting (in this case the fact that B is down), then the error could be avoided. It is unclear how to implement this kind of dependence in Veltman’s ( 2005 ) framework. Schulz (2007: §5.5) goes on to show that structural equations and causal models provide the necessary concept of dependence—for more on this approach see §3.3 below. After all, it seems plausible that the light being off causally depends on Switch B being down, but Switch A being up does not causally depend on Switch B being down. It remains to be seen whether the more powerful framework developed by Kratzer (1989, 2012) can predict (45) .

While premise semantics has been prominent among linguists, probabilistic theories have been very prominent among philosophers thinking about knowledge and scientific explanation. [ 40 ] Adams (1965, 1975) made a seminal proposal in this literature:

  • Adams’ Thesis The assertability of q if p is proportional to \(P(q\mid p)\), where P is a probability function representing the agent’s subjective credences—see Definition 1 .

However, Adams (1970) was also aware that indicative/subjunctive pairs like (3) / (4) differ in their assertability. To explain this, he proposed the prior probability analysis of counterfactuals (Adams 1976) :

  • Adams’ Prior Probability Analysis The assertability of \(\phi>\psi\) is proportional to \(P_0(\psi\mid\phi)\), where \(P_0\) is the agent’s credence prior to learning that \(\phi\) was false.

It would seem that this analysis accurately predicts our intuitions in (45) about \(\mathsf{B>L}\). Let \(P_0\) be an agent’s credence before learning that Switch B is down. (45a) requires that \(P_0(\mathsf{L}\mid \mathsf{A\land B})\) is (or is close to) 1, and (45b) requires that \(P_0(\mathsf{\neg L}\mid \mathsf{\neg A\lor \neg B})\) is (or is close to) 1. The agent also learns that Switch A is up, so \(P_0(\mathsf{A})\) is (or is close to) 1. All of this together seems to guarantee that \(P_0(\mathsf{L\mid B})\) is also very high. However, this is due to an inessential artifact of the example: the agent learned that Switch B was down after learning that Switch A is up. This detail does not matter to the intuition. As was seen with example (43), we often hold fixed facts that happen after the antecedent turns out false. Indeed, Adams’ Prior Probability Analysis makes the incorrect prediction that (43) is unassertable in its natural context.
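
A small numeric sketch can make the positive prediction about (45) vivid. The prior credences below (0.95 that Switch A is up, 0.5 that Switch B is up, certainty about the wiring) are invented for illustration; any prior satisfying the constraints just listed would do.

```python
# A numeric check of the reasoning above, with invented prior credences.
# The agent is certain of the wiring (the light is on iff both switches are
# up) and nearly certain Switch A is up; B is an even bet, independent of A.

from itertools import product

p_A, p_B = 0.95, 0.5                     # illustrative credences only
prior = {}
for a, b in product((0, 1), repeat=2):
    l = a and b                          # the wiring fixes L given A and B
    prior[(a, b, l)] = (p_A if a else 1 - p_A) * (p_B if b else 1 - p_B)

def P(event, given):
    """Conditional prior probability P_0(event | given) over (A, B, L) worlds."""
    den = sum(p for w, p in prior.items() if given(w))
    num = sum(p for w, p in prior.items() if event(w) and given(w))
    return num / den

# Adams' prior probability analysis: assertability of B > L goes by P_0(L | B).
print(P(lambda w: w[2] == 1, lambda w: w[1] == 1))   # ~0.95: high, as desired
```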

This problem for Adams’ Prior Probability Analysis is addressed by Edgington (2003; 2004: 21), who amends the analysis: \(P_0\) may also reflect any facts the agent learns after they learn that the antecedent is false, provided that those facts are causally independent of the antecedent. This parallels the idea pursued by Schulz (2007: Ch.5) to integrate causal dependence into the analysis of counterfactuals. The idea was also pursued in a probabilistic framework by Kvart (1986, 1992), who, however, does not propose a prior probability analysis and does not regard the probabilities as subjective credences: they are instead objective probabilities (propensity or objective chance). Skyrms (1980) also proposes a propensity account, but pursues a prior propensity account analogous to the subjective one proposed by Adams (1976).

Objective probability analyses have been popular among philosophers trying to capture the way that counterfactuals feature in physical explanations, and why they are so useful to agents like us in worlds like ours. Loewer (2007) is a good example of such an account: it grounds the truth of certain counterfactuals about our decisions, like (46), in statistical mechanical probabilities.

Loewer (2007) proposes that (46) is true just in case the statistical mechanical probability of the consequent, conditional on \(M(t)\) together with the antecedent, is close to 1, where \(P_{\textit{SM}}\) is the statistical mechanical probability distribution and \(M(t)\) is a description of the macro-state of the universe at t.

Loewer (2007) acknowledges that this analysis is limited to counterfactuals like (46). He argues that it can nonetheless address the philosophical objections to the similarity analysis discussed in §2.6, namely why counterfactuals are useful in scientific explanations and to agents like us in a world like our own.

Conditional probability analyses do not proceed by assigning truth-conditions to (all) counterfactuals. They instead associate them with certain conditional probabilities. This makes it difficult to integrate the theory into a comprehensive compositional semantics and logic for a natural language. Kaufmann (2005, 2008) makes important advances here, but it remains an open issue for conditional probability analyses. Leitgeb (2012a,b) thoroughly develops a new conditional probability analysis which regards \(\phi>\psi\) as true when the relevant conditional probability is sufficiently high. But conditional probability analyses have other limitations. Without further development, these analyses are limited in their ability to explain how humans judge particular counterfactuals to be true. There is a large literature in psychology, beginning with Kahneman, Slovic, and Tversky 1982, showing that human reasoning diverges in predictable ways from precise probabilistic reasoning. Even if these performance differences didn’t turn up in counterfactuals and conditional probabilities, there is an implementation issue. As discussed in §1.2.3, directly implementing probabilistic knowledge makes unreasonable demands on memory. Bayesian Networks are one proposed solution to this implementation issue. They are also used in the analysis of causal dependence (§1.3), which conditional probability analyses must appeal to anyway. Since Bayesian Networks can also be used to directly formulate a semantics of counterfactuals, they provide a worthwhile alternative to conditional probability analyses despite proceeding from similar assumptions.

Recall from §1.2.3 the basic idea of a Bayesian Network: rather than storing probability values for all possible combinations of some set of variables, a Bayesian Network represents only the conditional probabilities of variables whose values depend on each other. This can be illustrated for (45) .

Sentences (45a)–(45c) can be encoded by the Bayesian Network and structural equations in Figure 12.

Figure 12: Bayesian Network and Structural Equations for (45)
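
As a toy rendering of this factorization in Python: the full joint over the three binary variables would need seven independent numbers, while the network stores just two root probabilities and one (here degenerate) conditional table. The numerical values for A and B below are invented for illustration.

```python
# A toy version of the factored representation for (45). The numbers for the
# root variables A and B are assumptions for illustration; L's table encodes
# the wiring, so its entries are all 0s and 1s.

P_A = 0.9                              # assumed P(A = 1): Switch A is up
P_B = 0.4                              # assumed P(B = 1): Switch B is up
P_L = {(a, b): float(a and b)          # P(L = 1 | A = a, B = b)
       for a in (0, 1) for b in (0, 1)}

def joint(a, b, l):
    """Recover any entry of the full joint distribution from the factors."""
    pa = P_A if a else 1 - P_A
    pb = P_B if b else 1 - P_B
    pl = P_L[(a, b)] if l else 1 - P_L[(a, b)]
    return pa * pb * pl

print(joint(1, 1, 1))                  # 0.36: both switches up, light on
```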

Recall that \(L\dequal A\land B\) means that the value of L equals the value of \(A\land B\), but also asymmetrically depends on it: the value of \(A\land B\) determines the value of L, and not vice versa. How, given the network in Figure 12, does one evaluate the counterfactual \(\mathsf{B>L}\)? Several different answers have been given to this question.

Pearl (1995, 2000, 2009, 2013: Ch.7) proposes:

  • Interventionism Evaluate \(\mathsf{B > L}\) relative to a Bayesian Network by removing any incoming arrows to B, setting its value to 1, and projecting this change forward through the remaining network. If L is 1 in the resulting network, \(\mathsf{B > L}\) is true; otherwise it’s false.
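
Stated as a hedged Python sketch, assuming just the one structural equation \(L\dequal A\land B\) and the actual values of (45); the function names are illustrative and this is not Pearl’s own code.

```python
# A minimal sketch of Pearl-style intervention on the network in Figure 12.

def solve(exogenous, equations, order):
    """Evaluate the structural equations in dependency order."""
    vals = dict(exogenous)
    for var in order:
        if var in equations:
            vals[var] = equations[var](vals)
    return vals

def intervene(exogenous, equations, order, var, val):
    """do(var = val): sever var's incoming equation, fix it, re-solve."""
    eqs = {v: f for v, f in equations.items() if v != var}
    exo = {**{v: x for v, x in exogenous.items() if v != var}, var: val}
    return solve(exo, eqs, order)

actual = {"A": 1, "B": 0}                       # the facts of (45)
equations = {"L": lambda v: v["A"] and v["B"]}  # L := A and B
order = ["A", "B", "L"]

counterfact = intervene(actual, equations, order, "B", 1)
print(counterfact["L"])   # 1: so B > L comes out true, matching intuition
```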

On this approach, one simply deletes the assignment \(B=0\), replaces it with \(B=1\), and solves for L using the equation \(L\dequal A\land B\). Since the deletion of \(B=0\) does not affect the assignment \(A=1\), it follows that \(L=1\) and that the counterfactual is true. This simple recipe yields the right result. Pearl nicely sums up the difference between this kind of analysis and a similarity analysis:

In contrast with Lewis’s theory, counterfactuals are not based on an abstract notion of similarity among hypothetical worlds; instead, they rest directly on the mechanisms (or “laws,” to be fancy) that produce those worlds and on the invariant properties of those mechanisms. Lewis’s elusive “miracles” are replaced by principled [interventions] which represent the minimal change (to a model) necessary for establishing the antecedent… Thus, similarities and priorities—if they are ever needed—may be read into the [interventions] as an afterthought… but they are not basic to the analysis. (Pearl 2009: 239–240)

As interventionism is stated above, it does not apply to conditionals with logically complex antecedents or consequents. This limitation is addressed by Briggs (2012), who also axiomatizes the resultant logic and compares it to D. Lewis (1973b) and Stalnaker (1968), significantly extending the analysis and results in Pearl (2009: Ch.7). Integrations of causal models with premise semantics (Schulz 2007, 2011; Kaufmann 2013; Santorio 2014; Champollion, Ciardelli, & Zhang 2016; Ciardelli, Zhang, & Champollion forthcoming) provide another way of incorporating an interventionist analysis into a fully compositional semantics. However, interventionism does face other limitations.

Hiddleston (2005) presents the following example.

(48c) is intuitively true in this context. The network for (48) is given in Figure 13.

Figure 13: Bayesian Network and Structural Equations for (48)

Hiddleston (2005) observes that interventionism does not predict \(\mathsf{F>B}\) to be true. It tells one to delete the arrow going into F, set its value to 1, and project the consequences of doing so. However, none of the other values depend on F, so they keep their actual values: \(L=0\) and \(B=0\). Accordingly, \(\mathsf{F>B}\) comes out false, contrary to intuition. Further, because the intervention on F has destroyed its connection to B, it’s not even possible to tweak interventionism to allow values to flow backwards (to the left) through the network.
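
This failure can be reproduced in the same style as the earlier sketch. Since example (48) is not reproduced above, the sketch below assumes the common-cause shape the prose suggests: L is a common cause of F and B, and all three are actually 0. The variable roles are reconstructed from the surrounding discussion, not from Hiddleston’s original example.

```python
# A standalone sketch of why intervention misses Hiddleston's case, assuming
# a common-cause structure: L feeds both F and B.

equations = {"F": lambda v: v["L"],   # F's value tracks L (the arrow into F)
             "B": lambda v: v["L"]}   # so does B: L is their common cause
actual = {"L": 0}

def do(var, val):
    """Intervene: drop var's equation, fix its value, recompute the rest."""
    vals = {**actual, var: val}
    for v, f in equations.items():
        if v != var:
            vals[v] = f(vals)
    return vals

print(do("F", 1)["B"])   # 0: intervening on F leaves B untouched, so F > B
                         # is wrongly predicted false on this account
```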

Hiddleston’s (2005) counterexample also highlights the possibility of another kind of counterexample featuring embedded conditionals. Consider again the network in Figure 12. The following counterfactual, (49), seems true (Starr 2012: 13). And, considering a simple match, Fisher (2017b: §1) observes that (50b) is intuitively false.

In both cases, interventionism is destined to make the wrong prediction. With (49), the intervention in the first antecedent removes the connection between Switch A and the light, so when the antecedent of the consequent is made true by intervention, it does not result in L’s value becoming 0. And so the whole counterfactual comes out false. Similarly, with (50b), when the first antecedent is made true by intervention, it stays true even after the second antecedent is evaluated; hence the whole conditional is predicted to be true. Fisher (2017a) further observes that interventionism has no way of treating counterlegal counterfactuals like “if Switch A had alone controlled the light, the light would be on”.

These counterexamples to interventionism have stimulated alternative accounts like Hiddleston’s (2005) minimal network analysis and further developments of that analysis (Rips 2010; Rips & Edwards 2013; Fisher 2017b). Instead of modifying an existing network to make the antecedent true, this analysis considers alternate networks where only the parent nodes which directly influence the antecedent are changed to make it come true (a rough sketch follows at the end of this paragraph). However, Pearl’s (2009) interventionist analysis has also been incorporated into the extended structural models analysis (Lucas & Kemp 2015). This analysis aims to capture interventions as a special case of a more general proposal about how antecedents are made true. One important aspect of this proposal is that interventions often involve inserting a hidden node that amounts to an unknown cause of the antecedent. The analysis of Snider and Bjorndahl (2015) pursues a third idea: counterfactuals are not interpreted by manipulating a background network, but instead serve to constrain the class of possible networks compatible with the information shared in a conversation, as in Stalnaker’s (1978) theory of assertion. These networks can relate antecedent and consequent as cause to effect, as in (45d), but can also give them a common cause, as in (48c). As should be clear, this is a rapidly developing area of research in which it is not possible to identify one analysis as standard or representative. It does bear emphasizing that this literature is driven not only by precise formal models, but also by experimental data that is brought to bear on the predictions of these analyses.
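
For contrast with the interventionist sketch above, here is a rough sketch of the minimal-network idea applied to the same common-cause case; it simplifies the published analyses (Hiddleston 2005; Fisher 2017b) considerably. Instead of severing F’s equation, it searches for lawful settings of the root variables, as close to actuality as possible, that make the antecedent true.

```python
# A rough, simplified sketch of the minimal-network idea: realize the
# antecedent by minimally changing the antecedent's causal ancestors, keeping
# all the structural equations intact.

equations = {"F": lambda v: v["L"], "B": lambda v: v["L"]}
actual_roots = {"L": 0}

def outcomes(antecedent_var, antecedent_val):
    """Lawful root settings, closest to actual first, realizing the antecedent."""
    settings = []
    for l in (0, 1):
        vals = {"L": l}
        for v, f in equations.items():
            vals[v] = f(vals)
        if vals[antecedent_var] == antecedent_val:
            changes = sum(vals[r] != x for r, x in actual_roots.items())
            settings.append((changes, vals))
    return [v for _, v in sorted(settings, key=lambda s: s[0])]

print(outcomes("F", 1)[0]["B"])   # 1: F comes true via L, and B comes with it,
                                  # so F > B is predicted true, as intuited
```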

A few final philosophical remarks are in order about the kinds of analyses discussed here. If one follows Woodward (2002) and Hitchcock (2001) in their interpretation of these networks, a structural equation should be viewed as a primitive counterfactual. It follows that this is a non-reductive analysis of counterfactual dependence: it only explains how the truth of arbitrarily complex counterfactual sentences is grounded in basic relations of counterfactual dependence. However, note in the earlier quotation from Pearl (2009: 239–240) that he interprets structural equations as basic mechanisms or laws, so his account arguably counts as an analysis of counterfactuals in terms of laws. These amount to two very different philosophical positions that interact with the philosophical debates surveyed in §1.3.

It is also worth noting that while many working in this framework apply these networks to causal relations, there is no reason to assume that the analysis would not apply to other kinds of dependence relations. For example, constitutional dependence is at the heart of counterfactuals like:

From a Bayesian Network approach to mental representation (§1.2.3), this makes perfect sense: the networks encode probabilistic dependence which can come from causal or constitutional facts.

Finally, it is worth highlighting that the philosophical objections directed at the similarity analysis in §2.6 are addressed, at least to some degree, by structural equation analyses. Because the central constructs of this analysis—structural equations and Bayesian Networks—are also employed in models of mental representation, causation, and scientific explanation, it grounds counterfactuals in a construct already taken to explain how creatures like us cope with a world like the one we live in.

Premise semantics (§3.1), conditional probability analyses (§3.2) and structural equation analyses (§3.3) all aim to analyze counterfactuals by focusing on certain relations between facts, rather than similarities between worlds. These accounts make clearer and more accurate predictions about particular counterfactuals in context than similarity analyses. But, ultimately, both premise semantics and conditional probability analyses had to incorporate causal dependence into their theories. Structural equation analyses do this from the start, and improve further on the predictions of premise semantics and conditional probability analyses. Another strength of this approach is that it integrates elegantly into the broader applications of counterfactuals in theories of rationality, mental representation, causation, and scientific explanation surveyed in §1.1. There is still rapid development of structural equation analyses, though, so it is too early to say where the analysis will stabilize, or how it will fare under thorough critical examination.

Philosophers, linguists, and psychologists remain fiercely divided on how best to understand counterfactuals. Rightly so. They are at the center of questions of deep human interest (§1). The renaissance on this topic in the 1970s and 1980s focused on addressing certain semantic puzzles and capturing the logic of counterfactuals (§2). From this seminal literature, similarity analyses (D. Lewis 1973b; Stalnaker 1968) have enjoyed the most widespread popularity in philosophy (§2.3). But the logical debate between similarity and strict analyses is still raging, and strict analyses provide a viable logical alternative (§2.4). Criticisms of these logical analyses have focused recent debates on our intuitions about particular utterances of counterfactuals in particular contexts. Structural equation analyses (§3.3) have emerged as a particularly prominent alternative to similarity and strict analyses, claiming to improve on both in significant respects. These analyses are now being actively developed by philosophers, linguists, psychologists, and computer scientists.

  • Adams, Ernest W., 1965, “The Logic of Conditionals”, Inquiry , 8: 166–197. doi:10.1080/00201746508601430
  • –––, 1970, “Subjunctive and Indicative Conditionals”, Foundations of Language , 6(1): 89–94.
  • –––, 1975, The Logic of Conditionals , Dordrecht: D. Reidel.
  • –––, 1976, “Prior Probabilities and Counterfactual Conditionals”, in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science , William L. Harper and Clifford Alan Hooker (eds.) (The University of Western Ontario Series in Philosophy of Science), Springer Netherlands, 6a:1–21. doi:10.1007/978-94-010-1853-1_1
  • Alonso-Ovalle, Luis, 2009, “Counterfactuals, Correlatives, and Disjunction”, Linguistics and Philosophy , 32(2): 207–244.
  • Alquist, Jessica L., Sarah E. Ainsworth, Roy F. Baumeister, Michael Daly, and Tyler F. Stillman, 2015, “The Making of Might-Have-Beens: Effects of Free Will Belief on Counterfactual Thinking”, Personality and Social Psychology Bulletin , 41(2): 268–283. doi:10.1177/0146167214563673
  • Anderson, Alan Ross, 1951, “A Note on Subjunctive and Counterfactual Conditionals”, Analysis , 12(2): 35–38. doi:10.2307/3327037
  • Arregui, Ana, 2007, “When Aspect Matters: The Case of Would-Conditionals”, Natural Language Semantics , 15(3): 221–264. doi:10.1007/s11050-007-9019-6
  • –––, 2009, “On Similarity in Counterfactuals”, Linguistics and Philosophy , 32(3): 245–278. doi:10.1007/s10988-009-9060-7
  • Barker, Stephen J., 1998, “Predetermination and Tense Probabilism”, Analysis , 58(4): 290–296. doi:10.1093/analys/58.4.290
  • Bennett, Jonathan, 1974, “Counterfactuals and Possible Worlds”, Canadian Journal of Philosophy , 4(2): 381–402. doi:10.1080/00455091.1974.10716947
  • –––, 2003, A Philosophical Guide to Conditionals , Oxford: Oxford University Press.
  • Bennett, Karen, 2017, Making Things Up , New York: Oxford University Press.
  • Bittner, Maria, 2011, “Time and Modality without Tenses or Modals”, in Tense Across Languages , Renate Musan and Monika Rathers (eds.), Tübingen: Niemeyer, 147–188. [ Bittner 2011 available online ]
  • Bobzien, Susanne, 2011, “Dialectical School”, in The Stanford Encyclopedia of Philosophy , Edward N. Zalta (ed.), Fall 2011, URL = < http://plato.stanford.edu/archives/fall2011/entries/dialectical-school/ >
  • Bowie, G. Lee, 1979, “The Similarity Approach to Counterfactuals: Some Problems”, Noûs , 13(4): 477–498. doi:10.2307/2215340
  • Brée, D.S., 1982, “Counterfactuals and Causality”, Journal of Semantics , 1(2): 147–185. doi:10.1093/jos/1.2.147
  • Briggs, R.A., 2012, “Interventionist Counterfactuals”, Philosophical Studies , 160(1): 139–166. doi:10.1007/s11098-012-9908-5
  • Byrne, Ruth M. J., 2005, The Rational Imagination: How People Create Alternatives to Reality , Cambridge, MA: MIT Press.
  • –––, 2016, “Counterfactual Thought”, Annual Review of Psychology , 67(1): 135–157. doi:10.1146/annurev-psych-122414-033249
  • Carnap, Rudolf, 1948, Introduction to Semantics , Cambridge, MA: Harvard University Press.
  • –––, 1956, Meaning and Necessity , second edition, Chicago: Chicago University Press.
  • Champollion, Lucas, Ivano Ciardelli, and Linmin Zhang, 2016, “Breaking de Morgan’s Law in Counterfactual Antecedents”, in Proceedings from Semantics and Linguistic Theory (SALT) 26 , Mary Moroney, Carol-Rose Little, Jacob Collard, and Dan Burgdorf (eds.), Ithaca, NY: CLC Publications, 304–324. doi:10.3765/salt.v26i0.3800
  • Chater, Nick, Mike Oaksford, Ulrike Hahn, and Evan Heit, 2010, “Bayesian Models of Cognition”, Wiley Interdisciplinary Reviews: Cognitive Science , 1(6): 811–823. doi:10.1002/wcs.79
  • Chisholm, Roderick M., 1955, “Law Statements and Counterfactual Inference”, Analysis , 15(5): 97–105. doi:10.1093/analys/15.5.97
  • Ciardelli, Ivano, Linmin Zhang, and Lucas Champollion, forthcoming, “Two Switches in the Theory of Counterfactuals”, Linguistics and Philosophy , first online: 15 June 2018. doi:10.1007/s10988-018-9232-4
  • Cohen, Jonathan and Aaron Meskin, 2006, “An Objective Counterfactual Theory of Information”, Australasian Journal of Philosophy , 84(3): 333–352. doi:10.1080/00048400600895821
  • Copeland, B. Jack, 2002, “The Genesis of Possible Worlds Semantics”, Journal of Philosophical Logic , 31(2): 99–137. doi:10.1023/A:1015273407895
  • Costello, Tom and John McCarthy, 1999, “Useful Counterfactuals”, Linköping Electronic Articles in Computer and Information Science , 4(12): 1–24.
  • Cresswell, Max J. and G.E. Hughes, 1996, A New Introduction to Modal Logic , London: Routledge.
  • Daniels, Charles B. and James B. Freeman, 1980, “An Analysis of the Subjunctive Conditional”, Notre Dame Journal of Formal Logic , 21(4): 639–655. doi:10.1305/ndjfl/1093883247
  • Declerck, Renaat and Susan Reed, 2001, Conditionals: A Comprehensive Empirical Analysis , (Topics in English Linguistics, 37), New York: De Gruyter Mouton.
  • Dretske, Fred I., 1981, Knowledge and the Flow of Information , Cambridge, MA: The MIT Press.
  • –––, 1988, Explaining Behavior: Reasons in a World of Causes , Cambridge, MA: MIT Press.
  • –––, 2002, “A Recipe for Thought”, in Philosophy of Mind: Contemporary and Classical Readings , David J. Chalmers (ed.), New York: Oxford University Press, 491–499.
  • –––, 2011, “Information-Theoretic Semantics”, in The Oxford Handbook of Philosophy of Mind , Brian McLaughlin, Angsar Beckermann, and Sven Walter (eds.), New York: Oxford University Press, 381–393.
  • Dudman, Victor Howard, 1984a, “Conditional Interpretations of ‘If’ Sentences”, Australian Journal of Linguistics , 4(2): 143–204. doi:10.1080/07268608408599325
  • –––, 1984b, “Parsing ‘If’-Sentences”, Analysis , 44(4): 145–153. doi:10.1093/analys/44.4.145
  • –––, 1988, “Indicative and Subjunctive”, Analysis , 48(3): 113–122. doi:10.1093/analys/48.3.113a
  • Edgington, Dorothy, 2003, “What If? Questions About Conditionals”, Mind & Language , 18(4): 380–401. doi:10.1111/1468-0017.00233
  • –––, 2004, “Counterfactuals and the Benefit of Hindsight”, in Cause and Chance: Causation in an Indeterministic World , Phil Dowe & Paul Noordhof (ed.), New York: Routledge, 12–27.
  • Fine, Kit, 1975, “Review of Lewis’ Counterfactuals”, Mind , 84: 451–458. doi:10.1093/mind/LXXXIV.1.451
  • –––, 2012a, “Counterfactuals Without Possible Worlds”, Journal of Philosophy , 109(3): 221–246. doi:10.5840/jphil201210938
  • –––, 2012b, “A Difficulty for the Possible Worlds Analysis of Counterfactuals”, Synthese , 189(1): 29–57. doi:10.1007/s11229-012-0094-y
  • Fintel, Kai von, 1999, “The Presupposition of Subjunctive Conditionals”, in The Interpretive Tract , Uli Sauerland and Orin Percus (eds.), Cambridge, MA: MITWPL, MIT Working Papers in Linguistics 25: 29–44. [ Fintel 1999 available online ]
  • –––, 2001, “Counterfactuals in a Dynamic Context”, in Ken Hale, A Life in Language , Michael Kenstowicz (ed.), Cambridge, MA: The MIT Press, 123–152. [ Fintel 2001 available online ]
  • –––, 2012, “Subjunctive Conditionals”, in The Routledge Companion to Philosophy of Language , Gillian Russell and Delia Graff Fara (eds.), New York: Routledge, 466–477. [ Fintel 2012 available online ]
  • Fintel, Kai von and Sabine Iatridou, 2002, “If and When If -Clauses Can Restrict Quantifiers”, Paper for the Workshop in Philosophy and Linguistics, University of Michigan, November 8–10, 2002. [ Fintel & Iatridou 2002 available online ]
  • Fisher, Tyrus, 2017a, “Causal Counterfactuals Are Not Interventionist Counterfactuals”, Synthese , 194(12): 4935–4957. doi:10.1007/s11229-016-1183-0
  • –––, 2017b, “Counterlegal Dependence and Causation’s Arrows: Causal Models for Backtrackers and Counterlegals”, Synthese , 194(12): 4983–5003. doi:10.1007/s11229-016-1189-7
  • Fodor, Jerry A., 1987, Psychosemantics: The Problem of Meaning in the Philosophy of Mind , Cambridge, MA: The MIT Press.
  • –––, (ed.), 1990, A Theory of Content and Other Essays , Cambridge, MA: The MIT Press.
  • Fraassen, Bas C. Van, 1966, “Singular Terms, Truth-Value Gaps and Free Logic”, Journal of Philosophy , 63: 481–495. doi:10.2307/2024549
  • Frege, Gottlob, 1893, Grundgesetze Der Arithmetik , Begriffsschriftlich Abgeleitet, Vol. 1, 1st ed, Jena: H. Pohle.
  • Galinsky, Adam D., Katie A. Liljenquist, Laura J. Kray, and Neal J. Roese, 2005, “Finding Meaning from Mutability: Making Sense and Deriving Significance through Counterfactual Thinking”, in The Psychology of Counterfactual Thinking , David R. Mandel, Denis J. Hilton, and Patrizia Catellani (eds), New York: Routledge, 110–127.
  • Gamut, L.T.F., 1991, Logic, Language and Meaning: Intensional Logic and Logical Grammar , Vol. 2, Chicago: The University of Chicago Press.
  • Gärdenfors, Peter, 1978, “Conditionals and Changes of Belief”, in The Logic and Epistemology of Scientific Belief , Ilkka Niiniluoto and Raimo Tuomela (eds.), Amsterdam: North-Holland.
  • –––, 1982, “Imaging and Conditionalization”, Journal of Philosophy , 79(12): 747–760. doi:10.2307/2026039
  • Gibbard, Allan F., 1980, “Two Recent Theories of Conditionals”, in Harper, Stalnaker, and Pearce 1980: 211–247. doi:10.1007/978-94-009-9117-0_10
  • Gibbard, Allan F. and William L. Harper, 1978, “Counterfactuals and Two Kinds of Expected Utility”, in Foundations and Applications of Decision Theory , Clifford Hooker, James J. Leach, and Edward McClennen (eds.), Dordrecht: D. Reidel, 125–162. doi:10.1007/978-94-009-9789-9_5
  • Gillies, Anthony, 2007, “Counterfactual Scorekeeping”, Linguistics and Philosophy , 30(3): 329–360. doi:10.1007/s10988-007-9018-6
  • –––, 2012, “Indicative Conditionals”, in The Routledge Companion to Philosophy of Language , Gillian Russell and Delia Graff Fara (eds.), New York: Routledge, 449–465.
  • Ginsberg, Matthew L., 1985, “Counterfactuals”, in Proceedings of the Ninth International Joint Conference on Artificial Intelligence , Aravind Joshi (ed.), Los Altos, CA: Morgan Kaufmann, 80–86. [ Ginsberg 1985 available online ]
  • Glymour, Clark, 2001, The Mind’s Arrows: Bayes Nets and Graphical Causal Models in Psychology , Cambridge, MA: MIT Press.
  • Goodman, Nelson, 1947, “The Problem of Counterfactual Conditionals”, The Journal of Philosophy , 44(5): 113–118. doi:10.2307/2019988
  • –––, 1954, Fact, Fiction and Forecast , Cambridge, MA: Harvard University Press.
  • Gopnik, Alison, Clark Glymour, David M. Sobel, Laura E. Schulz, Tamar Kushnir, and David Danks, 2004, “A Theory of Causal Learning in Children: Causal Maps and Bayes Nets”, Psychological Review, 111(1): 3–32. doi:10.1037/0033-295X.111.1.3
  • Gopnik, Alison and Joshua B. Tenenbaum, 2007, “Bayesian Networks, Bayesian Learning and Cognitive Development”, Developmental Science, 10(3): 281–287. doi:10.1111/j.1467-7687.2007.00584.x
  • Hájek, Alan, 2014, “Probabilities of Counterfactuals and Counterfactual Probabilities”, Journal of Applied Logic , 12(3): 235–251. doi:10.1016/j.jal.2013.11.001
  • Halpern, Joseph and Judea Pearl, 2005a, “Causes and Explanations: A Structural-Model Approach. Part I: Causes”, British Journal for Philosophy of Science , 56(4): 843–887. doi:10.1093/bjps/axi147
  • –––, 2005b, “Causes and Explanations: A Structural-Model Approach. Part II: Explanations”, British Journal for Philosophy of Science , 56(4): 889–911. doi:10.1093/bjps/axi148
  • Hansson, Sven Ove, 1989, “New Operators for Theory Change”, Theoria , 55(2): 114–132. doi:10.1111/j.1755-2567.1989.tb00725.x
  • Harper, William L., 1975, “Rational Belief Change, Popper Functions and the Counterfactuals”, Synthese , 30(1–2): 221–262. doi:10.1007/BF00485309
  • Harper, William L., Robert Stalnaker, and Glenn Pearce (eds.), 1980, Ifs: Conditionals, Belief, Decision, Chance and Time , Dordrecht: Springer Netherlands. doi:10.1007/978-94-009-9117-0
  • Hawthorne, John, 2005, “Chance and Counterfactuals”, Philosophy and Phenomenological Research , 70(2): 396–405. doi:10.1111/j.1933-1592.2005.tb00534.x
  • Heintzelman, Samantha J., Justin Christopher, Jason Trent, and Laura A. King, 2013, “Counterfactual Thinking about One’s Birth Enhances Well-Being Judgments”, The Journal of Positive Psychology , 8(1): 44–49. doi:10.1080/17439760.2012.754925
  • Herzberger, Hans, 1979, “Counterfactuals and Consistency”, The Journal of Philosophy , 76(2): 83–88. doi:10.2307/2025977
  • Hiddleston, Eric, 2005, “A Causal Theory of Counterfactuals”, Noûs , 39(4): 632–657. doi:10.1111/j.0029-4624.2005.00542.x
  • Hitchcock, Christopher, 2001, “The Intransitivity of Causation Revealed by Equations and Graphs”, The Journal of Philosophy , 98(6): 273–299. doi:10.2307/2678432
  • –––, 2007, “Prevention, Preemption, and the Principle of Sufficient Reason”, Philosophical Review , 116(4): 495–532. doi:10.1215/00318108-2007-012
  • Horwich, Paul, 1987, Asymmetries in Time , Cambridge, MA: MIT Press.
  • Iatridou, Sabine, 2000, “The Grammatical Ingredients of Counterfactuality”, Linguistic Inquiry , 31(2): 231–270. doi:10.1162/002438900554352
  • Ichikawa, Jonathan, 2011, “Quantifiers, Knowledge, and Counterfactuals”, Philosophy and Phenomenological Research , 82(2): 287–313. doi:10.1111/j.1933-1592.2010.00427.x
  • Ippolito, Michela, 2006, “Semantic Composition and Presupposition Projection in Subjunctive Conditionals”, Linguistics and Philosophy , 29(6): 631–672. doi:10.1007/s10988-006-9006-2
  • –––, 2008, “Subjunctive Conditionals”, in Proceedings of Sinn Und Bedeutung 12 , Atle Grønn (ed.), Oslo: Department of Literature, Area Studies and European Languages, University of Oslo, 256–270.
  • –––, 2013, Subjunctive Conditionals: A Linguistic Analysis , (Linguistic Inquiry Monograph Series 65), Cambridge, MA: MIT Press.
  • –––, 2016, “How Similar Is Similar Enough?”, Semantics and Pragmatics , 9(6): 1–60. doi:10.3765/sp.9.6
  • Isard, S.D., 1974, “What Would You Have Done If…”, Theoretical Linguistics , 1(1–3): 233–255. doi:10.1515/thli.1974.1.1-3.233
  • Jackson, Frank, 1987, Conditionals , Oxford: Basil Blackwell.
  • Kahneman, Daniel, Paul Slovic, and Amos Tversky (eds.), 1982, Judgement under Uncertainty: Heuristics and Biases , Cambridge: Cambridge University Press.
  • Kant, Immanuel, 1781, Critique of Pure Reason , Paul Guyer and Allen Wood (trans.), Cambridge: Cambridge University Press, 1998.
  • Kaufmann, Stefan, 2005, “Conditional Predictions”, Linguistics and Philosophy , 28(2): 181–231. doi:10.1007/s10988-005-3731-9
  • –––, 2008, “Conditionals Right and Left: Probabilities for the Whole Family”, Journal of Philosophical Logic , 38(1): 1–53. doi:10.1007/s10992-008-9088-0
  • –––, 2013, “Causal Premise Semantics”, Cognitive Science , 37(6): 1136–1170. doi:10.1111/cogs.12063
  • –––, 2017, “The Limit Assumption”, Semantics and Pragmatics , 10(18). doi:10.3765/sp.10.18
  • Khoo, Justin, 2015, “On Indicative and Subjunctive Conditionals”, Philosophers’ Imprint , 15(32): 1–40. [ Khoo 2015 available online ]
  • Kment, Boris, 2006, “Counterfactuals and Explanation”, Mind , 115(458): 261–310. doi:10.1093/mind/fzl261
  • –––, 2014, Modality and Explanatory Reasoning , New York: Oxford University Press. doi:10.1093/acprof:oso/9780199604685.001.0001
  • Koslicki, Kathrin, 2016, “Where Grounding and Causation Part Ways: Comments on Schaffer”, Philosophical Studies , 173(1): 101–112. doi:10.1007/s11098-014-0436-3
  • Kratzer, Angelika, 1981a, “The Notional Category of Modality”, in Words, Worlds and Contexts , Hans-Jürgen Eikmeyer and Hannes Rieser (eds.), Berlin: Walter de Gruyter, 38–74.
  • –––, 1981b, “Partition and Revision: The Semantics of Counterfactuals”, Journal of Philosophical Logic , 10(2): 201–216. doi:10.1007/BF00248849
  • –––, 1986, “Conditionals”, in Proceedings from the 22nd Regional Meeting of the Chicago Linguistic Society , Chicago: University of Chicago, 1–15. [ Kratzer 1986 available online ]
  • –––, 1989, “An Investigation of the Lumps of Thought”, Linguistics and Philosophy , 12(5): 607–653. doi:10.1007/BF00627775
  • –––, 1990, “How Specific Is a Fact?”, in Proceedings of the 1990 Conference on Theories of Partial In- Formation , Center for Cognitive Science, University of Texas at Austin.
  • –––, 1991, “Modality”, in Semantics: An International Handbook of Contemporary Research , A. von Stechow and D. Wunderlich (eds.), Berlin: De Gruyter Mouton, 639–650.
  • –––, 2002, “Facts: Particulars or Information Units?”, Linguistics and Philosophy , 25(5–6): 655–670. doi:10.1023/A:1020807615085
  • –––, 2012, Modals and Conditionals: New and Revised Perspectives , New York: Oxford University Press. doi:10.1093/acprof:oso/9780199234684.001.0001
  • Kray, Laura J., Linda G. George, Katie A. Liljenquist, Adam D. Galinsky, Philip E. Tetlock, and Neal J. Roese, 2010, “From What Might Have Been to What Must Have Been: Counterfactual Thinking Creates Meaning”, Journal of Personality and Social Psychology , 98(1): 106–118. doi:10.1037/a0017905
  • Kripke, Saul A., 1963, “Semantical Analysis of Modal Logic I: Normal Modal Propositional Calculi”, Zeitschrift Für Mathematische Logik Und Grundlagen Der Mathematik , 9(5–6): 67–96. doi:10.1002/malq.19630090502
  • Kvart, Igal, 1986, A Theory of Counterfactuals , Indianapolis, IN: Hackett.
  • –––, 1992, “Counterfactuals”, Erkenntnis , 36(2): 139–179. doi:10.1007/BF00217472
  • Lange, Marc, 1999, “Laws, Counterfactuals, Stability, and Degrees of Lawhood”, Philosophy of Science , 66(2): 243–267. doi:10.1086/392686
  • –––, 2000, Natural Laws in Scientific Practice , New York: Oxford University Press.
  • –––, 2009, Laws and Lawmakers: Science, Metaphysics, and the Laws of Nature , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195328134.001.0001
  • Leitgeb, Hannes, 2012a, “A Probabilistic Semantics for Counterfactuals: Part A”, The Review of Symbolic Logic , 5(1): 26–84. doi:10.1017/S1755020311000153
  • –––, 2012b, “A Probabilistic Semantics for Counterfactuals: Part B”, The Review of Symbolic Logic , 5(1): 85–121. doi:10.1017/S1755020311000165
  • Levi, Isaac, 1988, “The Iteration of Conditionals and the Ramsey Test”, Synthese , 76(1): 49–81. doi:10.1007/BF00869641
  • Lewis, C. I., 1912, “Implication and the Algebra of Logic”, Mind , New Series, 21(84): 522–531. doi:10.1093/mind/XXI.84.522
  • –––, 1914, “The Calculus of Strict Implication”, Mind , New Series, 23(90): 240–247. doi:10.1093/mind/XXIII.1.240
  • Lewis, David K., 1973a, “Causation”, Journal of Philosophy , 70(17): 556–567. doi:10.2307/2025310
  • –––, 1973b, Counterfactuals , Cambridge, MA: Harvard University Press.
  • –––, 1973c, “Counterfactuals and Comparative Possibility”, Journal of Philosophical Logic , 2(4): 418–446.
  • –––, 1979, “Counterfactual Dependence and Time’s Arrow”, Noûs , 13(4): 455–476. doi:10.2307/2215339
  • –––, 1981, “Ordering Semantics and Premise Semantics for Counterfactuals”, Journal of Philosophical Logic , 10(2): 217–234. doi:10.1007/BF00248850
  • Lewis, Karen S., 2016, “Elusive Counterfactuals”, Noûs , 50(2): 286–313. doi:10.1111/nous.12085
  • –––, 2017, “Counterfactuals and Knowledge”, in The Routledge Handbook of Epistemic Contextualism , Jonathan Jenkins Ichikawa (ed.), New York: Routledge, 411–424.
  • –––, 2018, “Counterfactual Discourse in Context”, Noûs , 52(3): 481–507. doi:10.1111/nous.12194
  • Loewer, Barry, 1976, “Counterfactuals with Disjunctive Antecedents”, The Journal of Philosophy , 73(16): 531–537. doi:10.2307/2025717
  • –––, 1983, “Information and Belief”, Behavioral and Brain Sciences , 6(1): 75–76. doi:10.1017/S0140525X00014783
  • –––, 2007, “Counterfactuals and the Second Law”, in Causation, Physics, and the Constitution of Reality: Russell’s Republic Revisited , Huw Price and Richard Corry (eds.), New York: Oxford University Press, 293–326.
  • Lowe, E. J., 1983, “A Simplification of the Logic of Conditionals”, Notre Dame Journal of Formal Logic , 24(3): 357–366. doi:10.1305/ndjfl/1093870380
  • –––, 1990, “Conditionals, Context, and Transitivity”, Analysis , 50(2): 80–87. doi:10.1093/analys/50.2.80
  • Lucas, Christopher G. and Charles Kemp, 2015, “An Improved Probabilistic Account of Counterfactual Reasoning”, Psychological Review, 122(4): 700–734. doi:10.1037/a0039655
  • Lycan, William G., 2001, Real Conditionals , Oxford: Oxford University Press.
  • Lyons, John, 1977, Semantics , Vol. 2, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511620614
  • Mackie, John L., 1974, The Cement of the Universe: A Study in Causation , Oxford: Oxford University Press.
  • Marr, David, 1982, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information , San Francisco: W.H. Freeman.
  • Maudlin, Tim, 2007, Metaphysics Within Physics , New York: Oxford University Press. doi:10.1093/acprof:oso/9780199218219.001.0001
  • McKay, Thomas J. and Peter van Inwagen, 1977, “Counterfactuals with Disjunctive Antecedents”, Philosophical Studies , 31(5): 353–356. doi:10.1007/BF01873862
  • Morreau, Michael, 2009, “The Hypothetical Syllogism”, Journal of Philosophical Logic , 38(4): 447–464. doi:10.1007/s10992-008-9098-y
  • –––, 2010, “It Simply Does Not Add Up: Trouble with Overall Similarity”, The Journal of Philosophy , 107(9): 469–490. doi:10.5840/jphil2010107931
  • Moss, Sarah, 2012, “On the Pragmatics of Counterfactuals”, Noûs , 46(3): 561–586. doi:10.1111/j.1468-0068.2010.00798.x
  • Nelson, Everett J., 1933, “On Three Logical Principles in Intension”, The Monist , 43(2): 268–284. doi:10.5840/monist19334327
  • Nozick, Robert, 1969, “Newcomb’s Problem and Two Principles of Choice”, in Essays in Honor of Carl G. Hempel , Nicholas Rescher (ed.), Dordrecht: D. Reidel, 111–5.
  • Nute, Donald, 1975a, “Counterfactuals”, Notre Dame Journal of Formal Logic , 16(4): 476–482. doi:10.1305/ndjfl/1093891882
  • –––, 1975b, “Counterfactuals and the Similarity of Words”, The Journal of Philosophy , 72(21): 773–778. doi:10.2307/2025340
  • –––, 1980a, “Conversational Scorekeeping and Conditionals”, Journal of Philosophical Logic , 9(2): 153–166. doi:10.1007/BF00247746
  • –––, (ed.), 1980b, Topics in Conditional Logic , Dordrecht: Reidel. doi:10.1007/978-94-009-8966-5
  • Palmer, Frank Robert, 1986, Mood and Modality , Cambridge: Cambridge University Press.
  • Parisien, Christopher and Paul Thagard, 2008, “Robosemantics: How Stanley the Volkswagen Represents the World”, Minds and Machines , 18(2): 169–178. doi:10.1007/s11023-008-9098-2
  • Pearl, Judea, 1995, “Causation, Action, and Counterfactuals”, in Computational Learning and Probabilistic Reasoning , A. Gammerman (ed.), New York: John Wiley and Sons, 235–255.
  • –––, 2000, Causality: Models, Reasoning, and Inference , Cambridge: Cambridge University Press.
  • –––, 2002, “Reasoning with Cause and Effect”, AI Magazine , 23(1): 95–112. [ Pearl 2002 available online ]
  • –––, 2009, Causality: Models, Reasoning, and Inference , second edition, Cambridge: Cambridge University Press.
  • –––, 2013, “Structural Counterfactuals: A Brief Introduction”, Cognitive Science , 37(6): 977–985. doi:10.1111/cogs.12065
  • Peirce, Charles S., 1896, “The Regenerated Logic”, The Monist , 7(1): 19–40. doi:10.5840/monist18967121
  • Pendlebury, Michael, 1989, “The Projection Strategy and the Truth: Conditions of Conditional Statements”, Mind , 98(390): 179–205. doi:10.1093/mind/XCVIII.390.179
  • Pereboom, Derk, 2014, Free Will, Agency, and Meaning in Life , New York: Oxford University Press. doi:10.1093/acprof:oso/9780199685516.001.0001
  • Pollock, John L., 1976, Subjunctive Reasoning , Dordrecht: D. Reidel Publishing Co., doi:10.1007/978-94-010-1500-4
  • –––, 1981, “A Refined Theory of Counterfactuals”, Journal of Philosophical Logic , 10(2): 239–266. doi:10.1007/BF00248852
  • Quine, Willard Van Orman, 1960, Word and Object , Cambridge, MA: MIT Press.
  • –––, 1982, Methods of Logic , fourth edition, Cambridge, MA: Harvard University Press.
  • Rips, Lance J., 2010, “Two Causal Theories of Counterfactual Conditionals”, Cognitive Science , 34(2): 175–221. doi:10.1111/j.1551-6709.2009.01080.x
  • Rips, Lance J. and Brian J. Edwards, 2013, “Inference and Explanation in Counterfactual Reasoning”, Cognitive Science , 37(6): 1107–1135. doi:10.1111/cogs.12024
  • Ryle, Gilbert, 1949, The Concept of Mind , London: Hutchinson.
  • Sanford, David H., 1989, If P Then Q: Conditionals and the Foundations of Reasoning , London: Routledge.
  • Santorio, Paolo, 2014, “Filtering Semantics for Counterfactuals: Bridging Causal Models and Premise Semantics”, in the Proceedings of Semantics and Linguistic Theory (SALT) 24 , 494–513.
  • Schaffer, Jonathan, 2016, “Grounding in the Image of Causation”, Philosophical Studies , 173(1): 49–100. doi:10.1007/s11098-014-0438-1
  • Schulz, Katrin, 2007, “Minimal Models in Semantics and Pragmatics: Free Choice, Exhaustivity, and Conditionals”, PhD Thesis, Amsterdam: University of Amsterdam: Institute for Logic, Language and Computation. [ Schulz 2007 available online ]
  • –––, 2011, “If You’d Wiggled A, Then B Would’ve Changed”, Synthese , 179(2): 239–251. doi:10.1007/s11229-010-9780-9
  • –––, 2014, “Fake Tense in Conditional Sentences: A Modal Approach”, Natural Language Semantics , 22(2): 117–144. doi:10.1007/s11050-013-9102-0
  • Seto, Elizabeth, Joshua A. Hicks, William E. Davis, and Rachel Smallman, 2015, “Free Will, Counterfactual Reflection, and the Meaningfulness of Life Events”, Social Psychological and Personality Science , 6(3): 243–250. doi:10.1177/1948550614559603
  • Sider, Theodore, 2010, Logic for Philosophy , New York: Oxford University Press.
  • Skyrms, Brian, 1980, “The Prior Propensity Account of Subjunctive Conditionals”, in Harper, Stalnaker, and Pearce 1980: 259–265. doi:10.1007/978-94-009-9117-0_13
  • Sloman, Steven, 2005, Causal Models: How People Think About the World and Its Alternatives , New York: Oxford University Press. doi:10.1093/acprof:oso/9780195183115.001.0001
  • Sloman, Steven A. and David A. Lagnado, 2005, “Do We ‘Do’?”, Cognitive Science , 29(1): 5–39. doi:10.1207/s15516709cog2901_2
  • Slote, Michael, 1978, “Time in Counterfactuals”, Philosophical Review , 87(1): 3–27. doi:10.2307/2184345
  • Smilansky, Saul, 2000, Free Will and Illusion , New York: Oxford University Press.
  • Snider, Todd and Adam Bjorndahl, 2015, “Informative Counterfactuals”, Semantics and Linguistic Theory , 25: 1–17. doi:10.3765/salt.v25i0.3077
  • Spirtes, Peter, Clark Glymour, and Richard Scheines, 1993, Causation, Prediction, and Search , Berlin: Springer-Verlag.
  • –––, 2000, Causation, Prediction, and Search , second edition, Cambridge, MA: The MIT Press.
  • Sprigge, Timothy L.S., 1970, Facts, Worlds and Beliefs , London: Routledge & K. Paul.
  • –––, 2006, “My Philosophy and Some Defence of It”, in Consciousness, Reality and Value: Essays in Honour of T. L. S. Sprigge , Pierfrancesco Basile and Leemon B. McHenry (eds.), Heusenstamm: Ontos Verlag, 299–321.
  • Stalnaker, Robert C., 1968, “A Theory of Conditionals”, in Studies in Logical Theory , Nicholas Rescher (ed.), Oxford: Basil Blackwell, 98–112.
  • –––, 1972 [1980], “Letter to David Lewis”. Reprinted in Harper, Stalnaker, and Pearce 1980: 151–152. doi:10.1007/978-94-009-9117-0_7
  • –––, 1975, “Indicative Conditionals”, Philosophia , 5(3): 269–286. doi:10.1007/BF02379021
  • –––, 1978, “Assertion”, in Syntax and Semantics 9: Pragmatics , Peter Cole (ed.), New York: Academic Press, 315–332.
  • –––, 1980, “A Defense of Conditional Excluded Middle”, in Harper, Stalnaker, and Pearce 1980: 87–104. doi:10.1007/978-94-009-9117-0_4
  • –––, 1984, Inquiry , Cambridge, MA: MIT Press.
  • –––, 1999, Context and Content: Essays on Intentionality in Speech and Thought , Oxford: Oxford University Press.
  • Stalnaker, Robert C. and Richmond H. Thomason, 1970, “A Semantic Analysis of Conditional Logic”, Theoria , 36(1): 23–42. doi:10.1111/j.1755-2567.1970.tb00408.x
  • Starr, William B., 2012, The Structure of Possible Worlds , UCLA Workshop. [ Starr 2012 available online ]
  • –––, 2014, “A Uniform Theory of Conditionals”, Journal of Philosophical Logic , 43(6): 1019–1064. doi:10.1007/s10992-013-9300-8
  • Stone, Matthew, 1997, “The Anaphoric Parallel between Modality and Tense”. Philadelphia, PA: University of Pennsylvania Institute for Research in Cognitive Science, Technical Report No. MS-CIS-97-06. [ Stone 1997 available online ]
  • Swanson, Eric, 2012, “Conditional Excluded Middle without the Limit Assumption”, Philosophy and Phenomenological Research , 85(2): 301–321. doi:10.1111/j.1933-1592.2011.00507.x
  • Tedeschi, Philip, 1981, “Some Evidence for a Branching-Futures Semantic Model”, in Tense and Aspect , (Syntax and Semantics, 14), Philip Tedeschi and Annie Zaenen (eds.), New York: Academic Press, 239–269.
  • Tarski, Alfred, 1936, “Der Wahrheitsbegriff in den formalisierten Sprachen”, Studia Philosophica , 1: 261–405.
  • Thrun, Sebastian, Mike Montemerlo, Hendrik Dahlkamp, David Stavens, Andrei Aron, James Diebel, Philip Fong, et al., 2006, “Stanley: The Robot That Won the DARPA Grand Challenge”, Journal of Field Robotics , 23(9): 661–692. doi:10.1002/rob.20147
  • Tichý, Pavel, 1976, “A Counterexample to the Stalnaker-Lewis Analysis of Counterfactuals”, Philosophical Studies , 29(4): 271–273. doi:10.1007/BF00411887
  • Todd, William, 1964, “Counterfactual Conditionals and the Presuppositions of Induction”, Philosophy of Science , 31(2): 101–110. doi:10.1086/287987
  • Veltman, Frank, 1976, “Prejudices, Presuppositions and the Theory of Counterfactuals”, in Amsterdam Papers in Formal Grammar , J. Groenendijck and M. Stokhof (eds.) (Proceedings of the 1st Amsterdam Colloquium), University of Amsterdam, 248–281.
  • –––, 1985, “Logics for Conditionals”, Ph.D. Dissertation, Amsterdam: University of Amsterdam.
  • –––, 1986, “Data Semantics and the Pragmatics of Indicative Conditionals”, in On Conditionals , Elizabeth C. Traugott, Alice ter Meulen, Judy S. Reilly, and Charles A. Ferguson (eds.), Cambridge: Cambridge University Press.
  • –––, 2005, “Making Counterfactual Assumptions”, Journal of Semantics , 22(2): 159–180. doi:10.1093/jos/ffh022
  • Walters, Lee, 2014, “Against Hypothetical Syllogism”, Journal of Philosophical Logic , 43(5): 979–997. doi:10.1007/s10992-013-9305-3
  • Walters, Lee and J. Robert G. Williams, 2013, “An Argument for Conjunction Conditionalization”, The Review of Symbolic Logic , 6(04): 573–588. doi:10.1017/S1755020313000191
  • Warmbrōd, Ken, 1981a, “Counterfactuals and Substitution of Equivalent Antecedents”, Journal of Philosophical Logic , 10(2): 267–289. doi:10.1007/BF00248853
  • –––, 1981b, “An Indexical Theory of Conditionals”, Dialogue, Canadian Philosophical Review , 20(4): 644–664. doi:10.1017/S0012217300021399
  • Wasserman, Ryan, 2006, “The Future Similarity Objection Revisited”, Synthese , 150(1): 57–67. doi:10.1007/s11229-004-6256-9
  • Weatherson, Brian, 2001, “Indicative and Subjunctive Conditionals”, The Philosophical Quarterly , 51(203): 200–216. doi:10.1111/j.0031-8094.2001.00224.x
  • Willer, Malte, 2015, “Simplifying Counterfactuals”, in 20th Amsterdam Colloquium , Thomas Brochhagen, Floris Roelofsen, and Nadine Theiler (eds.), Amsterdam: ILLC, 428–437. [ Willer 2015 available online ]
  • –––, 2017, “Lessons from Sobel Sequences”, Semantics and Pragmatics , 10(4). doi:10.3765/sp.10.4
  • –––, 2018, “Simplifying with Free Choice”, Topoi , 37(3): 379–392. doi:10.1007/s11245-016-9437-5
  • Williams, J. Robert G., 2010, “Defending Conditional Excluded Middle”, Noûs , 44(4): 650–668. doi:10.1111/j.1468-0068.2010.00766.x
  • Williamson, Timothy, 2005, “Armchair Philosophy, Metaphysical Modality and Counterfactual Thinking”, Proceedings of the Aristotelian Society , 105(1): 1–23. doi:10.1111/j.0066-7373.2004.00100.x
  • –––, 2007, The Philosophy of Philosophy , Malden, MA: Blackwell.
  • Wilson, Alastair, 2018, “Metaphysical Causation”, Noûs , 52(4): 723–751. doi:10.1111/nous.12190
  • Woodward, Jim, 2002, “What Is a Mechanism? A Counterfactual Account”, Philosophy of Science , 69(S3): S366–S377. doi:10.1086/341859
  • –––, 2003, Making Things Happen: A Theory of Causal Explanation , Oxford: Oxford University Press. doi:10.1093/0195155270.001.0001
  • Zeman, Jay, 1997, “Peirce and Philo”, in Studies in the Logic of Charles Sanders Peirce , Nathan Houser, Don Roberts, and James Van Evra (eds.), Indianapolis: Indiana University Press, 402–417.

The Dicto Simpliciter Logical Fallacy


Dicto Simpliciter is a fallacy in which a general rule or observation is treated as universally true regardless of the circumstances or the individuals concerned. Also known as the fallacy of sweeping generalization, unqualified generalization, a dicto simpliciter ad dictum secundum quid, and fallacy of the accident (fallacia accidentis).

From the Latin, "from a saying without qualification"

Examples and Observations

  • "I know nothing about Jay-Z because ( sweeping generalization alert!) hip-hop stopped being interesting in about 1991; I've never knowingly listened to a Neil Young record all the way through because they all sound like someone strangling a cat (don't they?)." (Tony Naylor, "In Music, Ignorance Can Be Bliss." The Guardian , Jan. 1, 2008)
  • "In discussing people of whom we have little knowledge, we often use dicto simpliciter in the attempt to fix them the attributes of the groups they belong to... " Dicto simpliciter  arises whenever individuals are made to conform to group patterns. If they are treated in tight classes as 'teenagers,' 'Frenchmen,' or 'traveling salesmen,' and are assumed to bear the characteristics of those classes, no opportunity is permitted for their individual qualities to emerge. There are political ideologies which attempt to treat people in precisely this way, treating them only as members of sub-groups in society and allowing them only representation through a group whose values they may not, in fact, share." (Madsen Pirie, How to Win Every Argument: The Use and Abuse of Logic , 2nd ed. Bloomsbury, 2015)
  • New York Values "At the Republican presidential debate on Thursday, Senator Cruz attacked Donald Trump, one of his rivals for the party’s nomination, by saying darkly that he represented 'New York values.' "Asked to define the term, Senator Cruz offered a sweeping generalization for 8.5 million city dwellers. "'Everybody understands that the values in New York City are socially liberal and pro-abortion and pro-gay marriage,' he said. 'And focus on money and the media.'" (Mark Santora, "New Yorkers Quickly Unite Against Cruz After 'New York Values' Comment." The New York Times , January 15, 2016)
  • Everybody Should Exercise "' Dicto Simpliciter means an argument based on an unqualified generalization. For example: 'Exercise is good. Therefore everybody should exercise.' "'I agree,' said Polly earnestly. 'I mean exercise is wonderful. I mean it builds the body and everything.' "'Polly,' I said gently. 'The argument is a fallacy. Exercise is good is an unqualified generalization. For instance, if you have heart disease, exercise is bad, not good. Many people are ordered by their doctors not to exercise. You must qualify the generalization. You must say exercise is usually good, or exercise is good for most people. Otherwise, you have committed a Dicto Simpliciter. Do you see?' "'No,' she confessed. 'But this is marvy. Do more! Do more!'" (Max Shulman, The Many Loves of Dobie Gillis , 1951)
  • The Stork With One Leg "An amusing example of arguing a dicto simpliciter ad dictum secundum quid is contained in the following story told by Boccaccio in the Decameron : A servant who was roasting a stork for his master was prevailed upon by his sweetheart to cut off a leg for her to eat. When the bird came upon the table, the master desired to know what had become of the other leg. The man answered that storks never had more than one leg. The master, very angry, but determined to strike his servant dumb before he punished him, took him next day into the fields where they saw some storks, standing each on one leg, as storks do. The servant turned triumphantly to his master; on which the latter shouted, and the birds put down their other legs and flew away. 'Ah, sir,' said the servant, 'you did not shout to the stork at dinner yesterday: if you had done so, he would have shown his other leg too.'" (J. Welton, A Manual of Logic . Clive, 1905)

Structure & Outlining

Logical fallacies handlist.

Logical Fallacies Handlist: Fallacies are statements that might sound reasonable or superficially true but are actually flawed or dishonest. When readers detect them, these logical fallacies backfire by making the audience think the writer is (a) unintelligent or (b) deceptive. It is important to avoid them in your own arguments, and it is also important to be able to spot them in others’ arguments so a false line of reasoning won’t fool you. Think of this as intellectual kung-fu: the vital art of self-defense in a debate. For extra impact, learn both the Latin terms and the English equivalents. In general, one useful way to organize fallacies is by category. We have below fallacies of relevance, component fallacies, fallacies of ambiguity, and fallacies of omission. We will discuss each type in turn. The last point to discuss is Occam’s Razor.

FALLACIES OF RELEVANCE: These fallacies appeal to evidence or examples that are not relevant to the argument at hand.

Appeal to Force (Argumentum Ad Baculum or the “Might-Makes-Right” Fallacy): This argument uses force, the threat of force, or some other unpleasant backlash to make the audience accept a conclusion. It commonly appears as a last resort when evidence or rational arguments fail to convince a reader. If the debate is about whether or not 2+2=4, an opponent’s argument that he will smash your nose in if you don’t agree with his claim doesn’t change the truth of the issue. Logically, this consideration has nothing to do with the points under consideration. The fallacy is not limited to threats of violence, however. It includes threats of any unpleasant backlash: financial, professional, and so on. Example: “Superintendent, you should cut the school budget by $16,000. I need not remind you that past school boards have fired superintendents who cannot keep down costs.” While intimidation may force the superintendent to conform, it does not convince him that the choice to cut the budget was the most beneficial for the school or community. Lobbyists use this method when they remind legislators that they represent so many thousand votes in the legislators’ constituencies and threaten to throw the politician out of office if he doesn’t vote the way they want. Teachers use this method if they state that students should hold the same political or philosophical position as the teachers or risk failing the class. Note that it isn’t a logical fallacy, however, to assert that students must fulfill certain requirements in the course or risk failing the class!

Genetic Fallacy: The genetic fallacy is the claim that an idea, product, or person must be untrustworthy because of its racial, geographic, or ethnic origin. “That car can’t possibly be any good! It was made in Japan!” Or, “Why should I listen to her argument? She comes from California, and we all know those people are flakes.” Or, “Ha! I’m not reading that book. It was published in Tennessee, and we know all Tennessee folk are hillbillies and rednecks!” This type of fallacy is closely related to the fallacy of argumentum ad hominem or personal attack, appearing immediately below.

Personal Attack (Argumentum Ad Hominem, literally “argument toward the man”; also called “Poisoning the Well”): Attacking or praising the people who make an argument, rather than discussing the argument itself.
This practice is fallacious because the personal character of an individual is logically irrelevant to the truth or falseness of the argument itself. The statement “2+2=4” is true regardless of whether it is stated by criminals, congressmen, or pastors. There are two subcategories: (1) Abusive: To argue that proposals, assertions, or arguments must be false or dangerous because they originate with atheists, Christians, Muslims, communists, capitalists, the John Birch Society, Catholics, anti-Catholics, racists, anti-racists, feminists, misogynists (or any other group) is fallacious. This persuasion comes from irrational psychological transference rather than from an appeal to evidence or logic concerning the issue at hand. This is similar to the genetic fallacy.
(2) Circumstantial : To argue that an opponent should accept or reject an argument because of circumstances in his or her life. If one’s adversary is a clergyman, suggesting that he should accept a particular argument because not to do so would be incompatible with the scriptures is such a fallacy. To argue that, because the reader is a Republican or Democrat, she must vote for a specific measure is likewise a circumstantial fallacy. The opponent’s special circumstances have no control over the truth or untruth of a specific contention. The speaker or writer must find additional evidence beyond that to make a strong case. This is also similar to the genetic fallacy in some ways. If you are a college student who wants to learn rational thought, you simply must avoid circumstantial fallacies.

Argumentum ad Populum (Literally “Argument to the People”): Using an appeal to popular assent, often by arousing the feelings and enthusiasm of the multitude rather than building an argument. It is a favorite device with the propagandist, the demagogue, and the advertiser. An example of this type of argument is Shakespeare’s version of Mark Antony’s funeral oration for Julius Caesar. There are three basic approaches:

(1) Bandwagon Approach: “Everybody is doing it.” This argumentum ad populum asserts that, since the majority of people believes an argument or chooses a particular course of action, the argument must be true, or the course of action must be followed, or the decision must be the best choice. For instance, “85% of consumers purchase IBM computers rather than Macintosh; all those people can’t be wrong. IBM must make the best computers.” Popular acceptance of any argument does not prove it to be valid, nor does popular use of any product necessarily prove it is the best one. After all, 85% of people may once have thought the earth was flat, but that majority’s belief didn’t mean the earth really was flat when they believed it! Keep this in mind, and remember that everybody should avoid this type of logical fallacy.
(2) Patriotic Approach: “Draping oneself in the flag.” This argument asserts that a certain stance is true or correct because it is somehow patriotic, and that those who disagree are unpatriotic. It overlaps with pathos and argumentum ad hominem to a certain extent. The best way to spot it is to look for emotionally charged terms like Americanism, rugged individualism, motherhood, patriotism, godless communism, etc. A true American would never use this approach. And a truly free man will exercise his American right to drink beer, since beer belongs in this great country of ours. This approach is unworthy of a good citizen.

(3) Snob Approach: This type of argumentum ad populum doesn’t assert “everybody is doing it,” but rather that “all the best people are doing it.” For instance, “Any true intellectual would recognize the necessity for studying logical fallacies.” The implication is that anyone who fails to recognize the truth of the author’s assertion is not an intellectual, and thus the reader had best recognize that necessity.

In all three of these examples, the rhetorician does not supply evidence that an argument is true; he merely makes assertions about people who agree or disagree with the argument. For Christian students in religious schools like Carson-Newman, we might add a fourth category, “ Covering Oneself in the Cross .” This argument asserts that a certain political or denominational stance is true or correct because it is somehow “Christian,” and that anyone who disagrees is behaving in an “un-Christian” or “godless” manner. (It is similar to the patriotic approach except it substitutes a gloss of piety instead of patriotism.) Examples include the various “Christian Voting Guides” that appear near election time, many of them published by non-Church related organizations with hidden financial/political agendas, or the stereotypical crooked used-car salesman who keeps a pair of bibles on his dashboard in order to win the trust of those he would fleece. Keep in mind Moliere’s question in Tartuffe : “Is not a face quite different than a mask?” Is not the appearance of Christianity quite different than actual Christianity? Christians should beware of such manipulation since they are especially vulnerable to it.

Appeal to Tradition ( Argumentum Ad Traditionem; aka Argumentum Ad Antiquitatem ): This line of thought asserts that a premise must be true because people have always believed it or done it. For example, “We know the earth is flat because generations have thought that for centuries!” Alternatively, the appeal to tradition might conclude that the premise has always worked in the past and will thus always work in the future: “Jefferson City has kept its urban growth boundary at six miles for the past thirty years. That has been good enough for thirty years, so why should we change it now? If it ain’t broke, don’t fix it.” Such an argument is appealing in that it seems to be common sense, but it ignores important questions. Might an alternative policy work even better than the old one? Are there drawbacks to that long-standing policy? Are circumstances changing from the way they were thirty years ago? Has new evidence emerged that might throw that long-standing policy into doubt?

Appeal to Improper Authority (Argumentum Ad Verecundiam, literally “argument from modesty”): An appeal to an improper authority, such as a famous person or a source that may not be reliable or that might not know anything about the topic. This fallacy attempts to capitalize upon feelings of respect or familiarity with a famous individual. It is not fallacious to refer to an admitted authority if the individual’s expertise is within a strict field of knowledge. On the other hand, to cite Einstein to settle an argument about education or economics is fallacious. To cite Darwin, an authority on biology, on religious matters is fallacious. To cite Cardinal Spellman on legal problems is fallacious. The worst offenders usually involve movie stars and psychic hotlines. A subcategory is the Appeal to Biased Authority. In this sort of appeal, the authority is one who actually is knowledgeable on the matter, but one who may have professional or personal motivations that render his professional judgment suspect: for instance, “To determine whether fraternities are beneficial to this campus, we interviewed all the frat presidents.” Or again, “To find out whether or not sludge-mining really is endangering the Tuskogee salamander’s breeding grounds, we interviewed the owners of the sludge-mines, who declared there is no problem.” Indeed, it is important to get “both viewpoints” on an argument, but basing a substantial part of your argument on a source that has personal, professional, or financial interests at stake may lead to biased arguments. As Upton Sinclair once stated, “It’s difficult to get a man to understand something when his salary depends upon his not understanding it.” Sinclair is pointing out that even a knowledgeable authority might not be entirely rational on a topic when he has economic incentives that bias his thinking.

Appeal to Emotion (Argumentum Ad Misericordiam, literally “argument from pity”): An emotional appeal concerning what should be a logical issue during a debate. While pathos generally works to reinforce a reader’s sense of duty or outrage at some abuse, if a writer tries to use emotion merely for the sake of getting the reader to accept what should be a logical conclusion, the argument is a fallacy. For example, in the 1880s, prosecutors in a Virginia court presented overwhelming proof that a boy was guilty of murdering his parents with an ax. The defense entered a “not-guilty” plea on the grounds that the boy was now an orphan, with no one to look after his interests if the court was not lenient. This appeal to emotion is obviously misplaced, and the argument is irrelevant to the question of whether or not he committed the crime.

Argument from Adverse Consequences: Asserting that an argument must be false because the implications of it being true would create negative results. For instance, “The medical tests show that Grandma has advanced cancer. However, that can’t be true because then she would die! I refuse to believe it!” The argument is illogical because truth and falsity are not contingent upon how much we like or dislike the consequences of that truth. Grandma, indeed, might have cancer, in spite of how negative that fact may be or how cruelly it may affect us.

Argument from Personal Incredulity: Asserting that an opponent’s argument must be false because you personally don’t understand it or can’t follow its technicalities. For instance, one person might assert, “I don’t understand that engineer’s argument about how airplanes can fly. Therefore, I cannot believe that airplanes are able to fly.” Au contraire, that speaker’s own mental limitations do not limit the physical world—so airplanes may very well be able to fly in spite of a person’s inability to understand how they work. One person’s comprehension is not relevant to the truth of a matter.

Begging the Question (also called Petitio Principii , this term is sometimes used interchangeably with Circular Reasoning ): If writers assume as evidence for their argument the very conclusion they are attempting to prove, they engage in the fallacy of begging the question. The most common form of this fallacy is when the first claim is initially loaded with the very conclusion one has yet to prove. For instance, suppose a particular student group states, “Useless courses like English 101 should be dropped from the college’s curriculum.” The members of the student group then immediately move on in the argument, illustrating that spending money on a useless course is something nobody wants. Yes, we all agree that spending money on useless courses is a bad thing. However, those students never did prove that English 101 was itself a useless course–they merely “begged the question” and moved on to the next “safe” part of the argument, skipping over the part that’s the real controversy, the heart of the matter, the most important component. Begging the question is often hidden in the form of a complex question (see below).

Circular Reasoning is closely related to begging the question. Often writers using this fallacy take one idea and phrase it in two statements. The assertions differ sufficiently to obscure the fact that the same proposition occurs as both a premise and a conclusion. The speaker or author then tries to “prove” his or her assertion by merely repeating it in different words. Richard Whately wrote in Elements of Logic (London 1826): “To allow every man unbounded freedom of speech must always be on the whole, advantageous to the state; for it is highly conducive to the interest of the community that each individual should enjoy a liberty perfectly unlimited of expressing his sentiments.” Obviously the premise is not logically irrelevant to the conclusion, for if the premise is true the conclusion must also be true. It is, however, logically irrelevant in proving the conclusion. In the example, the author is repeating the same point in different words, and then attempting to “prove” the first assertion with the second one. A more complex but equally fallacious type of circular reasoning is to create a circular chain of reasoning like this one: “God exists.” “How do you know that God exists?” “The Bible says so.” “Why should I believe the Bible?” “Because it’s the inspired word of God.” If we draw this out as a chart, it looks like this:

“God exists.” → “The Bible says God exists.” → “The Bible is trustworthy because it is the inspired word of God.” → “God exists.” → . . .
The so-called “final proof” relies on unproven evidence set forth initially as the subject of debate. Basically, the argument goes in an endless circle, with each step of the argument relying on a previous one, which in turn relies on the first argument yet to be proven. Surely God deserves a more intelligible argument than the circular reasoning proposed in this example!
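To make the circularity concrete, here is a minimal Python sketch (the claims and the rests_on mapping are invented for illustration, not part of the original handlist) that follows each claim to the claim it rests on and stops once the chain returns to its starting point:

```python
# Each claim "rests on" the next claim in the chain (labels invented).
rests_on = {
    "God exists": "The Bible says God exists",
    "The Bible says God exists": "The Bible is the inspired word of God",
    "The Bible is the inspired word of God": "God exists",
}

claim, seen = "God exists", []
while claim not in seen:          # walk the chain until a claim repeats
    seen.append(claim)
    claim = rests_on[claim]

print(" -> ".join(seen + [claim]))
# God exists -> The Bible says God exists ->
#   The Bible is the inspired word of God -> God exists
```

The walk ends exactly where it began: no claim in the loop is ever grounded in independent evidence, which is why the chain proves nothing.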

Hasty Generalization (Dicto Simpliciter, also called “Jumping to Conclusions,” “Converse Accident”): Mistaken use of inductive reasoning when there are too few samples to prove a point. Example: “Susan failed Biology 101. Herman failed Biology 101. Egbert failed Biology 101. I therefore conclude that most students who take Biology 101 will fail it.” In understanding and characterizing general situations, a logician cannot normally examine every single example. However, the examples used in inductive reasoning should be typical of the problem or situation at hand. Maybe Susan, Herman, and Egbert are exceptionally poor students. Maybe they were sick and missed too many lectures that term to pass. If a logician wants to make the case that most students will fail Biology 101, she should (a) get a very large sample–at least one larger than three–or (b) if that isn’t possible, she will need to go out of her way to prove to the reader that her three samples are somehow representative of the norm. If a logician considers only exceptional or dramatic cases and generalizes a rule that fits these alone, she commits the fallacy of hasty generalization.

One common type of hasty generalization is the Fallacy of Accident . This error occurs when one applies a general rule to a particular case when accidental circumstances render the general rule inapplicable. For example, in Plato’s Republic , Plato finds an exception to the general rule that one should return what one has borrowed: “Suppose that a friend when in his right mind has deposited arms with me and asks for them when he is not in his right mind. Ought I to give the weapons back to him? No one would say that I ought or that I should be right in doing so. . . .” What is true in general may not be true universally and without qualification. So remember, generalizations are bad. All of them. Every single last one. Except, of course, for those that are not.

Another common example of this fallacy is the misleading statistic. Suppose an individual argues that women must be incompetent drivers, and he points out that last Tuesday at the Department of Motor Vehicles, 50% of the women who took the driving test failed. That would seem to be compelling evidence from the way the statistic is set forth. However, if only two women took the test that day, the results would be far less clear-cut. Incidentally, the cartoon Dilbert makes much of an incompetent manager who cannot perceive misleading statistics. He does a statistical study of when employees call in sick and cannot come to work during the five-day work week. He becomes furious to learn that 40% of office “sick-days” occur on Mondays (20%) and Fridays (20%)–just in time to create a three-day weekend. Suspecting fraud, he decides to punish his workers. The irony, of course, is that these two days compose 40% of a five-day work week, so the numbers are completely average. Similar nonsense emerges when parents or teachers complain that “50% of students perform at or below the national average on standardized tests in mathematics and verbal aptitude.” Of course they do! The very nature of an average implies that!
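To see why the manager’s suspicion is misplaced, here is a small Python sketch (a toy simulation with invented numbers, assuming sick days fall uniformly across the five weekdays):

```python
import random
from collections import Counter

# Simulate 10,000 sick days, each equally likely to land on any weekday.
weekdays = ["Mon", "Tue", "Wed", "Thu", "Fri"]
sick_days = Counter(random.choice(weekdays) for _ in range(10_000))

mon_fri_share = (sick_days["Mon"] + sick_days["Fri"]) / 10_000
print(f"Monday + Friday share: {mon_fri_share:.0%}")
# Prints roughly 40% -- exactly what chance predicts, since Monday and
# Friday are two of the five working days (2/5 = 40%).
```

Even with perfectly honest employees, the “alarming” 40% figure appears every time.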

False Cause : This fallacy establishes a cause/effect relationship that does not exist. There are various Latin names for various analyses of the fallacy. The two most common include these types:

(1) Non Causa Pro Causa (literally, “not the cause for a cause”): A general, catch-all category for mistaking a false cause of an event for the real cause.

(2) Post Hoc, Ergo Propter Hoc (literally, “after this, therefore because of this”): This type of false cause occurs when the writer mistakenly assumes that, because the first event preceded the second event, it must mean the first event caused the later one. Sometimes it does, but sometimes it doesn’t. It is the honest writer’s job to establish clearly that connection rather than merely assert it exists. Example: “A black cat crossed my path at noon. An hour later, my mother had a heart-attack. Because the first event occurred earlier, it must have caused the bad luck later.” This is how superstitions begin. The most common examples are arguments that viewing a particular movie or show, or listening to a particular type of music, “caused” the listener to perform an antisocial act–to snort coke, shoot classmates, or take up a life of crime. These may be potential suspects for the cause, but the mere fact that an individual did these acts and subsequently behaved in a certain way does not yet conclusively rule out other causes. Perhaps the listener had an abusive home-life or school-life, suffered from a chemical imbalance leading to depression and paranoia, or made a bad choice in his companions. Other potential causes must be examined before asserting that only one event or circumstance alone earlier in time caused an event or behavior later. For more information, see correlation and causation.
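A toy simulation can show how much raw material coincidence alone supplies for such superstitions. This Python sketch (all probabilities invented for illustration) counts how many people in a city of a million would cross a black cat on the same day an unrelated misfortune strikes:

```python
import random

p_cat = 0.01          # invented: chance of crossing a black cat today
p_misfortune = 0.001  # invented: chance of an unrelated misfortune today
people = 1_000_000

# The two events are generated independently, so any overlap is pure chance.
coincidences = sum(
    1 for _ in range(people)
    if random.random() < p_cat and random.random() < p_misfortune
)
print(coincidences)  # roughly 10 people per day, by coincidence alone
```

Roughly ten people a day would see the “omen” come true even though the two events are, by construction, completely independent.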

Irrelevant Conclusion ( Ignorantio Elenchi ): This fallacy occurs when a rhetorician adapts an argument purporting to establish a particular conclusion and directs it to prove a different conclusion. For example, when a particular proposal for housing legislation is under consideration, a legislator may argue that decent housing for all people is desirable. Everyone, presumably, will agree. However, the question at hand concerns a particular measure. The question really isn’t, “Is it good to have decent housing?” The question really is, “Will this particular measure actually provide it or is there a better alternative?” This type of fallacy is a common one in student papers when students use a shared assumption–such as the fact that decent housing is a desirable thing to have–and then spend the bulk of their essays focused on that fact rather than the real question at issue. It’s similar to begging the question , above.

One of the most common forms of Ignorantio Elenchi is the “Red Herring.” A red herring is a deliberate attempt to change the subject or divert the argument from the real question at issue to some side-point; for instance, “Senator Jones should not be held accountable for cheating on his income tax. After all, there are other senators who have done far worse things.” Another example: “I should not pay a fine for reckless driving. There are many other people on the street who are dangerous criminals and rapists, and the police should be chasing them, not harassing a decent tax-paying citizen like me.” Certainly, worse criminals do exist, but that is another issue! The questions at hand are (1) did the speaker drive recklessly, and (2) should he pay a fine for it?

Another similar example of the red herring is the fallacy known as Tu Quoque (Latin for “And you too!”), which asserts that the advice or argument must be false simply because the person presenting the advice doesn’t consistently follow it herself. For instance, “Susan the yoga instructor claims that a low-fat diet and exercise are good for you–but I saw her last week pigging out on oreos, so her argument must be a load of hogwash.” Or, “Reverend Jeremias claims that theft is wrong, but how can theft be wrong if Jeremias himself admits he stole objects when he was a child?” Or “Thomas Jefferson made many arguments about equality and liberty for all Americans, but he himself kept slaves, so we can dismiss any thoughts he had on those topics.”

Straw Man Argument : A subtype of the red herring , this fallacy includes any lame attempt to “prove” an argument by overstating, exaggerating, or over-simplifying the arguments of the opposing side. Such an approach is building a straw man argument. The name comes from the idea of a boxer or fighter who meticulously fashions a false opponent out of straw, like a scarecrow, and then easily knocks it over in the ring before his admiring audience. His “victory” is a hollow mockery, of course, because the straw-stuffed opponent is incapable of fighting back. When a writer makes a cartoon-like caricature of the opposing argument, ignoring the real or subtle points of contention, and then proceeds to knock down each “fake” point one-by-one, he has created a straw man argument.

For instance, one speaker might be engaged in a debate concerning welfare. The opponent argues, “Tennessee should increase funding to unemployed single mothers during the first year after childbirth because they need sufficient money to provide medical care for their newborn children.” The second speaker retorts, “My opponent believes that some parasites who don’t work should get a free ride from the tax money of hard-working honest citizens. I’ll show you why he’s wrong . . .” In this example, the second speaker is engaging in a straw man strategy, distorting the opposition’s statement about medical care for newborn children into an oversimplified form so he can more easily appear to “win.” However, the second speaker is only defeating a dummy-argument rather than honestly engaging in the real nuances of the debate.

Non Sequitur (literally, “it does not follow”): A non sequitur is any argument that does not follow from the previous statements. Usually what has happened is that the writer leaped from A to B and then jumped to D, leaving out step C of an argument she thought through in her head but did not put down on paper. The phrase is applicable in general to any type of logical fallacy, but logicians use the term particularly in reference to syllogistic errors such as the undistributed middle term, non causa pro causa, and ignorantio elenchi. A common example would be an argument along these lines: “Giving up our nuclear arsenal in the 1980s weakened the United States’ military. Giving up nuclear weaponry also weakened China in the 1990s. For this reason, it is wrong to try to outlaw pistols and rifles in the United States today.” There’s obviously a step or two missing here.

The “Slippery Slope” Fallacy (also called “The Camel’s Nose Fallacy”) is a non sequitur in which the speaker argues that, once the first step is undertaken, a second or third step will inevitably follow, much like the way one step on a slippery incline will cause a person to fall and slide all the way to the bottom. It is also called “the Camel’s Nose Fallacy” because of the image of a sheik who refuses to let his camel stick its nose into his tent on a cold night, fearing that once the beast sticks in its nose, it will inevitably stick in its head, and then its neck, and eventually its whole body. However, this sort of thinking does not allow for any possibility of stopping the process. It simply assumes that, once the nose is in, the rest must follow–that the sheik can’t stop the progression once it has begun–and thus the argument is a logical fallacy. For instance, one might argue, “If we allow the government to infringe upon our right to privacy on the Internet, it will then feel free to infringe upon our privacy on the telephone. After that, FBI agents will be reading our mail. Then they will be placing cameras in our houses. We must not let any governmental agency interfere with our Internet communications, or privacy will completely vanish in the United States.” Such thinking is fallacious; no logical proof has been provided yet that infringement in one area will necessarily lead to infringement in another, no more than a person buying a single can of Coca-Cola in a grocery store would indicate the person will inevitably go on to buy every item available in the store, helpless to stop herself. So remember to avoid the slippery slope fallacy; once you use one, you may find yourself using more and more logical fallacies.

Either/Or Fallacy (also called “the Black-and-White Fallacy,” “Excluded Middle,” “False Dilemma,” or “False Dichotomy”): This fallacy occurs when a writer builds an argument upon the assumption that there are only two choices or possible outcomes when actually there are several. Outcomes are seldom so simple. This fallacy most frequently appears in connection to sweeping generalizations: “Either we must ban X or the American way of life will collapse.” “We go to war with Canada, or else Canada will eventually grow in population and overwhelm the United States.” “Either you drink Burpsy Cola, or you will have no friends and no social life.” Either you must avoid either/or fallacies, or everyone will think you are foolish.

Faulty Analogy : Relying only on comparisons to prove a point rather than arguing deductively and inductively. For example, “education is like cake; a small amount tastes sweet, but eat too much and your teeth will rot out. Likewise, more than two years of education is bad for a student.” The analogy is only acceptable to the degree a reader thinks that education is similar to cake. As you can see, faulty analogies are like flimsy wood, and just as no carpenter would build a house out of flimsy wood, no writer should ever construct an argument out of flimsy material.

Undistributed Middle Term: A specific type of error in deductive reasoning in which the middle term of a syllogism is never distributed, so the major and minor terms might or might not overlap. Consider these two examples: (1) “All reptiles are cold-blooded. All snakes are reptiles. Therefore, all snakes are cold-blooded.” In the first example, the middle term “reptiles” links “snakes” with “things-that-are-cold-blooded,” so the conclusion follows. (2) “All snails are cold-blooded. All snakes are cold-blooded. Therefore, all snails are snakes.” In the second example, the middle term “cold-blooded” never links “snails” with “snakes”: both premises can be true while the conclusion is false. Sometimes, equivocation (see below) leads to an undistributed middle term.
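For readers who like to check validity mechanically, here is a minimal Python sketch (category members invented) that models the second syllogism’s categories as sets. Both premises hold, yet the conclusion fails:

```python
# Model each category as a set of individuals (members invented).
cold_blooded = {"garden snail", "garter snake", "gecko"}
snails = {"garden snail"}
snakes = {"garter snake"}

# Both premises of the invalid syllogism are true:
assert snails <= cold_blooded   # "All snails are cold-blooded."
assert snakes <= cold_blooded   # "All snakes are cold-blooded."

# Yet the conclusion "All snails are snakes" is false: the middle term
# ("cold-blooded") never links the two groups to each other.
print(snails <= snakes)  # False
```

The sets make the flaw visible: sharing a common superset does not force two categories to overlap.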

Contradictory Premises (also known as a logical paradox): Establishing a premise in such a way that it contradicts another, earlier premise. For instance, “If God can do anything, he can make a stone so heavy that he can’t lift it.” The first premise establishes a deity that has the irresistible capacity to move other objects. The second premise establishes an immovable object impervious to any movement. If a being capable of moving anything exists, then by definition an immovable object cannot exist, and vice-versa.

Closely related is the fallacy of Special Pleading , in which the writer creates a universal principle, then insists that principle does not for some reason apply to the issue at hand. For instance, “Everything must have a source or creator. Therefore God must exist and he must have created the world. What? Who created God? Well, God is eternal and unchanging–He has no source or creator.” In such an assertion, either God must have His own source or creator, or else the universal principle of everything having a source or creator must be set aside—the person making the argument can’t have it both ways.

FALLACIES OF AMBIGUITY : These errors occur with ambiguous words or phrases, the meanings of which shift and change in the course of discussion. Such more or less subtle changes can render arguments fallacious.

Equivocation : Using a word in a different way than the author used it in the original premise, or changing definitions halfway through a discussion. When we use the same word or phrase in different senses within one line of argument, we commit the fallacy of equivocation. Consider this example: “Plato says the end of a thing is its perfection; I say that death is the end of life; hence, death is the perfection of life.” Here the word end means “goal” in Plato’s usage, but it means “last event” or “termination” in the author’s second usage. Clearly, the speaker is twisting Plato’s meaning of the word to draw a very different conclusion. Compare with amphiboly , below.

Amphiboly (from the Greek for “indeterminate”): This fallacy is similar to equivocation. Here, the ambiguity results from grammatical construction. A statement may be true according to one interpretation of how each word functions in a sentence and false according to another. When a premise works with an interpretation that is true, but the conclusion uses the secondary “false” interpretation, we have the fallacy of amphiboly on our hands. In the command, “Save soap and waste paper,” the amphibolous use of “waste” results in the problem of determining whether “waste” functions as a verb or as an adjective.

Composition : This fallacy is a result of reasoning from the properties of the parts of the whole to the properties of the whole itself–it is an inductive error. Such an argument might hold that, because every individual part of a large tractor is lightweight, the entire machine also must be lightweight. This fallacy is similar to Hasty Generalization (see above), but it focuses on parts of a single whole rather than using too few examples to create a categorical generalization. Also compare it with Division (see below).
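A quick arithmetic sketch in Python (the part count and weight are invented) makes the inductive error plain:

```python
# Every individual part is "lightweight," yet the whole machine is not.
part_weight_kg = 0.5     # hypothetical weight of each part
number_of_parts = 5_000  # hypothetical part count for a large tractor

print(part_weight_kg * number_of_parts)  # 2500.0 kg -- hardly lightweight
```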

Division : This fallacy is the reverse of composition . It is the misapplication of deductive reasoning. One fallacy of division argues falsely that what is true of the whole must be true of individual parts. Such an argument notes that, “Microtech is a company with great influence in the California legislature. Egbert Smith works at Microtech. He must have great influence in the California legislature.” This is not necessarily true. Egbert might work as a graveyard shift security guard or as the copy-machine repairman at Microtech–positions requiring little interaction with the California legislature. Another fallacy of division attributes the properties of the whole to the individual member of the whole: “Sunsurf is a company that sells environmentally safe products. Susan Jones is a worker at Sunsurf. She must be an environmentally minded individual.” (Perhaps she is motivated by money alone?)

Fallacy of Reification (also called “Fallacy of Misplaced Concreteness” by Alfred North Whitehead): The fallacy of treating a word or an idea as equivalent to the actual thing represented by that word or idea, or the fallacy of treating an abstraction or process as equivalent to a concrete object or thing. In the first case, we might imagine a reformer trying to eliminate illicit lust by banning all mention of extra-marital affairs or certain sexual acts in publications. The problem is that eliminating the words for these deeds is not the same as eliminating the deeds themselves. In the second case, we might imagine a person declaring “a war on poverty.” In this case, the fallacy comes from the fact that “war” implies a concrete struggle with another concrete entity which can surrender or be exterminated. “Poverty,” however, is an abstraction that cannot surrender or sign peace treaties, cannot be shot or bombed, etc. Reification of the concept merely muddles the issue of what policies to follow and leads to sloppy thinking about the best way to handle a problem. It is closely related to and overlaps with faulty analogy and equivocation.

FALLACIES OF OMISSION: These errors occur because the logician leaves out necessary material in an argument or misdirects the audience away from missing information.

Stacking the Deck : In this fallacy, the speaker “stacks the deck” in her favor by ignoring examples that disprove the point and listing only those examples that support her case. This fallacy is closely related to hasty generalization, but the term usually implies deliberate deception rather than an accidental logical error. Contrast it with the straw man argument .

‘No True Scotsman’ Fallacy : Attempting to stack the deck specifically by defining terms in such a narrow or unrealistic manner as to exclude or omit relevant examples from a sample. For instance, suppose speaker #1 asserts, “The Scottish national character is brave and patriotic. No Scottish soldier has ever fled the field of battle in the face of the enemy.” Speaker #2 objects, “Ah, but what about Lucas MacDurgan? He fled from German troops in World War I.” Speaker #1 retorts, “Well, obviously he doesn’t count as a true Scotsman because he did not live up to Scottish ideals, thus he forfeited his Scottish identity.” By this fallacious reasoning, any individual who would serve as evidence contradicting the first speaker’s assertion is conveniently and automatically dismissed from consideration. We commonly see this fallacy when a company asserts that it cannot be blamed for one of its particularly unsafe or shoddy products because that particular one doesn’t live up to its normally high standards, and thus shouldn’t “count” against its fine reputation. Likewise, defenders of Christianity as a positive historical influence in their zeal might argue the atrocities of the eight Crusades do not “count” in an argument because the Crusaders weren’t living up to Christian ideals, and thus aren’t really Christians, etc. So, remember this fallacy. Philosophers and logicians never use it, and anyone who does use it by definition is not really a philosopher or logician.

Argument from the Negative: Arguing from the negative asserts that, since one position is untenable, the opposite stance must be true. This fallacy is often used interchangeably with Argumentum Ad Ignorantium (listed below) and the either/or fallacy (listed above). For instance, one might mistakenly argue that, since Newtonian physics is not one hundred percent accurate, Einstein’s theory of relativity must be true. Perhaps not. Perhaps the theories of quantum mechanics are more accurate, and Einstein’s theory is flawed. Perhaps they are all wrong. Disproving an opponent’s argument does not necessarily mean your own argument must be true automatically, no more than disproving your opponent’s assertion that 2+2=5 would automatically mean your argument that 2+2=7 must be the correct one. Keeping this in mind, students should remember that arguments from the negative are bad, so arguments from the positive must automatically be good.

Appeal to a Lack of Evidence ( Argumentum Ad Ignorantium , literally “Argument from Ignorance”): Appealing to a lack of information to prove a point, or arguing that, since the opposition cannot disprove a claim, the opposite stance must be true. An example of such an argument is the assertion that ghosts must exist because no one has been able to prove that they do not exist. Logicians know this is a logical fallacy because no competing argument has yet revealed itself.

Hypothesis Contrary to Fact ( Argumentum Ad Speculum ): Trying to prove something in the real world by using imaginary examples alone, or asserting that, if hypothetically X had occurred, Y would have been the result. For instance, suppose an individual asserts that if Einstein had been aborted in utero , the world would never have learned about relativity, or that if Monet had been trained as a butcher rather than going to college, the impressionistic movement would have never influenced modern art. Such hypotheses are misleading lines of argument because it is often possible that some other individual would have solved the relativistic equations or introduced an impressionistic art style. The speculation might make an interesting thought-experiment, but it is simply useless when it comes to actually proving anything about the real world. A common example is the idea that one “owes” her success to another individual who taught her. For instance, “You owe me part of your increased salary. If I hadn’t taught you how to recognize logical fallacies, you would be flipping hamburgers at McDonald’s for minimum wages right now instead of taking in hundreds of thousands of dollars as a lawyer.” Perhaps. But perhaps the audience would have learned about logical fallacies elsewhere, so the hypothetical situation described is meaningless.

Complex Question (also called the “Loaded Question”): Phrasing a question or statement in such a way as to imply another unproven statement is true without evidence or discussion. This fallacy often overlaps with begging the question (above), since it also presupposes a definite answer to a previous, unstated question. For instance, if I were to ask you “Have you stopped taking drugs yet?” my hidden supposition is that you have been taking drugs. Such a question cannot be answered with a simple yes or no answer. It is not a simple question but consists of several questions rolled into one. In this case the unstated question is, “Have you taken drugs in the past?” followed by, “If you have taken drugs in the past, have you stopped taking them now?” In cross-examination, a lawyer might ask a flustered witness, “Where did you hide the evidence?” or “When did you stop beating your wife?” The intelligent procedure when faced with such a question is to analyze its component parts. If one answers or discusses the prior, implicit question first, the explicit question may dissolve.

Complex questions appear in written argument frequently. A student might write, “Why is private development of resources so much more efficient than any public control?” The rhetorical question leads directly into his next argument. However, an observant reader may disagree, recognizing the prior, implicit question remains unaddressed. That question is, of course, whether private development of resources really is more efficient in all cases, a point which the author is skipping entirely and merely assuming to be true without discussion.

To master logic more fully, become familiar with the tool of Occam’s Razor .

  • Logical Fallacies Handlist. Authored by : Dr. Kip Wheeler. Provided by : Carson Newman University. Located at : https://web.cn.edu/kwheeler/fallacies_list.html . License : CC BY-SA: Attribution-ShareAlike

Counterfactual Thinking

  • Counterfactual Thinking. Authored by: Felipe De Brigard. Provided by: The Palgrave Encyclopedia of the Possible, Palgrave Macmillan, Cham (2022). Located at: https://doi.org/10.1007/978-3-319-98390-5_43-1. License: © 2022 The Author(s), under exclusive licence to Springer Nature Switzerland AG

A List Of Fallacious Arguments

"the jawbone of an ass is just as dangerous a weapon today as in sampson's time." --- richard nixon.

Ad Hominem: Attacking the person instead of attacking his argument. For example, “Von Daniken’s books about ancient astronauts are worthless because he is a convicted forger and embezzler.” (Which is true, but that’s not why they’re worthless.) Another example is this syllogism, which alludes to Alan Turing’s homosexuality: Turing thinks machines think. Turing lies with men. Therefore, machines don’t think.
Needling: Simply attempting to make the other person angry, without trying to address the argument at hand. Sometimes this is a delaying tactic. Needling is also Ad Hominem if you insult your opponent. You may instead insult something the other person believes in (“Argumentum Ad YourMomium”), interrupt, clown to show disrespect, be noisy, fail to pass over the microphone, and numerous other tricks. All of these work better if you are running things - for example, if it is your radio show, and you can cut off the other person’s microphone. If the host or moderator is firmly on your side, that is almost as good as running the show yourself. It’s even better if the debate is videotaped, and you are the person who will edit the video. If you wink at the audience, or in general clown in their direction, then we are shading over to Argument By Personal Charm. Usually, the best way to cope with insults is to show mild amusement, and remain polite. A humorous comeback will probably work better than an angry one.
Straw Man: Attacking an exaggerated or caricatured version of your opponent’s position. For example, the claim that “evolution means a dog giving birth to a cat.” Another example: “Senator Jones says that we should not fund the attack submarine program. I disagree entirely. I can’t understand why he wants to leave us defenseless like that.” On the Internet, it is common to exaggerate the opponent’s position so that a comparison can be made between the opponent and Hitler.
Inflation Of Conflict: Arguing that because scholars debate a certain point, they must know nothing, and their entire field of knowledge is “in crisis” or does not properly exist at all. For example, two historians debated whether Hitler killed five million Jews or six million Jews. A Holocaust denier argued that this disagreement made his claim credible, even though his death count is three to ten times smaller than the known minimum. Similarly, in “The Mythology of Modern Dating Methods” (John Woodmorappe, 1999) we find on page 42 that two scientists “cannot agree” about which one of two geological dates is “real” and which one is “spurious.” Woodmorappe fails to mention that the two dates differ by less than one percent.
Argument From Adverse Consequences: Saying an opponent must be wrong, because if he is right, then bad things would ensue. For example: God must exist, because a godless society would be lawless and dangerous. Or: the defendant in a murder trial must be found guilty, because otherwise husbands will be encouraged to murder their wives. Wishful thinking is closely related: “My home in Florida is one foot above sea level. Therefore I am certain that global warming will not make the oceans rise by fifteen feet.” Of course, wishful thinking can also be about positive consequences, such as winning the lottery, or eliminating poverty and crime.
Special Pleading: Using the arguments that support your position, but ignoring or somehow disallowing the arguments against it. Uri Geller used special pleading when he claimed that the presence of unbelievers (such as stage magicians) made him unable to demonstrate his psychic powers.
Excluded Middle (False Dichotomy): Assuming there are only two alternatives when in fact there are more. For example, assuming Atheism is the only alternative to Fundamentalism, or being a traitor is the only alternative to being a loud patriot.
Short Term Versus Long Term: A particular case of the Excluded Middle. For example, “We must deal with crime on the streets before improving the schools.” (But why can’t we do some of both?) Similarly, “We should take the scientific research budget and use it to feed starving children.”
Burden Of Proof: The claim that whatever has not yet been proved false must be true (or vice versa). Essentially the arguer claims that he should win by default if his opponent can’t make a strong enough case. There may be three problems here. First, the arguer claims priority, but can he back up that claim? Second, he is impatient with ambiguity, and wants a final answer right away. And third, “absence of evidence is not evidence of absence.”
Argument By Question: Asking your opponent a question which does not have a snappy answer. (Or anyway, no snappy answer that the audience has the background to understand.) Your opponent has a choice: he can look weak or he can look long-winded. For example, “How can scientists expect us to believe that anything as complex as a single living cell could have arisen as a result of random natural processes?” Actually, pretty well any question has this effect to some extent. It usually takes longer to answer a question than ask it. Variants are the rhetorical question and the loaded question, such as “Have you stopped beating your wife?”
Argument By Rhetorical Question: Asking a question in a way that leads to a particular answer. For example, “When are we going to give the old folks of this country the pension they deserve?” The speaker is leading the audience to the answer “Right now.” Alternatively, he could have said “When will we be able to afford a major increase in old age pensions?” In that case, the answer he is aiming at is almost certainly not “Right now.”
Fallacy Of The General Rule: Assuming that something true in general is true in every possible case. For example, “All chairs have four legs.” Except that rocking chairs don’t have any legs, and what is a one-legged “shooting stick” if it isn’t a chair? Similarly, there are times when certain laws should be broken. For example, ambulances are allowed to break speed laws.
Reductive Fallacy (Oversimplification): Over-simplifying. As Einstein said, everything should be made as simple as possible, but no simpler. Political slogans such as “Taxation is theft” fall in this category.
Genetic Fallacy (Fallacy Of Origins): Claiming that if an argument or arguer has some particular origin, the argument must be right (or wrong). The idea is that things from that origin, or that social class, have virtue or lack virtue. (Being poor or being rich may be held out as being virtuous.) Therefore, the actual details of the argument can be overlooked, since correctness can be decided without any need to listen or think.
Psychogenetic Fallacy: The notion that if you learn the psychological reason why your opponent likes an argument, then he’s biased, so his argument must be wrong.
Argument Of The Beard: Assuming that two ends of a spectrum are the same, since one can travel along the spectrum in very small steps. The name comes from the idea that being clean-shaven must be the same as having a big beard, since in-between beards exist. Similarly, all piles of stones are small, since if you add one stone to a small pile of stones it remains small. However, the existence of pink should not undermine the distinction between white and red.
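The beard argument is induction run amok; this toy Python sketch (the pile sizes and the “rule” are invented for illustration) applies the “one more stone can’t matter” principle blindly:

```python
# The fallacious rule: if a pile of n stones is small, a pile of
# n + 1 stones is small too. Start from one stone and apply it blindly.
pile_is_small = {1: True}
for n in range(1, 100_000):
    pile_is_small[n + 1] = pile_is_small[n]

print(pile_is_small[100_000])
# True -- the rule has quietly certified a 100,000-stone pile as "small,"
# even though the two ends of the spectrum are plainly different.
```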
Argument From Age (Wisdom Of The Ancients): Snobbery that very old (or very young) arguments are superior. This is a variation of the Genetic Fallacy, but has the psychological appeal of seniority and tradition (or innovation). Products labelled “New! Improved!” are appealing to a belief that innovation is of value for such products. It’s sometimes true. And then there are cans of “Old Fashioned Baked Beans.”
Not Invented Here: Ideas from elsewhere are made unwelcome. “This Is The Way We’ve Always Done It.” This fallacy is a variant of the Argument From Age. It gets a psychological boost from feelings that local ways are superior, or that local identity is worth any cost, or that innovations will upset matters. An example of this is the common assertion that America has “the best health care system in the world,” an idea that a 2007 New York Times editorial refuted. People who use the Not Invented Here argument are sometimes accused of being stick-in-the-muds. Conversely, foreign and “imported” things may be held out as superior.
Argument By Dismissal: An idea is rejected without saying why. Dismissals usually have overtones. For example, “If you don’t like it, leave the country” implies that your cause is hopeless, or that you are unpatriotic, or that your ideas are foreign, or maybe all three. “If you don’t like it, live in a Communist country” adds an emotive element.
Argument To The Future: Arguing that evidence will someday be discovered which will (then) support your point.
Poisoning The Wells: Discrediting the sources used by your opponent. This is a variation of Ad Hominem.
Argument By Emotive Language (Appeal To The People): Using emotionally loaded words to sway the audience’s sentiments instead of their minds. Many emotions can be useful: anger, spite, envy, condescension, and so on. For example, argument by condescension: “Support the ERA? Sure, when the women start paying for the drinks! Hah! Hah!” Americans who don’t like the Canadian medical system have referred to it as “socialist,” but I’m not quite sure if this is intended to mean “foreign,” or “expensive,” or simply guilty by association. Cliche Thinking and Argument By Slogan are useful adjuncts, particularly if you can get the audience to chant the slogan. People who rely on this argument may seed the audience with supporters or “shills,” who laugh, applaud or chant at proper moments. This is the live-audience equivalent of adding a laugh track or music track. Now that many venues have video equipment, some speakers give part of their speech by playing a prepared video. These videos are an opportunity to show a supportive audience, use emotional music, show emotionally charged images, and the like. The idea is old: there used to be professional cheering sections. (Monsieur Zig-Zag, pictured on the cigarette rolling papers, acquired his fame by applauding for money at the Paris Opera.) If the emotion in question isn’t harsh, Argument By Poetic Language helps the effect. Flattering the audience doesn’t hurt either.
Argument By Personal Charm: Getting the audience to cut you slack. Example: Ronald Reagan. It helps if you have an opponent with much less personal charm. Charm may create trust, or the desire to “join the winning team,” or the desire to please the speaker. This last is greatest if the audience feels sex appeal. Reportedly George W. Bush lost a debate when he was young, and said later that he would never be “out-bubba’d” again.
"I did not murder my mother and father with an axe ! Please don't find me guilty; I'm suffering enough through being an orphan." Some authors want you to know they're suffering for their beliefs. For example, "Scientists scoffed at Copernicus and Galileo; they laughed at Edison, Tesla and Marconi; they won't give my ideas a fair hearing either. But time will be the judge. I can wait; I am patient; sooner or later science will be forced to admit that all matter is built, not of atoms, but of tiny capsules of TIME." There is a strange variant which shows up on Usenet. Somebody refuses to answer questions about their claims, on the grounds that the asker is mean and has hurt their feelings. Or, that the question is personal.
Appeal To Force: Threats, or even violence. On the Net, the usual threat is of a lawsuit. The traditional religious threat is that one will burn in Hell. However, history is full of instances where expressing an unpopular idea could get you beaten up on the spot, or worse. “The clinching proof of my reasoning is that I will cut anyone who argues further into dogmeat.” -- Attributed to Sir Geoffery de Tourneville, ca 1350 A.D.
Argument By Vehemence: Being loud. Trial lawyers are taught this rule: If you have the facts, pound on the facts. If you have the law, pound on the law. If you don’t have either, pound on the table.
Begging The Question (Assuming The Answer): Reasoning in a circle. The thing to be proved is used as one of your assumptions. For example: “We must have a death penalty to discourage violent crime.” (This assumes it discourages crime.) Or, “The stock market fell because of a technical adjustment.” (But is an “adjustment” just a stock market fall?)
Stolen Concept: Using what you are trying to disprove. That is, requiring the truth of something for your proof that it is false. For example, using science to show that science is wrong. Or, arguing that you do not exist, when your existence is clearly required for you to be making the argument. This is a relative of Begging The Question, except that the circularity there is in what you are trying to prove, instead of what you are trying to disprove. It is also a relative of Reductio Ad Absurdum, where you temporarily assume the truth of something.
Argument From Authority: the claim that the speaker is an expert, and so should be trusted. There are degrees and areas of expertise. The speaker is actually claiming to be more expert, in the relevant subject area, than anyone else in the room. There is also an implied claim that expertise in the area is worth having. For example, claiming expertise in something hopelessly quack (like iridology) is actually an admission that the speaker is gullible.
Argument From False Authority: a strange variation on Argument From Authority. For example, the TV commercial which starts "I'm not a doctor, but I play one on TV." Just what are we supposed to conclude?
Appeal To Anonymous Authority: an Appeal To Authority is made, but the authority is not named. For example, "Experts agree that ...", "scientists say ..." or even "they say ...". This makes the information impossible to verify, and brings up the very real possibility that the arguer himself doesn't know who the experts are. In that case, he may just be spreading a rumor. The situation is even worse if the arguer admits it's a rumor.
"Albert Einstein was extremely impressed with this theory." (But a statement made by someone long-dead could be out of date. Or perhaps Einstein was just being polite. Or perhaps he made his statement in some specific context. And so on.) To justify an appeal, the arguer should at least present an exact quote. It's more convincing if the quote contains context, and if the arguer can say where the quote comes from. A variation is to appeal to unnamed authorities . There was a New Yorker cartoon, showing a doctor and patient. The doctor was saying: "Conventional medicine has no treatment for your condition. Luckily for you, I'm a quack." So the joke was that the doctor boasted of his lack of authority.
Appeal To False Authority: a variation on Appeal To Authority, but the Authority is outside his area of expertise. For example, "Famous physicist John Taylor studied Uri Geller extensively and found no evidence of trickery or fraud in his feats." Taylor was not qualified to detect trickery or fraud of the kind used by stage magicians. Taylor later admitted Geller had tricked him, but he apparently had not figured out how. A variation is to appeal to a non-existent authority. For example, someone reading an article by Creationist Dmitri Kuznetsov tried to look up the referenced articles. Some of the articles turned out to be in non-existent journals. Another variation is to misquote a real authority. There are several kinds of misquotation. A quote can be inexact or have been edited. It can be taken out of context. (Chevy Chase: "Yes, I said that, but I was singing a song written by someone else at the time.") The quote can be separate quotes which the arguer glued together. Or, bits might have gone missing. For example, it's easy to prove that Mick Jagger is an assassin. In "Sympathy For The Devil" he sang: "I shouted out, who killed the Kennedys, When after all, it was ... me."
Statement Of Conversion: the speaker says "I used to believe in X". This is simply a weak form of asserting expertise. The speaker is implying that he has learned about the subject, and now that he is better informed, he has rejected X. So perhaps he is now an authority, and this is an implied Argument From Authority. A more irritating version of this is "I used to think that way when I was your age." The speaker hasn't said what is wrong with your argument: he is merely claiming that his age has made him an expert. "X" has not actually been countered unless there is agreement that the speaker has that expertise. In general, any bald claim always has to be buttressed. For example, there are a number of Creationist authors who say they "used to be evolutionists", but the scientists who have rated their books haven't noticed any expertise about evolution.
Bad Analogy: claiming that two situations are highly similar, when they aren't. For example, "The solar system reminds me of an atom, with planets orbiting the sun like electrons orbiting the nucleus. We know that electrons can jump from orbit to orbit; so we must look to ancient records for sightings of planets jumping from orbit to orbit also." Or, "Minds, like rivers, can be broad. The broader the river, the shallower it is. Therefore, the broader the mind, the shallower it is." Or, "We have pure food and drug laws; why can't we have laws to keep movie-makers from giving us filth?"
Extended Analogy: the claim that two things, both analogous to a third thing, are therefore analogous to each other. For example, this debate: "I believe it is always wrong to oppose the law by breaking it." "Such a position is odious: it implies that you would not have supported Martin Luther King." "Are you saying that cryptography legislation is as important as the struggle for Black liberation? How dare you!"
Argument From Spurious Similarity: this is a relative of Bad Analogy. It is suggested that some resemblance is proof of a relationship. There is a WW II story about a British lady who was trained in spotting German airplanes. She made a report about a certain very important type of plane. While being quizzed, she explained that she hadn't been sure, herself, until she noticed that it had a little man in the cockpit, just like the little model airplane at the training class.
Reifying: an abstract thing is talked about as if it were concrete. (A possibly Bad Analogy is being made between concept and reality.) For example, "Nature abhors a vacuum."
Post Hoc, Ergo Propter Hoc: assuming that because two things happened, the first one caused the second one. (Sequence is not causation.) For example, "Before women got the vote, there were no nuclear weapons." Or, "Every time my brother Bill accompanies me to Fenway Park, the Red Sox are sure to lose." Essentially, these are arguments that the sun goes down because we've turned on the street lights.
Confusing Correlation And Causation: earthquakes in the Andes were correlated with the closest approaches of the planet Uranus. Therefore, Uranus must have caused them. (But Jupiter is nearer than Uranus, and more massive too.) When sales of hot chocolate go up, street crime drops. Does this correlation mean that hot chocolate prevents crime? No, it means that fewer people are on the streets when the weather is cold. The bigger a child's shoe size, the better the child's handwriting. Does having big feet make it easier to write? No, it means the child is older.
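Since the point here is that a hidden third variable can drive both trends, a small simulation makes it concrete. The sketch below (Python, with made-up numbers; none of it comes from the original examples) derives both hot-chocolate sales and street crime from temperature alone, and the two series still come out strongly correlated even though neither influences the other.

    import random
    import statistics

    def pearson(xs, ys):
        # Sample Pearson correlation coefficient.
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(0)
    temps = [random.uniform(-5, 30) for _ in range(365)]  # invented daily temperatures

    # Temperature drives both series; neither looks at the other.
    # Cold day: more hot chocolate sold, fewer people outside, less street crime.
    cocoa = [200 - 4 * t + random.gauss(0, 10) for t in temps]
    crime = [50 + 1.5 * t + random.gauss(0, 8) for t in temps]

    print(pearson(cocoa, crime))  # strongly negative, yet cocoa prevents nothing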
Reductive Fallacy (Oversimplification): trying to use one cause to explain something, when in fact it had several causes. For example, "The accident was caused by the taxi parking in the street." (But other drivers went around the taxi. Only the drunk driver hit the taxi.)
Cliche Thinking: using a well-known wise saying as evidence, as if it were proven, or as if it had no exceptions.
Exception That Proves The Rule: a specific example of Cliche Thinking. This is used when a rule has been asserted, and someone points out the rule doesn't always work. The cliche rebuttal is that this is "the exception that proves the rule". Many people think that this cliche somehow allows you to ignore the exception, and continue using the rule. In fact, the cliche originally did no such thing. There are two standard explanations for the original meaning. The first is that the word "prove" meant test. That is why the military takes its equipment to a Proving Ground to test it. So, the cliche originally said that an exception tests a rule. That is, if you find an exception to a rule, the cliche is saying that the rule is being tested, and perhaps the rule will need to be discarded. The second explanation is that the stating of an exception to a rule proves that the rule exists. For example, suppose it was announced that "Over the holiday weekend, students do not need to be in the dorms by midnight". This announcement implies that normally students do have to be in by midnight. In either case, the cliche is not about waving away objections.
Appeal To Widespread Belief (Bandwagon Fallacy): the claim, as evidence for an idea, that many people believe it, or used to believe it, or do it. If the discussion is about social conventions, such as "good manners", then this is a reasonable line of argument. However, in the 1800s there was a widespread belief that bloodletting cured sickness. All of these people were not just wrong, but horribly wrong, because in fact it made people sicker. Clearly, the popularity of an idea is no guarantee that it's right. Similarly, a common justification for bribery is that "Everybody does it". And in the past, this was a justification for slavery.
Fallacy Of Composition: assuming that a whole has the same simplicity as its constituent parts. In fact, a great deal of science is the study of emergent properties. For example, if you put a drop of oil on water, there are interesting optical effects. But the effect comes from the oil/water system: it does not come just from the oil or just from the water. Another example: "A car makes less pollution than a bus. Therefore, cars are less of a pollution problem than buses." Another example: "Atoms are colorless. Cats are made of atoms, so cats are colorless."
Fallacy Of Division: assuming that what is true of the whole is true of each constituent part. For example, human beings are made of atoms, and human beings are conscious, so atoms must be conscious.
Complex Question (Tying): unrelated points are treated as if they should be accepted or rejected together. In fact, each point should be accepted or rejected on its own merits. For example, "Do you support freedom and the right to bear arms?"
Slippery Slope Fallacy (Camel's Nose): there is an old saying about how if you allow a camel to poke his nose into the tent, soon the whole camel will follow. The fallacy here is the assumption that something is wrong because it is right next to something that is wrong. Or, it is wrong because it could slide towards something that is wrong. For example, "Allowing abortion in the first week of pregnancy would lead to allowing it in the ninth month." Or, "If we legalize marijuana, then more people will try heroin." Or, "If I make an exception for you then I'll have to make an exception for everyone."
Argument By Pigheadedness: refusing to accept something after everyone else thinks it is well enough proved. For example, there are still Flat Earthers.
Appeal To Coincidence: asserting that some fact is due to chance. For example, the arguer has had a dozen traffic accidents in six months, yet he insists they weren't his fault. This may be Argument By Pigheadedness. But on the other hand, coincidences do happen, so this argument is not always fallacious.
Argument By Repetition (Argument Ad Nauseam): if you say something often enough, some people will begin to believe it. There are some net.kooks who keep reposting the same articles to Usenet, presumably in hopes it will have that effect.
Argument By Half Truth (Suppressed Evidence): this is hard to detect, of course. You have to ask questions. For example, an amazingly accurate "prophecy" of the assassination attempt on President Reagan was shown on TV. But was the tape recorded before or after the event? Many stations did not ask this question. (It was recorded afterwards.) A book on "sea mysteries" or the "Bermuda Triangle" might tell us that the yacht Connemara IV was found drifting crewless, southeast of Bermuda, on September 26, 1955. None of these books mention that the yacht had been directly in the path of Hurricane Iona, with 180 mph winds and 40-foot waves.
Argument By Selective Observation: also called cherry picking, the enumeration of favorable circumstances, or as the philosopher Francis Bacon described it, counting the hits and forgetting the misses. For example, a state boasts of the Presidents it has produced, but is silent about its serial killers. Or, the claim "Technology brings happiness". (Now, there's something with hits and misses.) Casinos encourage this human tendency. There are bells and whistles to announce slot machine jackpots, but losing happens silently. This makes it much easier to think that the odds of winning are good.
Argument By Selective Reading: making it seem as if the weakest of an opponent's arguments was the best he had. Suppose the opponent gave a strong argument X and also a weaker argument Y. Simply rebut Y and then say the opponent has made a weak case. This is a relative of Argument By Selective Observation, in that the arguer overlooks arguments that he does not like. It is also related to Straw Man (Fallacy Of Extension), in that the opponent's argument is not being fairly represented.
Argument By Generalization: drawing a broad conclusion from a small number of perhaps unrepresentative cases. (The cases may be unrepresentative because of Selective Observation.) For example, "They say 1 out of every 5 people is Chinese. How is this possible? I know hundreds of people, and none of them is Chinese." So, by generalization, there aren't any Chinese anywhere. This is connected to the Fallacy Of The General Rule. Similarly, "Because we allow terminally ill patients to use heroin, we should allow everyone to use heroin." It is also possible to under-generalize. For example, "A man who had killed both of his grandmothers declared himself rehabilitated, on the grounds that he could not conceivably repeat his offense in the absence of any further grandmothers." -- "Ports Of Call" by Jack Vance
"I've thrown three sevens in a row. Tonight I can't lose." This is Argument By Generalization , but it assumes that small numbers are the same as big numbers. (Three sevens is actually a common occurrence. Thirty three sevens is not.) Or: "After treatment with the drug, one-third of the mice were cured, one-third died, and the third mouse escaped." Does this mean that if we treated a thousand mice, 333 would be cured ? Well, no.
Misunderstanding The Nature Of Statistics: President Dwight Eisenhower expressed astonishment and alarm on discovering that fully half of all Americans had below average intelligence. Similarly, some people get fearful when they learn that their doctor wasn't in the top half of his class. (But that's half of them.) "Statistics show that of those who contract the habit of eating, very few survive." -- Wallace Irwin.
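The gag turns on which "average" is meant. By definition, half of any group sits at or below the median, so "half of Americans are below average" is no scandal; and with a skewed quantity measured against the mean, far more than half can be below it. A tiny illustration with invented salary numbers:

    import statistics

    # Nine modest salaries and one enormous one (invented numbers).
    salaries = [30_000] * 9 + [1_000_000]

    mean = statistics.mean(salaries)      # 127,000
    median = statistics.median(salaries)  # 30,000
    below_mean = sum(s < mean for s in salaries)

    print(mean, median, below_mean)  # 9 of 10 people earn "below average", no scandal required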
Inconsistency: for example, the declining life expectancy in the former Soviet Union is due to the failures of communism. But the quite high infant mortality rate in the United States is not a failure of capitalism. This is related to Internal Contradiction.
Non Sequitur: something that just does not follow. For example, "Tens of thousands of Americans have seen lights in the night sky which they could not identify. The existence of life on other planets is fast becoming certainty!" Another example: arguing at length that your religion is of great help to many people. Then, concluding that the teachings of your religion are undoubtedly true. Or: "Bill lives in a large building, so his apartment must be large."
Meaningless Questions: irresistible forces meeting immovable objects, and the like.
Argument By Poetic Language: if it sounds good, it must be right. Songs often use this effect to create a sort of credibility - for example, "Don't Fear The Reaper" by Blue Oyster Cult. Politically oriented songs should be taken with a grain of salt, precisely because they sound good.
Argument By Slogan: if it's short, and connects to an argument, it must be an argument. (But slogans risk the Reductive Fallacy.) Being short, a slogan increases the effectiveness of Argument By Repetition. It also helps Argument By Emotive Language (Appeal To The People), since emotional appeals need to be punchy. (Also, the gallery can chant a short slogan.) Using an old slogan is Cliche Thinking.
Argument By Prestigious Jargon: using big complicated words so that you will seem to be an expert. Why do people use "utilize" when they could utilize "use"? For example, crackpots used to claim they had a Unified Field Theory (after Einstein). Then the word Quantum was popular. Lately it seems to be Zero Point Fields.
Argument By Gibberish (Bafflement): this is the extreme version of Argument By Prestigious Jargon. An invented vocabulary helps the effect, and some net.kooks use lots of CAPitaLIZation. However, perfectly ordinary words can be used to baffle. For example, "Omniscience is greater than omnipotence, and the difference is two. Omnipotence plus two equals omniscience. META = 2." [From R. Buckminster Fuller's No More Secondhand God.] Gibberish may come from people who can't find meaning in technical jargon, so they think they should copy style instead of meaning. It can also be a "snow job", AKA "baffle them with BS", by someone actually familiar with the jargon. Or it could be Argument By Poetic Language. An example of poetic gibberish: "Each autonomous individual emerges holographically within egoless ontological consciousness as a non-dimensional geometric point within the transcendental thought-wave matrix."
Equivocation: using a word to mean one thing, and then later using it to mean something different. For example, sometimes "Free software" costs nothing, and sometimes it is without restrictions. Some examples: "The sign said 'fine for parking here', and since it was fine, I parked there." All trees have bark. All dogs bark. Therefore, all dogs are trees. "Consider that two wrongs never make a right, but that three lefts do." - "Deteriorata", National Lampoon
Euphemism: the use of words that sound better. The lab rat wasn't killed, it was sacrificed. Mass murder wasn't genocide, it was ethnic cleansing. The death of innocent bystanders is collateral damage. Microsoft doesn't find bugs, or problems, or security vulnerabilities: they just discover an issue with a piece of software. This is related to Argument By Emotive Language, since the effect is to make a concept emotionally palatable.
Weasel Wording: this is very much like Euphemism, except that the word changes are done to claim a new, different concept rather than soften the old concept. For example, an American President may not legally conduct a war without a declaration of Congress. So, various Presidents have conducted "police actions", "armed incursions", "protective reaction strikes," "pacification," "safeguarding American interests," and a wide variety of "operations". Similarly, War Departments have become Departments of Defense, and untested medicines have become alternative medicines. The book "1984" has some particularly good examples.
Error Of Fact: for example, "No one knows how old the Pyramids of Egypt are." (Except, of course, for the historians who've read records and letters written by the ancient Egyptians themselves.) Typically, the presence of one error means that there are other errors to be uncovered.
Argument From Personal Astonishment: Errors of Fact caused by stating offhand opinions as proven facts. (The speaker's thought process being "I don't see how this is possible, so it isn't.") This isn't lying, quite. It just seems that way to people who know more about the subject than the speaker does.
Lies: intentional Errors of Fact. In some contexts this is called bluffing. If the speaker thinks that lying serves a moral end, this would be a Pious Fraud.
Contrarian Argument: in science, espousing something that the speaker knows is generally ill-regarded, or even generally held to be disproven. For example, claiming that HIV is not the cause of AIDS, or claiming that homeopathic remedies are not just placebos. In politics, the phrase may be used more broadly, to mean espousing some position that the establishment or opposition party does not hold. This is sometimes done to make people think, and sometimes it is needling, or perhaps it supports an external agenda. But it can also be done just to oppose conformity, or as a pose or style choice: to be a "maverick" or lightning rod. Or, perhaps just for the ego of standing alone: "It is not enough to succeed. Friends must be seen to have failed." -- Truman Capote "If you want to prove yourself a brilliant scientist, you don't always agree with the consensus. You show you're right and everyone else is wrong." -- Daniel Kirk-Davidoff discussing Richard Lindzen
Hypothesis Contrary To Fact: arguing from something that might have happened, but didn't.
Internal Contradiction: saying two contradictory things in the same argument. For example, claiming that Archaeopteryx is a dinosaur with hoaxed feathers, and also saying in the same book that it is a "true bird". Or another author who said on page 59, "Sir Arthur Conan Doyle writes in his autobiography that he never saw a ghost." But on page 200 we find "Sir Arthur's first encounter with a ghost came when he was 25, surgeon of a whaling ship in the Arctic..." This is much like saying "I never borrowed his car, and it already had that dent when I got it." This is related to Inconsistency.
Changing The Subject (Digression, Red Herring): this is sometimes used to avoid having to defend a claim, or to avoid making good on a promise. In general, there is something you are not supposed to notice. For example, I got a bill which had a big announcement about how some tax had gone up by 5%, and the costs would have to be passed on to me. But a quick calculation showed that the increased tax was only costing me a dime, while a different part of the bill had silently gone up by $10. This is connected to various diversionary tactics, which may be obstructive, obtuse, or needling. For example, if you quibble about the meaning of some word a person used, they may be quite happy about being corrected, since that means they've derailed you, or changed the subject. They may pick nits in your wording, perhaps asking you to define "is". They may deliberately misunderstand you: "You said this happened five years before Hitler came to power. Why are you so fascinated with Hitler? Are you anti-Semitic?"
Argument By Fast Talking: if you go from one idea to the next quickly enough, the audience won't have time to think. This is connected to Changing The Subject and (to some audiences) Argument By Personal Charm. However, some psychologists say that to understand what you hear, you must for a brief moment believe it. If this is true, then rapid delivery does not leave people time to reject what they hear.
Having Your Cake (Failure To Assert): almost claiming something, but backing out. For example, "It may be, as some suppose, that ghosts can only be seen by certain so-called sensitives, who are possibly special mutations with, perhaps, abnormally extended ranges of vision and hearing. Yet some claim we are all sensitives." Another example: "I don't necessarily agree with the liquefaction theory, nor do I endorse all of Walter Brown's other material, but the geological statements are informative." The strange thing here is that liquefaction theory (the idea that the world's rocks formed in flood waters) was demolished in 1788. To "not necessarily agree" with it, today, is in the category of "not necessarily agreeing" with 2+2=3. But notice that the writer implies some study of the matter, and only partial rejection. A similar thing is the failure to rebut. Suppose I raise an issue. The response that "Woodmorappe's book talks about that" could possibly be a reference to a resounding rebuttal. Or perhaps the responder hasn't even read the book yet. How can we tell? [I later discovered it was the latter.]
Ambiguous Assertion: a statement is made, but it is sufficiently unclear that it leaves some sort of leeway. For example, a book about Washington politics did not place quotation marks around quotes. This left ambiguity about which parts of the book were first-hand reports and which parts were second-hand reports, assumptions, or outright fiction. Of course, lack of clarity is not always intentional. Sometimes a statement is just vague. If the statement has two different meanings, this is Amphiboly. For example, "Last night I shot a burglar in my pyjamas."
Failure To State: if you make enough attacks, and ask enough questions, you may never have to actually define your own position on the topic.
Outdated Information: information is given, but it is not the latest information on the subject. For example, some creationist articles about the amount of dust on the moon quote a measurement made in the 1950s. But many much better measurements have been done since then.
Amazing Familiarity: the speaker seems to have information that there is no possible way for him to get, on the basis of his own statements. For example: "The first man on deck, seaman Don Smithers, yawned lazily and fingered his good luck charm, a dried seahorse. To no avail! At noon, the Sea Ranger was found drifting aimlessly, with every man of its crew missing without a trace!"
Least Plausible Hypothesis: ignoring all of the most reasonable explanations. This makes the desired explanation into the only one. For example: "I left a saucer of milk outside overnight. In the morning, the milk was gone. Clearly, my yard was visited by fairies." There is an old rule for deciding which explanation is the most plausible. It is most often called "Occam's Razor", and it basically says that the simplest is the best. The current phrase among scientists is that an explanation should be "the most parsimonious", meaning that it should not introduce new concepts (like fairies) when old concepts (like neighborhood cats) will do. On ward rounds, medical students love to come up with the most obscure explanations for common problems. A traditional response is to tell them "If you hear hoof beats, don't automatically think of zebras".
Argument By Scenario: telling a story which ties together unrelated material, and then using the story as proof they are related.
Affirming The Consequent: logic reversal. A correct statement of the form "if P then Q" gets turned into "Q therefore P". For example, "All cats die; Socrates died; therefore Socrates was a cat." Another example: "If the earth orbits the sun, then the nearer stars will show an apparent annual shift in position relative to more distant stars (stellar parallax). Observations show conclusively that this parallax shift does occur. This proves that the earth orbits the sun." In reality, it proves that Q [the parallax] is consistent with P [orbiting the sun]. But it might also be consistent with some other theory. (Other theories did exist. They are now dead, because although they were consistent with a few facts, they were not consistent with all the facts.) Another example: "If space creatures were kidnapping people and examining them, the space creatures would probably hypnotically erase the memories of the people they examined. These people would thus suffer from amnesia. But in fact many people do suffer from amnesia. This tends to prove they were kidnapped and examined by space creatures." This is also a Least Plausible Hypothesis explanation.
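Because this is a formal fallacy, it can be exposed mechanically: list every truth assignment and look for one that makes the premises true and the conclusion false. A short sketch (standard truth-table logic; the code is mine, not from the original text):

    from itertools import product

    def implies(p, q):
        # Material conditional: "if p then q" is false only when p is true and q is false.
        return (not p) or q

    # Affirming the consequent: from (P -> Q) and Q, conclude P.
    # A valid form has no assignment with true premises and a false conclusion.
    counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                       if implies(p, q) and q and not p]
    print(counterexamples)  # [(False, True)]: Q can hold without P, so the form is invalid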
Moving The Goalposts (Raising The Bar): if your opponent successfully addresses some point, then say he must also address some further point. If you can make these points more and more difficult (or diverse) then eventually your opponent must fail. If nothing else, you will eventually find a subject that your opponent isn't up on. This is related to Argument By Question. Asking questions is easy: it's answering them that's hard. If each new goal causes a new question, this may get to be Infinite Regression. It is also possible to lower the bar, reducing the burden on an argument. For example, a person who takes Vitamin C might claim that it prevents colds. When they do get a cold, then they move the goalposts, by saying that the cold would have been much worse if not for the Vitamin C.
Appeal To Complexity: if the arguer doesn't understand the topic, he concludes that nobody understands it. So, his opinions are as good as anybody's.
Common Sense: unfortunately, there simply isn't a common-sense answer for many questions. In politics, for example, there are a lot of issues where people disagree. Each side thinks that their answer is common sense. Clearly, some of these people are wrong. The reason they are wrong is because common sense depends on the context, knowledge and experience of the observer. That is why instruction manuals will often have paragraphs like these: When boating, use common sense. Have one life preserver for each person in the boat. When towing a water skier, use common sense. Have one person watching the skier at all times.
Argument By Laziness (Argument By Uninformed Opinion): the arguer hasn't bothered to learn anything about the topic. He nevertheless has an opinion, and will be insulted if his opinion is not treated with respect. For example, someone looked at a picture on one of my web pages, and made a complaint which showed that he hadn't even skimmed through the words on the page. When I pointed this out, he replied that I shouldn't have had such a confusing picture.
Disproof By Fallacy: if a conclusion can be reached in an obviously fallacious way, then the conclusion is incorrectly declared wrong. For example, "Take the division 64/16. Now, canceling a 6 on top and a 6 on the bottom, we get that 64/16 = 4/1 = 4." "Wait a second! You can't just cancel the six!" "Oh, so you're telling us 64/16 is not equal to 4, are you?"
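For what it's worth, the conclusion here is true even though the method is bogus: 64/16 really is 4, and the same digit-cancelling trick happens to work for a few other two-digit fractions. A short search (an illustrative sketch of my own; the variable names are not from the source) finds them:

    from fractions import Fraction

    # Search for two-digit fractions where "cancelling" the shared digit
    # (e.g. 16/64 -> 1/4) coincidentally gives the right answer.
    for num in range(10, 100):
        for den in range(num + 1, 100):
            a, b = divmod(num, 10)  # num = 10a + b
            c, d = divmod(den, 10)  # den = 10c + d
            if b == c and d != 0 and Fraction(num, den) == Fraction(a, d):
                print(f"{num}/{den} = {a}/{d}")
    # Prints 16/64 = 1/4, 19/95 = 1/5, 26/65 = 2/5, 49/98 = 4/8.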
Reductio Ad Absurdum: showing that your opponent's argument leads to some absurd conclusion. This is in general a reasonable and non-fallacious way to argue. If the issues are razor-sharp, it is a good way to completely destroy his argument. However, if the waters are a bit muddy, perhaps you will only succeed in showing that your opponent's argument does not apply in all cases. That is, using Reductio Ad Absurdum is sometimes using the Fallacy Of The General Rule. However, if you are faced with an argument that is poorly worded, or only lightly sketched, Reductio Ad Absurdum may be a good way of pointing out the holes. An example of why absurd conclusions are bad things: Bertrand Russell, in a lecture on logic, mentioned that in the sense of material implication, a false proposition implies any proposition. A student raised his hand and said "In that case, given that 1 = 0, prove that you are the Pope". Russell immediately replied, "Add 1 to both sides of the equation: then we have 2 = 1. The set containing just me and the Pope has 2 members. But 2 = 1, so it has only 1 member; therefore, I am the Pope."
False Compromise: if one does not understand a debate, it must be "fair" to split the difference, and agree on a compromise between the opinions. (But one side is very possibly wrong, and in any case one could simply suspend judgment.) Journalists often invoke this fallacy in the name of "balanced" coverage. "Some say the sun rises in the east, some say it rises in the west; the truth lies probably somewhere in between."
Fallacy Of The Crucial Experiment: claiming that some idea has been proved (or disproved) by a pivotal discovery. This is the "smoking gun" version of history. Scientific progress is often reported in such terms. This is inevitable when a complex story is reduced to a soundbite, but it's almost always a distortion. In reality, a lot of background happens first, and a lot of buttressing (or retraction) happens afterwards. And in natural history, most of the theories are about how often certain things happen (relative to some other thing). For those theories, no one experiment could ever be conclusive.
Two Wrongs Make A Right (Tu Quoque): a charge of wrongdoing is answered by a rationalization that others have sinned, or might have sinned. For example, Bill borrows Jane's expensive pen, and later finds he hasn't returned it. He tells himself that it is okay to keep it, since she would have taken his. War atrocities and terrorism are often defended in this way. Similarly, some people defend capital punishment on the grounds that the state is killing people who have killed. This is related to Ad Hominem (Argument To The Man).
Pious Fraud: a fraud done to accomplish some good end, on the theory that the end justifies the means. For example, a church in Canada had a statue of Christ which started to weep tears of blood. When analyzed, the blood turned out to be beef blood. We can reasonably assume that someone with access to the building thought that bringing souls to Christ would justify his small deception. In the context of debates, a Pious Fraud could be a lie. More generally, it would be when an emotionally committed speaker makes an assertion that is shaded, distorted or even fabricated. For example, British Prime Minister Tony Blair was accused in 2003 of "sexing up" his evidence that Iraq had Weapons of Mass Destruction. Around the year 400, Saint Augustine wrote two books, De Mendacio [On Lying] and Contra Mendacium [Against Lying], on this subject. He argued that the sin isn't in what you do (or don't) say, but in your intent to leave a false impression. He strongly opposed Pious Fraud. I believe that Martin Luther also wrote on the subject.


Strawperson fallacy : arguing against a view by attacking an exaggerated or absurd extension of it. A special case of irrelevant conclusion: attacking a misstated, deliberately weakened form of an argument. https://yourlogicalfallacyis.com/strawman

Tu quoque : See Ad Hominem.

Two Wrongs Make A Right : Answering a charge of wrongdoing, not by showing that no wrong was done, but rather by claiming others do it too.

Unqualified Generalization : See Accident.

Sweeping Generalization : See Accident.

A fallacy is a kind of error in reasoning. The "Partial List of Fallacies" found in the Internet Encyclopedia of Philosophy contains 231 names of the most common fallacies, and it provides brief explanations and examples of each of them. Fallacious arguments should not be persuasive, but they too often are. Fallacies may be created unintentionally, or they may be created intentionally in order to deceive other people. - Internet Encyclopedia of Philosophy

MASTER LIST (146 fallacies): https://utminers.utep.edu/omwilliamson/ENGL1311/fallacies.htm

Logical Fallacies (3-minute read): https://edu.gcfglobal.org/en/problem-solving-and-decision-making/logical-fallacies/1/

Counterfactual fallacy

A counterfactual fallacy occurs when someone states a fact, asserts that something else would have been true had that fact not been true, and provides no evidence for this position.

The fallacy is a causation fallacy and an informal fallacy.


Alternative names

  • argumentum ad speculum
  • hypothesis contrary to fact

Form

A occurred, and then B occurred.
Therefore, if A had not occurred, B would not have occurred.

Or even more egregiously:

A did not occur.
If A had occurred, then B would have occurred.

The second form doesn't even explain the causal connection between A and B; it really is just wild speculation. The first form is a special case of denying the antecedent, applied to counterfactual reasoning; it ignores the possibility of B still occurring as an effect of causes other than A, even if A had not occurred.
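To see why the first form fails, it helps to simulate a world where B has more than one cause. The sketch below is a toy model of my own construction, not anything from the source: B occurs whenever A or an independent cause C occurs, so even with A forced off, B still happens about 30% of the time.

    import random

    random.seed(1)
    trials = 100_000

    b_count = 0
    for _ in range(trials):
        a = False                  # counterfactual world: A is forced not to occur
        c = random.random() < 0.3  # an independent cause, present 30% of the time
        b = a or c                 # toy model: B happens if A or C happens
        b_count += b

    print(b_count / trials)  # about 0.30: B still occurs even though A did not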

Speculative evidence

You commit this fallacy if you draw conclusions from evidence that hasn't been collected yet but that, you suppose, would have come out in your favor.

If there is no evidence to support a particular point, do not rely on that point to carry your argument. If pressed on a point where there is no valid evidence to support it, acknowledge the lack of data and suggest that the matter needs to be investigated in order to resolve the disputed issue.

Explanation

Confusing "what might have been" with "what ought to have been"; speculating what would have happened in other circumstances, then drawing conclusions from the speculation.

Examples

  • "We'd never have all this crime if [X] was president." This is unknowable because [X] isn't president.
  • "In this country citizens are permitted to own guns . If guns were outlawed, citizens would be unable to protect themselves and there would be an uncontrollable crime wave."

External links

  • See the Wikipedia article on Counterfactual conditional.
  • Hypothesis Contrary to Fact, Logically Fallacious
  • Logical Fallacy of Hypothesis Contrary to Fact, SeekFind
  • Counterfactuals (PDF), Richard Holton
  • Counterfactuals, OneGoodMove
  • Hypothesis Contrary to Fact, Robert Gass
  • Hypothesis Contrary to Fact, David Peterson
  • Speculative Evidence, Bruce Thompson
  • Hypothesis Contrary to Fact, Bruce Thompson: https://www2.palomar.edu/users/bthompson/Hypothesis%20Contrary%20to%20Fact.html

Conspiracy Theory, PWR 1, Fall 2008, Stanford University, Jonah Willihnganz

Logical Fallacies (Handout developed by Kimberly Moekle)

All of these definitions come from “Stephen’s Guide to the Logical Fallacies,” located at http://datanation.com/fallacies/index.htm, where you can find further information on all of the fallacies listed below. Stephen Downes is a Senior Researcher for the National Research Council of Canada, where he currently works as an “information architect,” and has become a leading voice in the areas of learning objects and metadata, as well as the emerging field of weblogs in education and content syndication.

Fallacies of Distraction

  • False Dilemma : two choices are given when in fact there are three options
  • From Ignorance : because something is not known to be true, it is assumed to be false
  • Slippery Slope : a series of increasingly unacceptable consequences is drawn
  • Complex Question : two unrelated points are conjoined as a single proposition

Appeals to Motives in Place of Support |

  • Appeal to Force : the reader is persuaded to agree by force
  • Appeal to Pity : the reader is persuaded to agree by sympathy
  • Consequences : the reader is warned of unacceptable consequences
  • Prejudicial Language : value or moral goodness is attached to believing the author
  • Popularity : a proposition is argued to be true because it is widely held to be true

Changing the Subject

  • Attacking the Person :
  • the person's character is attacked
  • the person's circumstances are noted
  • the person does not practice what is preached
  • Appeal to Authority :
  • the authority is not an expert in the field
  • experts in the field disagree
  • the authority was joking, drunk, or in some other way not being serious
  • Anonymous Authority : the authority in question is not named
  • Style Over Substance : the manner in which an argument (or arguer) is presented is felt to affect the truth of the conclusion

Inductive Fallacies

  • Hasty Generalization : the sample is too small to support an inductive generalization about a population
  • Unrepresentative Sample : the sample is unrepresentative of the population as a whole
  • False Analogy : the two objects or events being compared are relevantly dissimilar
  • Slothful Induction : the conclusion of a strong inductive argument is denied despite the evidence to the contrary
  • Fallacy of Exclusion : evidence which would change the outcome of an inductive argument is excluded from consideration

Fallacies Involving Statistical Syllogisms

  • Accident: a generalization is applied when circumstances suggest that there should be an exception
  • Converse Accident : an exception is applied in circumstances where a generalization should apply

Causal Fallacies

  • Post Hoc : because one thing follows another, it is held to cause the other
  • Joint effect : one thing is held to cause another when in fact they are both the joint effects of an underlying cause
  • Insignificant : one thing is held to cause another, and it does, but it is insignificant compared to other causes of the effect
  • Wrong Direction : the direction between cause and effect is reversed
  • Complex Cause : the cause identified is only a part of the entire cause of the effect

Missing the Point

  • Begging the Question : the truth of the conclusion is assumed by the premises
  • Irrelevant Conclusion : an argument in defense of one conclusion instead proves a different conclusion
  • Straw Man : the author attacks an argument different from (and weaker than) the opposition's best argument

Fallacies of Ambiguity

  • Equivocation : the same term is used with two different meanings
  • Amphiboly : the structure of a sentence allows two different interpretations
  • Accent: the emphasis on a word or phrase suggests a meaning contrary to what the sentence actually says

Category Errors

  • Composition : because the parts of a whole have a certain property, it is argued that the whole has that property
  • Division: because the whole has a certain property, it is argued that the parts have that property

Non Sequitur

  • Affirming the Consequent : any argument of the form: If A then B, B, therefore A
  • Denying the Antecedent : any argument of the form: If A then B, Not A, thus Not B
  • Inconsistency : asserting that contrary or contradictory statements are both true

Syllogistic Errors

  • Fallacy of Four Terms : a syllogism has four terms
  • Undistributed Middle : two separate categories are said to be connected because they share a common property
  • Illicit Major : the predicate of the conclusion talks about all of something, but the premises only mention some cases of the term in the predicate
  • Illicit Minor : the subject of the conclusion talks about all of something, but the premises only mention some cases of the term in the subject
  • Fallacy of Exclusive Premises : a syllogism has two negative premises
  • Fallacy of Drawing an Affirmative Conclusion From a Negative Premise : as the name implies
  • Existential Fallacy : a particular conclusion is drawn from universal premises

Fallacies of Explanation

  • Subverted Support : the phenomenon being explained doesn't exist
  • Non-support : evidence for the phenomenon being explained is biased
  • Untestability : the theory which explains cannot be tested
  • Limited Scope : the theory which explains can only explain one thing
  • Limited Depth : the theory which explains does not appeal to underlying causes

Fallacies of Definition

  • Too Broad : the definition includes items which should not be included
  • Too Narrow : the definition does not include all the items which should be included
  • Failure to Elucidate : the definition is more difficult to understand than the word or concept being defined
  • Circular Definition : the definition includes the term being defined as a part of the definition
  • Conflicting Conditions : the definition is self-contradictory


Writers.com

A logical fallacy occurs when someone tries to persuade you with a faulty argument. Sometimes, logical fallacies are innocuous: the writer has a good argument to make, it was just set up through faulty logic. However, logical fallacies run rampant among less-than-sincere writers, and if you want to write well and read well, then knowing our list of logical fallacies will help arm you against faulty arguments.

Because people are constantly trying to persuade you of something—politicians, advertisers, social media posts, etc.—logical fallacies occur all the time. Good persuasive writers will know how to avoid these common logical fallacies, and good readers will know how to identify them without being persuaded.

So, what is a logical fallacy? And why do they matter for my writing? Understanding the arguments in this list of logical fallacies will help strengthen your writing and ability to write effective arguments. But before we look at some examples of logical fallacies, let’s get clear on these persuasive and invasive mistakes in rhetoric.

Logical Fallacy Definition: What is a Logical Fallacy?


Simply put, a logical fallacy is an error in reasoning that undermines the logic of an argument. It does not necessarily undermine the persuasiveness of that argument, however; unless you are well-versed in the different types of logical fallacies, you can certainly be persuaded by one yourself.


A common logical fallacy example is a red herring. A red herring is an attempt to divert the audience’s attention from the argument itself. It might look something like this:

Some people criticize the SAT for measuring test taking skills, not college readiness. Nonetheless, a high SAT score will get you into better colleges.

This statement isn't actually addressing the issue of the SAT's validity; it's distracting you by bringing up the importance of a high test score, going so far as to dismiss the original claim entirely.

All logical fallacies have one thing in common: they don’t hold up to scrutiny. But there are different ways in which writers might present less-than-foolproof arguments. Let’s examine the common types of logical fallacies.



Most logical fallacies can be sorted into one of three categories:

  • Fallacies of Relevance: the premise is irrelevant to the conclusion it is meant to support. For example: You had a bad day because Mercury is in retrograde.
  • Fallacies of Unacceptable Premises: the premise is relevant but too weak to support the conclusion. For example: You had a bad day because you always have bad days when it rains before noon.
  • Formal Fallacies: the flaw lies in the structure of the argument itself. For example: Because rain symbolizes sadness, and because you are having a bad day, the rain is causing your bad day. In a formal fallacy, the flaw is in the logic and conclusion. Most other fallacies are informal fallacies, in which the flaw is simply the logic.

We’ll examine these three categories shortly. But before we examine some examples of logical fallacies, let’s talk about good persuasive writing.

By now, you’re probably familiar with the basic structure of an argumentative essay. Most essays, including those at the higher academic level, generally follow a thesis statement , followed by supporting claims , evidence , and a conclusion . Most essays also address potential counterclaims and offer rebuttal arguments .

The structure is the easy part. Aside from side-stepping all logical fallacies, how do you write a persuasive essay that’s actually, well, persuasive?

Here are a few tips:

  • Speak to your reader. Knowing your audience is crucial to making an effective argument. What ideas are they likely to resonate with? What vocabulary and word choice will they most likely understand? Even if you don’t know your exact audience, speaking to them will help you make a genuine connection with your readers.
  • Be concrete. Tie your thesis and arguments to the real world, even if your writing isn’t about real world issues. For example, an essay about the values of optimism can demonstrate those values through concrete examples: anecdotes, case studies, and psychological research, as well as moral and philosophical reasoning.
  • Sound like yourself. Using a lofty vocabulary or purple prose will not win over any of your readers. Part of building effective ethos is sounding like a reasonable voice, one which the reader can trust and rely on, and that comes through employing smart writing style strategies.
  • Know your rhetorical devices. A good balance of ethos, pathos, logos, and kairos will go a long way towards persuasiveness. And, knowing different types of argumentative and rhetorical structures will certainly come in handy. Similes, metaphors, and analogies are also great ways of demonstrating an argument.

Of course, these strategies alone don’t make for great persuasive writing. Having solid logic behind your reasoning and carefully crafted arguments will make your essays shine. As such, let’s look at some common logical fallacies and discuss how you can avoid them.

Logical Fallacies Examples

A good persuasive essay requires good thinking, writing, researching, and revising. Nonetheless, even the best thinkers are prone to these common logical fallacies. Understanding the errors of logic in this list, how they happen, and how to avoid them will strengthen your ability to argue and to identify faulty arguments.

We’ve sectioned this list by the different types of logical fallacies. Let’s examine them below!

Fallacies of Relevance are any number of informal logical fallacies in which an irrelevant argument is presented as relevant, distorting the conclusion or misdirecting the audience. You may have heard of the red herring logical fallacy before; most fallacies of relevance are, in some way, red herrings.


Let’s look closer at each one.

Ad Hominem Logical Fallacy

An Ad Hominem (Latin: “against the person”) attack is a logical fallacy in which the person is argued against, rather than the argument the person is making. In other words, it attacks the source of the argument rather than the argument’s substance.

Here are a few examples:

  • The car salesman is lying about the quality of the car because it’s his job to sell cars.
  • “You have no reason to raise the minimum wage if you’ve never run a business before.”
  • “I just saw my boss do a hit and run. Clearly, this means he’s a bad boss.”

None of these examples actually engage with logic. Accusing someone of lying or ignorance is a lazy way of avoiding the argument. And, while someone who commits a hit and run has questionable ethics, there isn’t a clear relationship between bad driving and bad leadership.

If any of these attacks sound familiar, it’s because Ad Hominem is a prominent feature of our cultural and political landscape. Now, there is something to be said about questioning the ethos of the person making an argument. There are plenty of people, politicians and otherwise, who do have ulterior motives and hidden agendas behind their logic and reasoning.

However, in good argumentation, you cannot simply question the ethos of the person. You must engage with the arguments themselves; an Ad Hominem attack is simply a distraction, meant to make the audience angry or distracted from the issues at hand.


Appeal to Consequences Logical Fallacy

The Appeal to Consequences argues that a premise is correct or incorrect based on whether the outcome is positive or negative. In other words, if a certain hypothesis leads to an undesirable consequence, the hypothesis “must” be wrong; if the consequence is positive, it “must” be right.

For example:

  • Rent prices are bound to decrease because more people will be able to afford housing.
  • It’s impossible to spend all your money gambling because then you couldn’t afford to eat.

Of course, valid hypotheses can result in negative outcomes, because an argument is valid irrespective of its outcome. And invalid hypotheses can promise positive outcomes: “wishful thinking” is itself a logical fallacy.

Appeal to Emotion Logical Fallacy

An Appeal to Emotion occurs when an argument tries to evoke an emotional response, rather than a logical one. For example:

  • “You should eat your food because a poor, starving child in Africa doesn’t have any. ”
  • (Appealing to your sense of guilt.)
  • “If you pass this law, thousands of your constituents will ransack your office. ” (Appealing to your sense of fear.)
  • “You can’t raise the minimum wage; your childhood enemy might make more money. ” (Appealing to your sense of hatred.)

Now, this logical fallacy is similar to the rhetorical device “pathos.” The difference is that, in good rhetoric, pathos is not the central argument. Pathos is a feature of good argumentation, because a good rhetorician knows which emotions to evoke from the audience and how those emotions inspire action or belief. But, when that emotional response is the desired outcome of the argument, without credible logic to back it up, then the speaker is trying to twist your feelings without good reasoning.

  • Pathos-inspired logic: Martin Luther King, Jr.’s “I Have a Dream” speech included many examples of racial inequality, including how “one hundred years [after slavery], the Negro lives on a lonely island of poverty in the midst of a vast ocean of material prosperity”. Calling attention to something ostensibly unfair inspired action; elsewhere in the speech, King uses ethos and logos to demand a better life for Black Americans—which, for the skeptical member of King’s audience, will also improve the lives of all Americans.
  • Appeal to Emotion fallacy: Let’s say King’s entire speech was just pathos. Or, let’s say King started arguing “if we don’t achieve racial equality, America will burn and everyone will die.” Then, the purpose of the speech would have been simply to make people angry and afraid, rather than to push for a more equitable society. The difference, here, lies in the purpose of the speech, and in the facts and logic played out on the national stage.

Appeal to Force Logical Fallacy

An Appeal to Force argues that physical or emotional harm is a consequence of certain arguments. It is related to the Appeal to Emotion in that it inspires fear.

  • “If you don’t work extra hours without pay, you’ll be fired without severance .”
  • “Maybe you’ll agree with me after I break a few of your ribs .”
  • “If you don’t vote for me, your rent will skyrocket, the streets will be riddled with crime, and your children will have no future to speak of.”

Obviously, these arguments aren’t arguments at all: they’re trying to coerce you into agreeing with something that has no logical backing.

Appeal to Ignorance Logical Fallacy

The Appeal to Ignorance is a logical fallacy in which something must be true because there is no evidence against it. In other words, the fallacy treats the absence of counterevidence as if it were positive support for the claim. But a failure to find evidence proves nothing by itself: “absence of evidence is not evidence of absence.”

  • Aliens do not exist because we have not come into contact with them.
  • We haven’t come into contact with the core of a black hole, so you cannot assume that the core of a black hole is not made up of bird’s feathers.

The Appeal to Ignorance is especially consequential in the courtroom. For example, if you don’t have an alibi, that means you must have killed the victim. The logic isn’t sound, but the wrong jury, or a jury with strong prejudices, might buy it.

Appeal to Improper Authority Logical Fallacy

The Appeal to Improper Authority argues that an argument must be true because it came from an authority figure. This is misplaced ethos, because the logical fallacy assumes one’s authority automatically grants ethos on a position, instead of that ethos being earned through argumentation.

  • “She has bipolar disorder. Trust me, I’m a psychology major. ”
  • “ My high school gym teacher told me never to use ice on a sprained ankle.”

Sometimes, the Appeal to Improper Authority is an appeal to the wrong kind of authority. Being a psychology major isn’t justification for diagnosing someone; you should have an advanced degree and research experience. You should also have conducted a psych evaluation on the person in question. Other times, this Appeal isn’t enough justification; you still need to back your arguments with logic. What knowledge does your degree as a psych major give you to make a certain conclusion?

However, this is not license to assume something is incorrect just because it comes from an authority figure. For example, many people assume that the advice from a doctor must be wrong. While doctors do make mistakes, attacking the credibility of a doctor, rather than the science behind the decisions they make, is just an Ad Hominem.

Appeal to Tradition Logical Fallacy

The Appeal to Tradition logical fallacy says “we’ve always done it this way.” Rather than interrogate the logic behind a certain action, the argument assumes the action is logically sound because it’s been done for a certain amount of time.

  • “Our family has always voted this way. Grandpa would kill me if I voted any other way!” (This neglects that a party’s positions change over time, as well as the political needs of a city/state/nation.)
  • “ Women have always tended to the hearth and raised the kids. It’s easier this way!”

Sometimes, tradition is rooted in logic. But a good argument will illuminate that logic, and that logic’s relevance to the modern day, rather than assume the logic exists.

Argument From Incredulity Logical Fallacy

An argument from incredulity occurs when you argue that something can’t be true solely because it’s difficult to imagine, hard to understand, or else doesn’t conform to your particular worldview.

  • Not believing we landed men on the moon.
  • “I don’t understand your argument, therefore it isn’t logical.”
  • “Your argument doesn’t align with my spiritual or political beliefs. Therefore, it’s wrong.”

This logical fallacy is often at play among conspiracy theorists, but it’s just another easy way to avoid the hard work of understanding and responding to logically sound arguments.

Argumentum ad Populum Logical Fallacy

The Argumentum ad Populum (Argument to the People, or “to Popularity”) is based on the premise that, if a certain number of people believe in the argument, it must be correct. This logical fallacy has a few different manifestations, including:

  • The Bandwagon Argument: “Most people believe that the iPhone is superior, so you should buy an iPhone.”
  • The Patriotic Argument (Jingoism): “You must buy an iPhone, because you’re supporting an American company with American values. Any other phone is tantamount to treason!”
  • The Snob Argument: “Anyone who’s rich and important has an iPhone. So, you should have one if you want to be rich and important.”

This argument can be difficult to respond to, because if the argument is wrong, you might be implying that the masses have poor logic. Well, sometimes they do. Argumentum ad Populum is simply peer pressure, not sound logic.

Genetic Fallacy

The Genetic Fallacy occurs when you base the validity, or invalidity, of an argument solely on its source. Ad Hominem can be a type of Genetic Fallacy, but you can also attack an argument’s validity by saying it came from Wikipedia, YouTube, or a certain publisher or newspaper.

  • “My parents told me not to trust dentists, so I don’t trust dentists.” (This is also an Appeal to Improper Authority.)
  • “Your information comes from Wikipedia. Clearly, your argument isn’t grounded on reliable data.”

You should certainly interrogate the source of information. However, good critical arguments will examine the research and methodologies behind that data, instead of just assuming invalidity.

Irrelevant Conclusion Logical Fallacy

The logical fallacy Irrelevant Conclusion, also known as ignoratio elenchi, describes a conclusion that is irrelevant to the premises allegedly supporting it.

  • “Fire can’t be dangerous to humans because it keeps us warm in the winter.”
  • “Cane sugar is good for you because it’s white, which is a pure color.”

Most logical fallacies of relevance are, in some way, fallacies of Irrelevant Conclusion.

Straw Man Argument Logical Fallacy

The Straw Man Argument occurs when you refute someone’s argument by responding to a completely different, utterly warped argument that the original person did not make. In other words, you distort an argument to make it easier to attack. The Straw Man is often a kind of Ad Hominem. It might look something like this:

Person 1: Investing money in your happiness today helps keep you motivated for longer term goals.

Person 2: What are you, some kind of hedonist?

This logical fallacy also occurs when you quote someone out of context. Think Fred Jones saying “I think Coolsville sucks!” in Scooby-Doo 2: Monsters Unleashed.

Tu Quoque Logical Fallacy

Tu Quoque is another form of Ad Hominem, in which a person’s behavior or past beliefs are called into question to discredit their current argument.

  • “Doctors tell you not to smoke, but doctors smoke all the time.”
  • “You cheated on your girlfriend, so why can’t I?”

Tu Quoque is sometimes called the Appeal to Hypocrisy. The importance of hypocrisy is not to be understated, but when it comes to logic and reasoning, someone being a hypocrite doesn’t necessarily discredit the argument at hand.

Fallacies of Unacceptable Premises attempt to introduce premises that, though possibly true, do not ultimately support the argument’s conclusions. This is different from fallacies of relevance because the premises are relevant; they just don’t support the conclusions.

Begging the Question Logical Fallacy

Begging the Question is a logical fallacy in which the validity of the conclusion is buried in the premise of the argument. In other words, the logic undergirding an argument makes assumptions that, when questioned, reveal the argument’s lack of reasoning: the premise restates the conclusion without supporting it. For example:

  • “I get to make the decisions around here because I’m the boss.” This is just saying “I’m the boss” in two different ways. It doesn’t actually explain why the boss gets to make those decisions.
  • “Our apple turnover is a bestseller because it’s the pastry people buy the most.” Well, yes. That’s the definition of a bestseller. But this doesn’t explain why the apple turnover sells so well.
  • “We should raise the minimum wage because workers deserve higher pay.” The premise is saying the same thing as the conclusion, perhaps with a moral appeal attached. Take it a step further: what benefits do we get from raising the minimum wage? The argument hasn’t been made yet.

Begging the Question happens a lot more often than you might think. By knowing this logical fallacy and noticing it, you’ll be able to question a person’s logic (or lack thereof) much more directly.

Division Fallacy

The division fallacy occurs when you assume that something true for a whole entity is also true for each individual component of that entity. For example:

  • There is a lot of money in the technology sector.
  • You work in the technology sector.
  • You make a lot of money.

Plenty of people make a lot of money in tech, but this assumption is riddled with errors. There are some low-paying positions in tech, and this argument does not take into account how money is distributed in tech.

False Dilemma Logical Fallacy

A False Dilemma occurs when an argument presents the audience a limited number of sides to an issue, when many more sides exist. By doing this, the argument hopes to make you choose its side over the other, when the situation is actually much more nuanced.

  • “You either support the war or you hate your country.”
  • “In high school, you’re either a nerd, a jock, or a prep.”
  • “Anything that doesn’t support a free market Capitalist economy is clearly part of an authoritarian Communist agenda.”

Binary thinking is a prominent—and dangerous—way of thinking. Good, honest rhetoricians will recognize that one issue can have many sides, and that good thinking acknowledges gray spaces and ambiguities, rather than trying to paint a black and white picture of the world. Rhetoricians should be confident in their arguments, but if someone presents themselves as knowing everything, especially if they present a limited number of sides to an issue, be skeptical.

Slippery Slope Logical Fallacy

The Slippery Slope fallacy argues that a small first step will result in a later, usually catastrophic major event. It amplifies the stakes of an argument without providing clear justification that the catastrophe will occur.

  • “Failing this one test means you might fail the class, which all but guarantees you won’t obtain your Master’s Degree.”
  • “Weed is a gateway drug. Within a few years, you’ll be a jobless, homeless addict craving your next fix.”
  • “If you give this person a pass for being late, you’ll have to give everyone a pass, and then the rules won’t matter anymore.”
  • “Lowering the voting age to 16 will encourage 12 year olds to try and vote. Eventually, this country will be run by children.”

This isn’t to say that all catastrophizing is automatically a Slippery Slope. Rather, it’s to note that small decisions can lead to a variety of outcomes; if a catastrophic outcome is predicted, that prediction must be supported by clear, structurally sound logic.

Hasty Generalization Logical Fallacy

A Hasty Generalization is a logical fallacy where a conclusion is drawn from a limited amount of information. The argument simply does not have enough data to support the conclusion it arrives at.

  • “My neighbor has tanned every day for the past 20 years and has flawless skin. Therefore, sun exposure doesn’t cause skin cancer.”
  • “Someone on the South Side flipped me off today. Everyone who lives there is so mean.”
  • “1,000 people committed food stamp fraud last year. All 3 million recipients must be gaming the system.”

As you can see, Hasty Generalizations are really useful tools for assigning blame and turning the audience against a certain group of people. If you want to claim something about a group or an outcome, a good argument uses robust, clearly organized data to support that claim.

Faulty Analogy Logical Fallacy

A Faulty Analogy is the use of an analogy to compare two things that do not merit a direct comparison. (In brief, an analogy is a literary device in which two or more discrete things are compared as equals.) Using a Faulty Analogy misrepresents the topic at hand.

  • “If you can survive the business world, you can survive anything the natural world throws at you.” There’s a false equivalence of those different “worlds” here.
  • “Car accidents kill far more people than opiates do, so the opioid crisis doesn’t deserve the attention it gets.” Part of the reason for this difference is that more people drive than take opiates. In any case, this is also presenting a False Dilemma: why can’t we improve both situations?
  • “I found a four-leaf clover right before I won the raffle, so a four-leaf clover is as good as a winning ticket.” This assumes that the two chance happenings are related to one another. But luck does not operate in any logical or meaningful way. The two simply can’t be compared.

When someone makes an argument using an analogy, ask yourself whether the items being compared exist on the same playing field. If they don’t, a logical fallacy is likely at play.

The Fallacy Fallacy

The Fallacy Fallacy occurs when you assume that an argument is incorrect because it contains a logical fallacy.

Now, that might seem ironic, or even completely contradictory. Isn’t that the entire point of this article?

What this means is, an argument can have the correct conclusion even if it uses a logical fallacy. The argument itself is flawed, but the conclusion can still be true; it just needs to be reached using different logic or a different set of data. For example:

  • “Horses can’t swim. A shark isn’t a horse, so sharks can swim.” Obviously, sharks can swim, but not because they’re not horses.
  • “Seattle exists; therefore, it’s raining in Seattle.” It could very well be raining in Seattle right now. But the reason it’s raining has nothing to do with the existence of Seattle; it has to do with the weather conditions Seattle finds itself in.

Don’t disregard the existence of this common logical fallacy. If a conclusion seems accurate, or even just intriguing, approach it with a sense of curiosity. Sure, the argument you’re given might be wrong, but under what conditions might it be right? And why is that?

Good logical thinking doesn’t just call out bad arguments, it also creates opportunities to discover more about the world.

Formal fallacies are logical fallacies involving an error in deductive reasoning. As a refresher, deductive reasoning is the use of existing information (premises) to create new information (conclusions).

For example, here is a simple deductive argument:
  • A bird has wings, feathers, and claws.
  • A cardinal has wings, feathers, and claws.
  • A cardinal is a bird.

Formal fallacies include the following:

  • Affirming the consequent
  • Denying the antecedent
  • Affirming a disjunct
  • Denying a conjunct
  • Fallacy of the undistributed middle
  • Fallacy of four terms

You may have heard of the term non sequitur before. All formal fallacies are non sequiturs, because their conclusions do not follow from the claims associated with them.

Affirming the Consequent Logical Fallacy

Affirming the Consequent occurs when the premise and the conclusion are switched in a formal argument. Let’s say you argue the following:

  • If it is raining, then it is cloudy.
  • It is raining, thus
  • It is cloudy.

Affirming the Consequent means switching the order of the latter two bullets. So, the logical fallacy would be:

  • It is cloudy, thus
  • It is raining.

This isn’t true, because it can be cloudy without it raining. The “if” and “then” statements have been reversed, resulting in a conclusion that can’t be supported.
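One way to see why this form fails is to enumerate every possibility by brute force. Below is a minimal Python sketch (the proposition names and helper functions are ours, purely for illustration) that checks each combination of raining/cloudy truth values: the original argument holds in every case, while Affirming the Consequent has counterexamples.

```python
from itertools import product

# A propositional argument is valid only if its conclusion is true
# in every "world" where all of its premises are true.
def valid(argument):
    for raining, cloudy in product([True, False], repeat=2):
        world = {"raining": raining, "cloudy": cloudy}
        if all(premise(world) for premise in argument["premises"]):
            if not argument["conclusion"](world):
                return False  # found a counterexample
    return True

implies = lambda a, b: (not a) or b  # material implication: "if a then b"

valid_form = {  # If raining then cloudy; it is raining; so it is cloudy.
    "premises": [lambda w: implies(w["raining"], w["cloudy"]),
                 lambda w: w["raining"]],
    "conclusion": lambda w: w["cloudy"],
}

affirming_the_consequent = {  # If raining then cloudy; it is cloudy; "so it is raining."
    "premises": [lambda w: implies(w["raining"], w["cloudy"]),
                 lambda w: w["cloudy"]],
    "conclusion": lambda w: w["raining"],
}

print(valid(valid_form))                # True
print(valid(affirming_the_consequent))  # False: cloudy-but-dry worlds exist
```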

Denying the Antecedent Logical Fallacy

Denying the Antecedent occurs when you take a standard argument, put it in the negative, and then argue that the negative is just as true. In other words, you argue that the opposite of a true argument is just as true.

Let’s take the above example. This argument is correct:

  • If it is raining, then it is cloudy.
  • It is raining, thus
  • It is cloudy.

Denying the antecedent would look like this:

  • It is not raining, thus
  • It is not cloudy.

Obviously, it can be cloudy without it being rainy. The premise remains true, but assuming the inverse is also true leads to poor logic.

Affirming a Disjunct Logical Fallacy

Affirming a Disjunct arises out of the ambiguity of the word “or”. In formal logic, “or” can be inclusive (meaning “and/or”), or it can be exclusive (meaning “either/or”). Because of this ambiguity, an argument can seem as though it is creating a false binary, leading to a false conclusion.

  • To get rich, you must work hard or network well.
  • You got rich by networking well.
  • Therefore, you did not work hard.

It is possible that the conclusion is true. It is equally possible that you worked hard and networked well. Affirming a Disjunct occurs when that “or” is interpreted as “exclusive,” rather than “inclusive.”

Denying a Conjunct Logical Fallacy

Denying a Conjunct follows a similar formal fallacy as Affirming a Disjunct, in which the argument seems to be creating a binary that actually cannot be supported. In this logical fallacy, you argue that two things cannot both be true, then conclude that if one is false, the other must be true.

  • You cannot be both an American and a North Korean.
  • You are not North Korean, thus
  • You are American.

Obviously, you can be something other than American or North Korean. The premise of the argument is true, because you can’t have dual citizenship between the two countries, but the interpretation of that premise as a binary is false.

Fallacy of the Undistributed Middle

In the Fallacy of the Undistributed Middle, the middle term (the term that appears in both premises and is supposed to link them) doesn’t actually establish a relationship between the premise and the conclusion, leading to a faulty conclusion.

  • All birds have beaks.
  • An octopus has a beak, thus
  • An octopus is a bird.

The conclusion is obviously incorrect. Moreover, the middle term isn’t doing any work for the argument. It tells us that octopi and birds both have beaks, but it doesn’t tell us the relationship between birds and octopi, nor does the argument say that birds are the only organisms with beaks. The argument creates a connection that its premises don’t establish, leading to a conclusion it cannot support.

Fallacy of Four Terms

The Fallacy of Four Terms occurs when a standard syllogistic argument (the kind we’ve been referencing throughout this section) has four or more terms, rather than the requisite three.

By terms, we don’t mean bullet points; we mean the points of comparison in an argument. Here’s a proper syllogism:

  • All books (P) are written by humans (Q).
  • If this text is a book (P), then
  • It was written by a human (Q).

The letters in parentheses highlight that a syllogism follows this structure:

  • All Ps are Qs.
  • This is a P, thus
  • This is a Q.

There are variations to a proper syllogistic argument, but they always have 3 terms: a PQ term, a P term, and a Q term.

Here’s the Fallacy of Four Terms:

  • Manhattan’s streets (A) have a grid pattern (B).
  • A waffle (X) is made with a gridded iron (Y).
  • Manhattan is a waffle.

This fallacy rests on the assumption that a grid and a gridded iron are the same term, but they’re distinct. You thus arrive at an incorrect conclusion because you’ve made a random comparison between completely unalike ideas.

Here’s another example, to further illustrate the point, as well as to show how subtle this fallacy can be:

  • Nothing (A) beats a cold glass of water on a hot day (B).
  • A warm glass of water (X) is better than nothing (Y).
  • A warm glass of water is better than a cold glass of water.

“Nothing” is being used in multiple colloquial senses, which creates a genuinely confusing argument. It seems like there are only 3 terms, but “nothing” is employed in two different senses (“there is nothing superior” vs. “something is better than nothing”). As a result, you get a conclusion that, well, some people might agree with, but ultimately isn’t grounded in any meaningful logic.

The common logical fallacies above all rely in some way on faulty syllogistic reasoning, whether the fallacy is in the logic or in the premises themselves. The following fallacies are different errors in logic and reasoning, which can contribute to faulty arguments, but are not necessarily syllogistic.

Correlation Vs Causation

Correlation Vs Causation occurs when you assume that a correlation between two things implies that one causes the other. For example, you might notice that people who get spray tans often wear flip flops. If you assume that getting a spray tan causes you to wear flip flops, you’re committing this logical fallacy—there are plenty of reasons why this correlation might occur, but spray tans do not cause flip flop wearing.

Hypothesis Contrary to Fact

A Hypothesis Contrary to Fact is, simply, speculation without concrete evidence. It is an argument that, under different circumstances or historical events, the present or the future would certainly look a certain way. For example, “if you had gotten a job in finance, you’d be making loads of money right now.” This claim doesn’t take into account any number of factors: the state of the finance industry, your ability to perform finance-related work, etc.

“I’m Entitled to My Opinion”

This logical fallacy conflates opinion with fact. It is ultimately a kind of red herring. Let’s say I argue “it always rains when it’s sunny.” This is wrong; you call me out on this. I might reply saying “you can tell me I’m wrong, but I’m entitled to my opinion.” As a result, I’ve evaded the work of defending my argument or responding to yours, but the issue in question is not a matter of opinion.

Loaded Question

A Loaded Question inserts an unfounded claim into a question in an attempt to make the audience assume something untrue. I might ask you “Are you really going to eat strawberry ice cream when artificial strawberry flavoring gives you cancer?” I’ve stated a claim as though it were true, offering no justification and ultimately coercing you into believing something false.

Middle Ground

The Middle Ground fallacy assumes that the truth lies somewhere between two opposing sides. Let’s say two people are arguing about the color of Kirkjubøargarður, a farm in the Faroe Islands. One person argues it’s black; the other says it’s white. The person who says it’s white then argues “well, it must be somewhere in the middle. Let’s say it’s steel gray.” Yet the house is undeniably black.

This logical fallacy trades on the existence of the False Dilemma: because some arguments really do present false binaries, it’s tempting to assume every binary is false. But some things simply are black and white. Many politicians will use this argument to gain some concessions in their favor even when their position is ultimately and entirely wrong.

No True Scotsman

The No True Scotsman argument is an appeal to “purity,” in which a person argues that a true example of something doesn’t perform a certain behavior. See it played out in this conversation:

  • Person 1: All New Yorkers work multiple jobs.
  • Person 2: My uncle lives in New York, and only works one job.
  • Person 1: Only real New Yorkers work multiple jobs.

This logical fallacy creates an arbitrary purity test, and often makes unfair arguments about a certain identity. You can imagine how this argument can be wielded much more perniciously: “only true Americans eat meat. Since you’re a vegan, you must be a Communist.”

Single Cause

The Single Cause fallacy assumes something occurs because of only one cause. A topical example of this is inflation in the year 2023. Some people argue inflation is because of supply chain issues; others argue it’s because of poor trade policy; others argue it’s because of corporate greed; others argue it’s because of rising wages and low unemployment. In truth, all of these are causes of inflation, as well as other causes not mentioned here.

Slothful Induction

Slothful Induction can also be called an Appeal to Coincidence. Instead of acknowledging the likely relationship between two things, you argue that something keeps happening because of coincidence. “Sure, I keep drinking while driving, but all of my DUIs are because people keep slowing their cars in front of me.” It is an abnegation of accountability.

Texas Sharpshooter

The Texas Sharpshooter fallacy occurs when you cherry-pick the data that fits your conclusion and ignore the rest. It is the process of shooting a gun at a wall and then painting a bullseye around the bullet hole. As a result, you exclude the information that actually negates or challenges your argument.

For example, you might argue “I got into Harvard because I studied hard, did athletics and extracurriculars, and wrote a good essay.” What you failed to mention is the $5,000,000 donation your dad gave to the school.

Or, “Brian and Sally were made for each other: they both like ice cream, Russian novels, knitting, long walks on the beach, and they both dislike hypocrisy.” Perhaps you didn’t know this: Brian is also gay.

Write Without Logical Fallacies at Writers.com

Good arguments rarely happen in a vacuum: they develop out of a process of feedback, debate, and collaboration. If you’re writing for periodicals, news outlets, or any other form of CNF, our upcoming creative nonfiction classes might be for you.


Sean Glatch


8 Hypothesis Testing Examples in Real Life

Hypothesis testing refers to the systematic and scientific method of examining whether the hypothesis set by the researcher is valid or not. Hypothesis testing verifies that the findings of an experiment are valid and that the particular results did not happen by chance. If the particular results did happen by chance, the experiment cannot be repeated and its findings won’t be reliable. For example, suppose you conduct a study that finds that a particular drug is responsible for blood pressure problems in diabetic patients, but when the experiment is repeated it does not give the same results; no one would trust that experiment’s findings. Hence, hypothesis testing is a crucial step for verifying experimental findings. The main criterion of hypothesis testing is to check whether the null hypothesis is rejected or retained. The null hypothesis assumes that there does not exist any relationship between the variables under investigation, while the alternate hypothesis asserts an association between the variables under investigation. If the null hypothesis is rejected, the alternative hypothesis (research hypothesis) is accepted; if the null hypothesis is retained, the alternative hypothesis is rejected automatically. In this article, we’ll learn about hypothesis testing and various real-life examples of hypothesis testing.

Understanding Hypothesis Testing

The hypothesis testing broadly involves the following steps,

  • Step 1: Formulate the research hypothesis and the null hypothesis of the experiment.
  • Step 2: Set the characteristics of the comparison distribution.
  • Step 3: Set the criterion for decision making, i.e., the cut-off sample score at which the null hypothesis will be rejected or retained.
  • Step 4: Determine the outcome of the sample on the comparison distribution.
  • Step 5: Decide whether to reject or retain the null hypothesis.

Let us understand these steps through the following example,

Suppose a researcher wants to examine whether the memorizing power of students improves after consuming caffeine. To examine this, he conducts an experiment involving two groups, say group A (the experimental group) and group B (the control group). Group A consumes coffee before a memory test, while group B consumes water before the same test. The distribution of memory test scores is known to be normal, with a mean of 19 and a standard deviation of 4. On the basis of the scores, the researcher can test whether there is an association between the two variables, i.e., memory power and caffeine, but he cannot predict any particular direction of the effect, i.e., which of the experimental group and the control group will perform better on the memory test. Hence, a two-tailed test at the 5 per cent significance level will help to draw the conclusion. Following is the stepwise hypothesis testing of this example,

Step 1: Formulating Null hypothesis and alternate hypothesis 

There exist two sample populations, i.e., group A and group B.

Group A: People who consumed coffee before the experiment

Group B: People who consumed water before the experiment.

On the basis of this, the null hypothesis and the alternative hypothesis would be as follows.

Alternate Hypothesis: Group A will perform differently from Group B, i.e., there exists an association between the two variables.

Null Hypothesis: There will not be any difference between the performance of both groups, i.e., Group A and Group B both will perform similarly.

Step 2: Characteristics of the comparison distribution 

The characteristics of the comparison distribution in this example are given below,

Population Mean = 19

Standard Deviation = 4, normally distributed.

Step 3: Cut off score

In this test the direction of the effect is not stated, i.e., it is a two-tailed test. In the case of a two-tailed test, the cut-off sample scores are +1.96 and −1.96 at the 5 per cent significance level.

Step 4: Outcome of Sample Score

The sample score is then converted into a Z value. Using the appropriate conversion, this value turns out to be 2.

Step 5: Decision Making

The Z score of 2 is greater than the cut-off Z value of +1.96, hence the result is significant, i.e., the null hypothesis is rejected: there exists an association between memory power and the consumption of coffee before the test.
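These five steps are mechanical enough to mirror in a few lines of code. Here is a minimal Python sketch of the caffeine example; the observed sample score of 27 is an assumption on our part, chosen because, with a mean of 19 and a standard deviation of 4, it yields exactly the Z value of 2 described in Step 4.

```python
mu, sigma = 19, 4      # Step 2: comparison distribution
cutoff = 1.96          # Step 3: two-tailed cut-off at the 5 per cent level

sample_score = 27      # hypothetical observed score (gives Z = 2)
z = (sample_score - mu) / sigma  # Step 4: convert to a Z value

# Step 5: reject the null hypothesis if the score lands in either tail.
if abs(z) > cutoff:
    print(f"Z = {z:.2f}: significant; reject the null hypothesis.")
else:
    print(f"Z = {z:.2f}: not significant; retain the null hypothesis.")
```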


Hypothesis Testing Real Life Examples

Following are some real-life examples of hypothesis testing.

1. To Check the Manufacturing Processes

Hypothesis testing finds application in manufacturing processes, for example in determining whether the implementation of a new technique or process in the manufacturing plant caused anomalies in the quality of the product. Let us suppose that manufacturing plant X wants to verify whether a particular method has resulted in an increase in the number of defective products per quarter, a number that currently stands at around 200. To verify this, the researcher needs to compare the mean number of defective products produced before and after the implementation of the new method.

Following is the representation of the Hypothesis testing of this example,

Null Hypothesis (H0): The average number of defective products produced is the same before and after the implementation of the new manufacturing method, i.e., μafter = μbefore

Alternative Hypothesis (Ha): The average number of defective products produced is different before and after the implementation of the new manufacturing method, i.e., μafter ≠ μbefore

If the resultant p-value of the hypothesis test comes out lower than the significance level, i.e., α = .05, then the null hypothesis is rejected, and it can be concluded that the change in the production method led to the rise in the number of defective products produced per quarter.
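In practice, a before-and-after comparison of means like this is often run as a two-sample t-test. The sketch below uses SciPy, with invented daily defect counts standing in for the plant’s real records; only the test itself, not the data, is prescribed here.

```python
from scipy import stats

# Hypothetical defect counts per day, before and after the new method.
before = [195, 202, 198, 205, 199, 201, 197, 203]
after = [210, 215, 208, 212, 209, 214, 211, 213]

# Two-sided test of H0: the mean defect counts are equal.
t_stat, p_value = stats.ttest_ind(before, after)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0; defect rates changed.")
else:
    print(f"p = {p_value:.4f}: retain H0.")
```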

2. To Plan the Marketing Strategies

Many businesses use hypothesis testing to determine the impact of newly implemented marketing techniques, campaigns, or other tactics on the sales of a product. For example, the marketing department of a company assumes that if it spends more on digital advertisements, sales will rise. To verify this assumption, the marketing department may raise the digital advertisement budget for a particular period, analyse the collected data at the end of that period, and perform hypothesis testing to verify the assumption. Here,

Null Hypothesis (H0): The average sales are the same before and after the rise in the digital advertisement budget, i.e., μafter = μbefore

Alternative Hypothesis (Ha): The average sales increase after the rise in the digital advertisement budget, i.e., μafter > μbefore

If the p-value is smaller than the significance level (say .05), then the null hypothesis can be rejected by the marketing department, and they can conclude that the rise in the digital advertisement budget results in a rise in the sales of the product.

3. In Clinical Trials

Many pharmacists and doctors use hypothesis testing for clinical trials. The impact of new clinical methods, medicines, or procedures on the condition of patients is analysed through hypothesis testing. For example, a pharmacist believes that a new medicine is causing a rise in blood pressure in diabetic patients. To test this assumption, the researcher measures the blood pressure of the sample patients (the patients under investigation) before and after the intake of the new medicine over a particular period, say one month. The following hypothesis testing procedure is then followed,

Null Hypothesis (H0): The average blood pressure is the same before and after the consumption of the medicine, i.e., μafter = μbefore

Alternative Hypothesis (Ha): The average blood pressure after the consumption of the medicine is greater than the average blood pressure before the consumption of the medicine, i.e., μafter > μbefore

If the p-value of the hypothesis test is less than the significance level (say .05), the null hypothesis is rejected, i.e., it can be concluded that the new drug is responsible for the rise in the blood pressure of diabetic patients.
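Because each patient is measured twice, this design calls for a paired test. Here is a minimal sketch with SciPy, using invented blood pressure readings; note that the one-sided `alternative` argument requires SciPy 1.6 or later.

```python
from scipy import stats

# Hypothetical systolic readings for the same eight patients,
# before and after a month on the new medicine (paired data).
before = [128, 131, 125, 130, 127, 133, 129, 126]
after = [134, 138, 130, 137, 132, 140, 135, 131]

# One-sided paired test of Ha: mean(after) > mean(before).
t_stat, p_value = stats.ttest_rel(after, before, alternative="greater")

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the medicine appears to raise blood pressure.")
```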

4. In Testing Effectiveness of Essential Oils

Essential oils are gaining popularity nowadays due to their various claimed benefits. Various essential oils such as ylang-ylang, lavender, and chamomile are claimed to reduce anxiety. You might like to test the true healing powers of these essential oils. Suppose you assume that lavender essential oil has the ability to reduce stress and anxiety. To check this assumption you may conduct hypothesis testing with the hypotheses stated as follows,

Null Hypothesis (H0): Lavender essential oil has no effect on reducing anxiety.

Alternative Hypothesis (Ha): Lavender oil helps in reducing anxiety.

In this experiment, group A, i.e., the experimental group, is provided with the lavender oil, while group B, i.e., the control group, is provided with a placebo. The data are then collected using the appropriate statistical tools, and the stress levels of both groups, i.e., the experimental and the control group, are analysed. Suppose that, after the calculation, the p-value turns out to be smaller than the significance level of 0.05. The null hypothesis is then rejected, and it can be concluded that lavender oil helps in reducing stress.

5. In Testing Fertilizer’s Impact on Plants

Nowadays, hypothesis testing is also used to examine the impact of pesticides, fertilizers, and other chemicals on the growth of plants or animals. Let us suppose a researcher wants to check the assumption that a particular fertilizer makes a plant grow faster in a month than its usual growth of 10 inches. To verify this assumption, he consistently gives that fertilizer to the plant for nearly a month. Following is the mathematical procedure of the hypothesis testing in this case,

Null Hypothesis (H0): The fertilizer does not have any influence on the growth of the plant, i.e., μ = 10 inches

Alternative Hypothesis (Ha): The fertilizer results in faster growth of the plant, i.e., μ > 10 inches

Now, if the p-value of the hypothesis test comes out smaller than the level of significance, say .05, then the null hypothesis is rejected, and you can conclude that the particular fertilizer is responsible for the faster plant growth.
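This setup maps onto a one-sample t-test: one group of fertilized plants compared against the known usual growth of 10 inches. A minimal sketch, with the monthly growth measurements invented for illustration (the `alternative` argument requires SciPy 1.6 or later):

```python
from scipy import stats

# Hypothetical monthly growth (in inches) of fertilized plants.
growth = [11.5, 12.1, 10.8, 12.7, 11.9, 12.3, 11.2, 12.0]

# One-sided test of Ha: mean growth > 10 inches.
t_stat, p_value = stats.ttest_1samp(growth, popmean=10, alternative="greater")

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the fertilizer appears to speed up growth.")
```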

6. In Testing the Effectiveness of Vitamin E

Suppose a researcher assumes that Vitamin E helps hair grow faster. He conducts an experiment in which the experimental group is provided with vitamin E for three months while the control group is provided with a placebo. The results are then analysed after the three months. To verify his assumption he states the hypotheses as follows,

Null Hypothesis (H0): There is no association between Vitamin E and the hair growth of the sample group, i.e., μafter = μbefore

Alternative Hypothesis (Ha): The group of people who consumed Vitamin E shows faster hair growth than their average hair growth before the consumption of Vitamin E, provided other variables remain constant. Here, μafter > μbefore.

Suppose that, after performing the statistical analysis, the p-value turns out to be smaller than the significance level of 0.05. The researcher can then reject the null hypothesis and conclude that the consumption of vitamin E results in faster hair growth.

7. In Testing the Teaching Strategy

Suppose two teachers, say Mr X and Mr Y, argue about the best teaching strategy. Mr X says that children will perform better in the annual exams if they are given weekly tests, while Mr Y argues that weekly tests would not impact the children’s performance in the annual exams and are a waste of time. Now, to verify who is right, we may conduct hypothesis testing. The researcher may formulate the hypotheses as follows,

Null Hypothesis (H0): There is no association between the weekly tests and the children’s performance in the annual exams, i.e., the average marks scored by the children are the same whether or not they were given weekly tests. (μafter = μbefore)

Alternative Hypothesis (Ha): The children will perform better in the annual exams when they take weekly tests, rather than just the annual exams, i.e., μafter > μbefore.

Now, if the p-value of the hypothesis test comes out smaller than the level of significance, say .05, then the null hypothesis is rejected, and the researcher can conclude that the children will perform better in the annual exams if the weekly examination system is implemented.

8. In Verifying the Assumption related to Intelligence

Suppose a principal states that the students studying in her school have an above-average IQ level. To support her statement, the researcher may take a sample of around 50 random students from that school. Let’s say the average IQ score of those children is around 110, while the population mean IQ score is 100 with a standard deviation of 15. The hypothesis testing is given as follows,

Null Hypothesis (H0): The students’ average IQ is no different from the population mean, i.e., μ = 100.

Alternative Hypothesis (Ha): The average IQ score of the students is above average, i.e., μ > 100

It’s a one-tailed test, as we are aiming for the ‘greater than’ assumption. Let us suppose the alpha level, or significance level, in this case is 5 per cent, i.e., 0.05, which corresponds to a critical Z score of 1.645. The calculated Z score is given by the formula (110 − 100) / (15/√50) ≈ 4.71. Now, the final step is to compare the calculated Z score with the critical Z score. Here, the calculated Z score is far greater than the critical Z score; hence, the null hypothesis is rejected, i.e., the average IQ score of the children belonging to that school is above average.
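The arithmetic of this one-tailed Z-test is easy to verify directly, using only the numbers given above:

```python
import math

n, sample_mean = 50, 110
mu, sigma = 100, 15        # population mean and standard deviation
z_critical = 1.645         # one-tailed cut-off at alpha = 0.05

z = (sample_mean - mu) / (sigma / math.sqrt(n))
print(f"Z = {z:.2f}")      # approximately 4.71

if z > z_critical:
    print("Reject H0: the school's average IQ is above the population mean.")
```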



Hypothesis Testing in the Real World

Jeff Miller

University of Otago, Dunedin, New Zealand

Critics of null hypothesis significance testing suggest that (a) its basic logic is invalid and (b) it addresses a question that is of no interest. In contrast to (a), I argue that the underlying logic of hypothesis testing is actually extremely straightforward and compelling. To substantiate that, I present examples showing that hypothesis testing logic is routinely used in everyday life. These same examples also refute (b) by showing circumstances in which the logic of hypothesis testing addresses a question of prime interest. Null hypothesis significance testing may sometimes be misunderstood or misapplied, but these problems should be addressed by improved education.

One important goal of statistical analysis is to find real patterns in data. This is difficult when the data are subject to random noise, because random noise can produce illusory patterns “just by chance.” Given the difficulty of separating real patterns from coincidental ones within noisy data, it is important for researchers to use all of the appropriate tools and models to make inferences from their data (e.g., Gigerenzer & Marewski, 2015 ).

Null hypothesis significance testing (NHST) is one of the most commonly used types of statistical analysis, but it has been criticized severely (e.g., Kline, 2004 ; Ziliak & McCloskey, 2008 ). According to Cohen (1994) , for example, “NHST has not only failed to support the advance of psychology as a science but also has seriously impeded it” (p. 997). There have been calls for it to be supplemented with other types of analysis (e.g., Wilkinson & the Task Force on Statistical Inference, 1999 ), and at least one journal has banned its use outright ( Trafimow & Marks, 2015 ).

This note reviews the basic logic of NHST and responds to some criticisms of it. I argue that the basic logic is straightforward and compelling—so much so that it is commonly used in everyday reasoning. It is suitable for answering certain types of research questions, and of course it can be supplemented with additional techniques to address other questions. Criticisms of NHST’s logic either distort it or implicitly deny the possibility of ever finding patterns in data. The major problem with NHST is that some aspects of the method can be misunderstood, but the solution to that problem is to improve education—not to adopt new methods that address a different set of questions but are incapable of answering the question addressed by NHST. I conclude that it would be a mistake to throw out NHST.

The Common Sense Logic of NHST

Critics of NHST assert that it uses arcane, twisted, and ultimately flawed probabilistic logic (e.g., Cohen, 1994 ; Hubbard & Lindsay, 2008 ). To the contrary, the heart of NHST is a simple, intuitive, and familiar “common sense” logic that most people routinely use when they are trying to decide whether something they observe might have happened by coincidence (a.k.a., “randomly,” “by accident,” or “by chance”).

For example, suppose that you and five colleagues attend a departmental picnic. An hour after eating, three of you start to feel queasy. It comes out in discussion that those feeling queasy ate potato salad and that those not feeling queasy did not eat the potato salad. What could be more natural than to conclude that there was something wrong with the potato salad?

It is important to realize that this nonstatistical example fully embodies the underlying logic of hypothesis testing. First, a pattern is observed. In this example, the pattern is that people who ate potato salad felt queasy. Second, it is acknowledged that the pattern might have arisen just by chance. In this example, for instance, exactly those people who ate the potato salad—and no one else—might coincidentally all have been coming down with the flu, and the flu might have caused their queasiness. Third, there is reason to believe that the observed coincidence—while possible—would be very unlikely. In the example, real-world experience suggests that coming down with flu is a rare event, so it would be quite unlikely for several people to do so at just the same time, and it would of course be even more unlikely that those were exactly the people who ate the potato salad. Fourth, it is concluded that the observed pattern did not arise by chance. In this example, the “not by chance” conclusion suggests that there was something wrong with the potato salad.

To further clarify the analogy between NHST and the potato salad example, consider how a standard coin-flipping “statistical” data analysis situation could be described in parallel terms. Suppose a coin is flipped 50 times and it comes up heads 48 of them (pattern). This quite strong pattern could happen by coincidence, but elementary probability theory says that such a coincidence would be extremely unlikely. It therefore seems reasonable to conclude that the pattern was not just a coincidence; instead, the coin appears to be biased to come up heads. This is exactly the same line of reasoning used in the potato salad example: The observed pattern would be very unlikely to occur by chance, so it is reasonable to conclude that it arose for some other reason.
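The coin example can even be made exact. Under the null hypothesis of a fair coin, the chance of a pattern at least as strong as 48 heads in 50 flips follows directly from the binomial distribution, and a couple of lines of Python (illustrative, not from the original article) show just how small it is:

```python
from math import comb

# P(48 or more heads in 50 flips of a fair coin)
n = 50
p_tail = sum(comb(n, k) for k in range(48, n + 1)) / 2 ** n
print(f"{p_tail:.1e}")  # about 1.1e-12: far too unlikely to call a coincidence
```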

There are many other nonstatistical examples of the reasoning used in NHST. For instance, if you see an unusually large number of cars parked on the street where you live (pattern), you will probably conclude that something special is going on nearby. It is logically possible for all those cars to be there at the same time just by coincidence, but you know from your experience that this would be unlikely, so you reject the “just by chance” idea. Analogously, if two statistics students make an identical series of calculation errors on a homework problem (pattern), their instructor might well conclude that they had not done the homework independently. Although it is logically possible that the two students made the same errors by chance, that would seem so unlikely—at least for some types of errors—that the instructor would reject that explanation. These and many similar examples show that people often use the logic of hypothesis testing in the real world; essentially, they do so every time they conclude “that could not just be a coincidence.” Statistical hypothesis testing differs only in that laws of probability—rather than every-day experiences with various coincidences—are used to assess the likelihood that an observed pattern would occur by chance.

Criticisms of NHST’s Logic

According to Berkson (1942) , “There is no logical warrant for considering an event known to occur in a given hypothesis, even if infrequently, as disproving the hypothesis” (p. 326). In terms of our examples, Berkson is saying that it is illogical to consider 3/6 queasy friends as proving that there was something wrong with the potato salad, because it could be just a coincidence. Taken to its logical extreme, his statement implies that observing 48/50 heads should also not be regarded as disproving the hypothesis of a fair coin, because that too could happen by chance. To be sure, Berkson is mathematically correct that the suggested conclusions about the quality of the potato salad and the fairness of the coin do not follow from the observed patterns with the same 100% certainty that implications have in propositional logic (e.g., modus ponens ). On the other hand, it is unrealistic to demand that level of certainty before reaching conclusions from noisy data, because such data will almost never support any interesting conclusions with 100% certainty. In practice, 48/50 heads seems like ample evidence to conclude—with no further assumptions—that a coin must be biased, and the “logical” objection that this could have happened by chance seems rather intransigent. Given that logical certainty is unattainable due to the presence of noise in the data, one can only consider the probabilities of various correct and incorrect decisions (e.g., Type I error rates, power) under various hypothesized conditions, which is exactly what NHST does.

Another long-standing objection to NHST is that its conclusions depend on the probabilities of events that did not actually occur (e.g., Cox, 1958 ; Wagenmakers, 2007 ). For example, in deciding whether 3/6 people feeling queasy was too much of a coincidence, people might be influenced by how often they had seen 4/6, 5/6, or 6/6 people in a group feel queasy by chance, even though only 3/6 had actually been observed. It is difficult to see much practical force to this objection, however. In trying to decide whether a particular pattern is too strong to be observed by chance, it seems quite relevant to consider all of the different patterns that might be observed by chance—especially the patterns that are even stronger. Proponents of this objection generally support it with artificial probability distributions in which stronger patterns are at least as likely to occur by chance as weaker patterns, but such distributions rarely if ever arise in actual research scenarios.

Critics of NHST sometimes claim that its logical form is parallel to that of the argument shown in Table 1 (e.g., Cohen, 1994 ; Pollard & Richardson, 1987 ). There is obviously something wrong with the argument in this table, and NHST must be flawed if it uses the same logic. This criticism is unfounded, however, because the logic illustrated in Table 1 is not parallel to that of NHST.

Table 1. A Misleading Caricature of Null Hypothesis Significance Testing’s Logical Form.

  • 1. If a person is an American, then he is probably not a member of Congress.
  • 2. This person is a member of Congress.
  • 3. Therefore, he is probably not an American.

The argument given in Table 1 suggests that a null hypothesis—in this case, that a person is an American—should be rejected whenever the observed results are unlikely under that hypothesis. NHST requires more than that, however. Implicitly, in order to reject a null hypothesis, NHST requires that the observed results must be more likely under an alternative hypothesis than under the null. In the potato salad example, for instance, rejecting the coincidence explanation requires not only that the observed pattern is unlikely by chance when the potato salad is good, but also that this pattern is more likely when the potato salad is bad (i.e., more likely when the null hypothesis is false than when it is true).

Figure 1 shows how this additional requirement arises within NHST using the Z test as an example. The null hypothesis predicts that the outcome is a draw from the depicted standard normal distribution, and Region A (i.e., the cross-hatched tails) of this distribution represent the Z values for which the null would be rejected at p < .05. Critically, Region B in the middle of the distribution also depicts an area of 5%. If NHST really only required that the rejection region had a probability of 5% under the null hypothesis, as implied by the argument in Table 1 , then rejecting the null for an observation in Region B would be just as appropriate as rejecting it for an observation in Region A. This is not all that NHST requires, however, and in fact outcomes in Region B would not be considered evidence against the null hypothesis. The null hypothesis is rejected for outcomes in A but not for those in B, because of the requirement that an outcome in the rejection region must have higher probability when the null hypothesis is false than when it is true. Region B of Figure 1 clearly does not satisfy this additional requirement, because this area will have a higher probability when the null hypothesis is true than when it is not.

Figure 1. A standard normal (Z) distribution of observed scores under the null hypothesis.

Note. Region A: The two cross-hatched areas indicate the standard two-tailed rejection region—that is, the 5% of the distribution most discrepant from the mean. Region B: The dark shaded area in the middle of the distribution also represents an area of 5%. Under NHST, only observations in the tails are taken as evidence that the null hypothesis should be rejected, even though the probability of an observation in Region B is just as low (i.e., 5%).
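This extra requirement can be checked numerically. The sketch below (our illustration, not part of the original article) computes the probability of an observation landing in Region A and in Region B, first when the null hypothesis is true (mean 0) and then under an alternative (mean shifted to 2): Region A becomes far more likely under the alternative, while Region B becomes less likely.

```python
from scipy.stats import norm

# Region A: the two 2.5% tails. Region B: a central band that also
# covers 5% of the null (standard normal) distribution.
a_lo, a_hi = -1.96, 1.96
b_lo, b_hi = norm.ppf(0.475), norm.ppf(0.525)

def region_probs(true_mean):
    """P(Region A) and P(Region B) when observations have the given mean."""
    p_a = norm.cdf(a_lo, loc=true_mean) + norm.sf(a_hi, loc=true_mean)
    p_b = norm.cdf(b_hi, loc=true_mean) - norm.cdf(b_lo, loc=true_mean)
    return p_a, p_b

print(region_probs(0))  # null true: both regions have probability 0.05
print(region_probs(2))  # alternative: Region A ~0.52, Region B ~0.007
```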

Likewise, the example of Table 1 clearly does not satisfy the additional requirement that the observed results should be more likely under some alternative to the null hypothesis. The probability that a person is a member of Congress is lower—not higher—if the person is not an American. In fact, the logic of NHST actually requires a first premise of the form:

  • 1′. If a person is an American, then he is probably not a member of Congress; on the other hand, if he is not an American, then he is more likely to be a member of Congress.

Premise 1′ is obviously false, so the conclusion (3) is obviously not supported within NHST.

Finally, critics of NHST often complain that its conclusions can depend on the sampling methods used to collect the data as well as on the data themselves (e.g., Wagenmakers, 2007 ). This dependence arises because NHST’s assessment of “how likely is such an extreme pattern by chance” depends on the exact probabilities of various outcomes, and these in turn depend on the details of how the sampling was carried out. This is thought to be a problem for NHST, because—according to critics—the conclusion from a data set should depend only on what the data are, but not on the sampling plan used to collect them. This argument begs the question, however. Of course, the assessment of what will happen “by chance” can only be done within a well-defined set of possible outcomes. These outcomes are necessarily determined by the sampling plan, so the plan must influence the assessment of the various patterns’ probabilities. Viewed in this manner, it seems quite reasonable that any conclusion about the presence of an unusual pattern would depend on the sampling plan as well as on the observations themselves.

Ancillary Criticisms of NHST

Additional criticisms have been directed at aspects of NHST other than its logic. For example, it is sometimes claimed that NHST does not address the question of main interest. Critics often assert that researchers “really” want to know the probability that a pattern is coincidental given the data (e.g., Berger & Berry, 1988 ; Cohen, 1994 ; Kline, 2004 ). Within the current examples, then, the claim is that people really want to know “the probability that these 3/6 picnic-goers feel sick by coincidence” or “the probability that the coin is biased towards heads.”

It is clear that NHST does not provide such probabilities, but it is not so clear that everyone always wants them. In many cases, people simply want to decide whether the pure chance explanation is tenable; for example, it is difficult to imagine a picnic-goer asking for a precise probability that the potato salad was bad. In any case, to obtain such probabilities requires knowing all of the other possible explanations, plus their prior probabilities (e.g., Efron, 2013 ). In many situations where NHST is used, the complete set of other possible explanations and their probabilities are simply unknown. In these situations, no statistical method can compute the probability that researchers supposedly want, and it seems unfair to criticize NHST for failing to provide something that cannot be determined with any other technique either.

Surely the most frequent and justified criticisms of NHST revolve around the idea that researchers do not completely understand it (e.g., Batanero, 2000 ; Wainer & Robinson, 2003 ). A number of findings suggest that one aspect of NHST in particular—the so-called “ p value”—is widely misunderstood (e.g., Gelman, 2013 ; Haller & Kraus, 2002 ; Hubbard & Lindsay, 2008 ; Kline, 2004 ). Explicitly or implicitly, such findings are taken as evidence that NHST should be abandoned because it is too difficult to use properly (e.g., Cohen, 1994 ).

Unfortunately, similar data suggest that many other concepts in probability and statistics are also poorly understood (e.g., Campbell, 1974 ). If we abandon all methods based on misunderstood statistical concepts, then almost all statistically based methods will have to go, including some apparently quite practical and important ones (e.g., diagnostic testing in medicine; Gigerenzer, Gaissmaier, Kurz-Milcke, Schwartz, & Woloshin, 2008 ). Within this difficult context, there seems to be no reason to abandon NHST selectively, because there is “no evidence that NHST is misused any more often than any other procedure” ( Wainer & Robinson, 2003 , p. 22). Moreover, if one accepted the argument that all poorly understood methods should be abandoned, then some useful but poorly understood nonstatistical methods would presumably also have to go (e.g., propositional logic; Rips & Marcus, 1977 ; Wason, 1968 ). Surely it would be a mistake to abandon a valuable tool or technique simply because considerable training and effort are required to use it correctly.

The current discussion of frequent false positives and low replicability in research areas using NHST (e.g., Francis, 2012; Nosek, Spies, & Motyl, 2012; Simmons, Nelson, & Simonsohn, 2011) also suggests that there are misunderstandings and misuse of this technique. Specifically, there is evidence that researchers capitalize on flexibility in the selection of their data and in the application of their analyses (i.e., “p-hacking”) in order to obtain statistically significant and therefore publishable results (e.g., Bakker, Van Dijk, & Wicherts, 2012; John, Loewenstein, & Prelec, 2012; Tsilidis et al., 2013). Such practices are a misuse of NHST, and they inflate false positive rates, especially in combination with existing biases toward publication of surprising new findings and with the relative scarcity of such findings within well-studied areas (e.g., Ferguson & Heene, 2012; Ioannidis, 2005). The false positive problem is not specific to NHST, however; it would arise analogously within any statistical framework. Whatever statistical methods are used to detect new patterns in noisy data, the rate of reporting imaginary patterns (i.e., false positives) will be inflated by flexibility in the selection of the data, flexibility in the application of the methods, and flexibility in the choice of what findings are reported.

To the extent that misunderstanding of NHST presents a problem, better education of researchers seems like the best path toward a solution (e.g., Holland, 2007 ; Kalinowski, Fidler, & Cumming, 2008 ; Leek & Peng, 2015 ). Although the underlying logic of NHST has considerable common sense appeal—as shown by the real-world examples described earlier—this logic is often obscured when the methods are taught to beginners. This is partly because of the specialized and unintuitive terminology that has been developed for NHST (e.g., “null hypothesis,” “Type I error,” “Type II error,” “power”). Another problem is that introductions to NHST nearly always focus primarily on the mathematical formulas used to compute the probabilities of observing various patterns by chance (i.e., “distributions under the null hypothesis”). Students can easily be so confused about the workings of these formulas that they fail to appreciate the simplicity of the underlying logic.

Conclusions

NHST is a useful heuristic for detecting nonrandom patterns, and abandoning it would be counterproductive. Its underlying logic—both in scientific research and in everyday life—is that chance can be rejected as an explanation of observed patterns that would rarely occur by coincidence. It is true that the conclusion of a biased coin does not follow with 100% certainty, and it will be wrong when an unlikely pattern really does occur by chance. Researchers should certainly keep this possibility in mind and resist the tendency to believe that every pattern documented statistically—whether by NHST or any other technique—necessarily reflects the true state of the world. As a practical strategy for detecting non-random patterns in a noisy world, however, it seems quite a reasonable heuristic to conclude tentatively that something other than chance is responsible for systematic observed patterns.

While NHST is extremely useful for deciding whether patterns might have arisen by chance, it is, of course, not the only useful statistical technique. In fact, when NHST is employed, “the answer to the significance test is rarely the only thing we should consider” ( Cox, 1958 , p. 367), so it is not sufficient for researchers to try to answer all research questions entirely within the NHST framework. For example, NHST is not appropriate for evaluating how strongly a data set supports a null hypothesis (e.g., Grant, 1962 ). For that purpose, it is better to use confidence intervals or Bayesian techniques (e.g., Cumming & Fidler, 2009 ; Rouder, Speckman, Sun, Morey, & Iverson, 2009 ; Wainer & Robinson, 2003 ; Wetzels, Raaijmakers, Jakab, & Wagenmakers, 2009 ). Fortunately, there is no fundamental limit on the number of statistical tools that researchers can use. Researchers should always use the set of tools most suitable for the questions under consideration. In many cases, that set will include NHST.
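To illustrate the last point, a Bayes factor can express graded support for a null hypothesis, which a nonsignificant p-value cannot. The sketch below uses the simplest (binomial) setting to show the general idea; it is not the default Bayesian t-test procedure of the works cited above, and the counts are invented.

```python
# Bayes factor BF01 comparing H0: theta = 0.5 against H1: theta ~ Uniform(0, 1)
# for k successes in n trials. Values above 1 favor the null.
from math import comb

import numpy as np
from scipy.special import betaln

def bf01_binomial(k: int, n: int) -> float:
    log_m0 = np.log(comb(n, k)) + n * np.log(0.5)           # P(data | H0)
    log_m1 = np.log(comb(n, k)) + betaln(k + 1, n - k + 1)  # P(data | H1)
    return float(np.exp(log_m0 - log_m1))

# 36 heads in 70 flips: BF01 is about 6.6, i.e., positive evidence FOR the
# fair-coin hypothesis -- a statement a nonsignificant p-value cannot make.
print(f"BF01 = {bf01_binomial(36, 70):.1f}")
```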

Acknowledgments

I thank Scott Brown, Patricia Haden, Wolf Schwarz, and two anonymous reviewers for constructive comments on earlier versions of the article.

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Preparation of this article was supported by a research award from the Alexander von Humboldt Foundation.

  • Bakker M., Van Dijk A., Wicherts J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7, 543-554. doi:10.1177/1745691612459060
  • Batanero C. (2000). Controversies around the role of statistical tests in experimental research. Mathematical Thinking and Learning, 2, 75-97. doi:10.1207/S15327833MTL0202_4
  • Berger J. O., Berry D. A. (1988). Statistical analysis and the illusion of objectivity. American Scientist, 76, 159-165.
  • Berkson J. (1942). Tests of significance considered as evidence. Journal of the American Statistical Association, 37, 325-335. doi:10.1080/01621459.1942.10501760
  • Campbell S. K. (1974). Flaws and fallacies in statistical thinking. Englewood Cliffs, NJ: Prentice-Hall.
  • Cohen J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003. doi:10.1037//0003-066X.49.12.997
  • Cox D. R. (1958). Some problems connected with statistical inference. Annals of Mathematical Statistics, 29, 357-372. doi:10.1214/aoms/1177706618
  • Cumming G., Fidler F. (2009). Confidence intervals: Better answers to better questions. Zeitschrift für Psychologie, 217, 15-26. doi:10.1027/0044-3409.217.1.15
  • Efron B. (2013). Bayes’ theorem in the 21st century. Science, 340, 1177-1178. doi:10.1126/science.1236536
  • Ferguson C. J., Heene M. (2012). A vast graveyard of undead theories: Publication bias and psychological science’s aversion to the null. Perspectives on Psychological Science, 7, 555-561. doi:10.1177/1745691612459059
  • Francis G. (2012). Publication bias and the failure of replication in experimental psychology. Psychonomic Bulletin & Review, 19, 975-991. doi:10.3758/s13423-012-0322-y
  • Gelman A. (2013). Commentary: P values and statistical practice. Epidemiology, 24, 69-72.
  • Gigerenzer G., Gaissmaier W., Kurz-Milcke E., Schwartz L. M., Woloshin S. (2008). Helping doctors and patients make sense of health statistics. Psychological Science in the Public Interest, 8, 53-96. doi:10.1111/j.1539-6053.2008.00033.x
  • Gigerenzer G., Marewski J. N. (2015). Surrogate science: The idol of a universal method for scientific inference. Journal of Management, 41, 421-440. doi:10.1177/0149206314547522
  • Grant D. A. (1962). Testing the null hypothesis and the strategy and tactics of investigating theoretical models. Psychological Review, 69, 54-61. doi:10.1037/h0038813
  • Haller H., Kraus S. (2002). Misinterpretations of significance: A problem students share with their teachers? Methods of Psychological Research, 7, 1-20.
  • Holland B. K. (2007). A classroom demonstration of hypothesis testing. Teaching Statistics, 29, 71-73. doi:10.1111/j.1467-9639.2007.00269.x
  • Hubbard R., Lindsay R. M. (2008). Why p values are not a useful measure of evidence in statistical significance testing. Theory & Psychology, 18, 69-88. doi:10.1177/0959354307086923
  • Ioannidis J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. doi:10.1371/journal.pmed.0020124
  • John L. K., Loewenstein G., Prelec D. (2012). Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science, 23, 524-532. doi:10.1177/0956797611430953
  • Kalinowski P., Fidler F., Cumming G. (2008). Overcoming the inverse probability fallacy: A comparison of two teaching interventions. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 4, 152-158.
  • Kline R. B. (2004). Beyond significance testing: Reforming data analysis methods in behavioral research. Washington, DC: American Psychological Association.
  • Leek J. T., Peng R. D. (2015). P values are just the tip of the iceberg. Nature, 520, 612.
  • Nosek B. A., Spies J. R., Motyl M. (2012). Scientific utopia II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7, 615-631.
  • Pollard P., Richardson J. T. E. (1987). On the probability of making Type I errors. Psychological Bulletin, 102, 159-163. doi:10.1037/0033-2909.102.1.159
  • Rips L. J., Marcus S. L. (1977). Suppositions and the analysis of conditional sentences. In Just M. A., Carpenter P. A. (Eds.), Cognitive processes in comprehension (pp. 185-220). Hillsdale, NJ: Lawrence Erlbaum.
  • Rouder J. N., Speckman P. L., Sun D., Morey R. D., Iverson G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16, 225-237. doi:10.3758/PBR.16.2.225
  • Simmons J. P., Nelson L. D., Simonsohn U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366. doi:10.1177/0956797611417632
  • Trafimow D., Marks M. (2015). Editorial. Basic and Applied Social Psychology, 37(1), 1-2. doi:10.1080/01973533.2015.1012991
  • Tsilidis K. K., Panagiotou O. A., Sena E. S., Aretouli E., Evangelou E., Howells D. W., … Ioannidis J. P. A. (2013). Evaluation of excess significance bias in animal studies of neurological diseases. PLoS Biology, 11(7), e1001609. doi:10.1371/journal.pbio.1001609
  • Wagenmakers E. J. (2007). A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14, 779-804.
  • Wainer H., Robinson D. H. (2003). Shaping up the practice of null hypothesis significance testing. Educational Researcher, 32, 22-30. doi:10.3102/0013189X032007022
  • Wason P. C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20, 273-281. doi:10.1080/14640746808400161
  • Wetzels R., Raaijmakers J. G. W., Jakab E., Wagenmakers E. J. (2009). How to quantify support for and against the null hypothesis: A flexible WinBUGS implementation of a default Bayesian t test. Psychonomic Bulletin & Review, 16, 752-760. doi:10.3758/PBR.16.4.752
  • Wilkinson L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604. doi:10.1037/0003-066X.54.8.594
  • Ziliak S. T., McCloskey D. (2008). The cult of statistical significance: How the standard error costs us jobs, justice, and lives. Ann Arbor: University of Michigan Press.

Fallacy – Hypothesis Contrary to Fact

Since the first humans walked this earth, we have been trying to understand why everything is the way it is. Why do things work the way they do? What happens if you put this with that? Humans have always been determined to learn and understand everything they can. As a result, the knowledge we have acquired, and the patterns we have observed, have come to be treated as facts. These facts describe the definite ways that objects, persons, and events function. After studying them, one can predict results and learn to comprehend everyday occurrences, such as the choices people make and their outcomes.

But what if the alternative had been selected? Can one identify what the outcome of that selection would have been? In other words, can we determine the “ifs” and “maybes” through the knowledge we have acquired? Hypothesis contrary to fact, the fallacy, questions claims made with certainty about what would have happened if a past event or condition had been different from what it actually was. Fallacies are errors in logical reasoning, or cases in which an argument’s language is wrong or vague.

However, many of these errors aren’t detected in an argument until it is analyzed, because fallacious arguments often appear to “look good”. There are numerous types of fallacies: informal fallacies, formal fallacies, fallacies of ambiguity, fallacies of presumption, and fallacies of relevance. There are also several ways to think about these fallacies: speculatively, analytically/critically, and normatively. Under hypothesis contrary to fact, hypothetical situations are treated as facts even though they are poorly supported claims.

An example of this fallacy is “If my dad hadn’t won the lottery, my parents would have gotten divorced” or “If I had bitten into that jawbreaker, my tooth would have fallen out”. In both of these examples, an alternate outcome is being determined through supposition, drawing on prior knowledge and experience. For example, perhaps the person’s parents were fighting over financial issues and the lottery winnings resolved their disputes. Likewise, perhaps the person has bitten into a jawbreaker before and their tooth fell out.

These hypothetical conclusions are drawn from prior experience, but is that enough to assume what did not happen? Does one need to experience everything in order to understand? Regardless of how much knowledge one has, it is impossible to determine what might have happened; knowledge and experience can only help assess the possible outcomes and their likelihood of occurring. Here one can easily detect the fallacy’s inconsistency: there will never be enough evidence to see what may have happened, because there is no way of knowing.

A soccer player might argue that if he had kicked the ball he would have scored, because he has never missed in his life. But if he never shoots, how can one really know whether he would have made it? Maybe there was a strong breeze, maybe he would have tripped before kicking the ball, or maybe someone would have interfered with his shot. Yes, the soccer player probably wouldn’t have missed the goal, but there is no way of actually verifying the statement, which is why it is a speculative fallacy. Speculative fallacies are guesses or hypotheses; they deal with implications and consequences, such as “what are the consequences of thinking in a certain way”.

For example, hypothesis contrary to fact can often be misinterpreted, or a person may be “misjudged” for using it, which may lead to serious consequences. Max might say about his enemy, “If he had touched me, I would have killed him”. This could be misinterpreted in two ways: one, in a literal sense, leaving people concerned, or two, as evidence that he is an aggressive person. However, this is usually not the case. Like many other fallacies, hypothesis contrary to fact is often taken lightly and can be “innocent fun”.

One usually uses it when teasing someone else, saying “if you had one more cookie you would have exploded” or “if you had a few more drinks he would have been good looking”. People often use it because it is a different way of expressing what they are trying to convey. Sometimes it is used to exaggerate a situation, as in the cookie or drink examples, for a sense of humor. These statements are usually not taken literally; they become a form of self-expression for the person using them, asserting what would have happened if what did happen had not happened.

ORIGINAL RESEARCH article

Investigating conversational dynamics in triads: effects of noise, hearing impairment, and hearing aids.

Eline Borch Petersen

  • WS Audiology, Lynge, Denmark

Communication is an important part of everyday life and requires a rapid and coordinated interplay between interlocutors to ensure a successful conversation. Here, we investigate whether increased communication difficulty caused by additional background noise, hearing impairment, and inadequate hearing-aid (HA) processing affects the dynamics of a group conversation between one hearing-impaired (HI) and two normal-hearing (NH) interlocutors. Free conversations were recorded from 25 triads communicating at low (50 dBC SPL) or high (75 dBC SPL) levels of canteen noise. In conversations at low noise levels, the HI interlocutor was either unaided or aided. In conversations at high noise levels, the HI interlocutor experienced either omnidirectional or directional sound processing. Results showed that HI interlocutors generally spoke more and initiated their turn faster, but with more variability, than the NH interlocutors. Increasing the noise level resulted in generally higher speech levels, but more so for the NH than for the HI interlocutors. Higher background noise also affected the HI interlocutors’ ability to speak in longer turns. When the HI interlocutors were unaided at low noise levels, both HI and NH interlocutors spoke louder, while receiving directional sound processing at high levels of noise only reduced the speech level of the HI interlocutor. In conclusion, noise, hearing impairment, and hearing-aid processing mainly affected speech levels, while the remaining measures of conversational dynamics (FTO median, FTO IQR, turn duration, and speaking time) were unaffected. Hence, although the interlocutors experienced large changes in communication difficulty, the conversational dynamics of the free triadic conversations remained relatively stable.

1 Introduction

Living with hearing loss affects not only the ability to hear but also the way a person interacts with others in communication situations. Communication is a core activity in our everyday lives and relies on the ability to switch rapidly and continuously between listening and talking. Conversations are complex interactions consisting of linguistic, auditory, and visual components that can be adapted to overcome communication challenges. For example, it has been known for more than 100 years that humans adapt their speech when communicating in noise (Lombard speech) by increasing the intensity, pitch, and duration of words ( Lombard, 1911 ; Junqua, 1996 ). Similarly, it has been observed that when communicating with an older hearing-impaired (HI) interlocutor, younger normal-hearing (NH) interlocutors speak louder in quiet and noisy situations, reduce their articulation rate, and alter the spectral content of their speech ( Hazan and Tuomainen, 2019 ; Sørensen et al., 2019 ; Beechey et al., 2020b ; Petersen et al., 2022 ). These changes suggest that the NH interlocutors adapt their speech to alleviate the communication difficulty experienced by their HI communication partner. Furthermore, it has been observed that when providing the HI interlocutors with hearing aids (HAs), the HI interlocutors reduce the duration of their utterances (inter-pausal units), speak faster (higher articulation rate), and decrease their speech level ( Beechey et al., 2020a ; Petersen et al., 2022 ). Additionally, when the HI interlocutor is aided, the NH interlocutor also decreases their speech level despite not directly experiencing any alteration in the communication difficulty ( Beechey et al., 2020a ; Petersen et al., 2022 ).

Another aspect of a conversation is the interactive turn-taking between interlocutors and the timing of the turn-starts, denoted as floor-transfer offsets (FTOs). Despite taking at least 600 ms to physically produce a verbal response ( Indefrey and Levelt, 2004 ; Magyari et al., 2014 ), turns are generally initiated after a short pause of around 200 ms ( Stivers et al., 2009 ), indicating that turn-ends must be predicted to initiate a fast response ( Bögels et al., 2015 ; Gisladottir et al., 2015 ; Levinson and Torreira, 2015 ; Barthel et al., 2016 ; Corps et al., 2018 ). Impaired hearing causes the talker to initiate their turns in a less well-timed manner, evident from a larger variability in their FTOs compared to NH interlocutors ( Sørensen, 2021 ; Petersen et al., 2022 ). When receiving HA amplification, the FTOs of HI interlocutors become less variable, indicating that some of their communication difficulty is relieved. This allows them to provide more well-timed verbal responses ( Petersen et al., 2022 ).

From the studies referred to above, results showed that the added communication difficulty experienced by HI interlocutors affected not only the dynamics of their own speech but also those of their NH conversational partner ( Hazan and Tuomainen, 2019 ; Beechey et al., 2020b ; Sørensen, 2021 ; Petersen et al., 2022 ). However, in the above studies, conversations were initiated using communication tasks (Diapix or puzzle), which required active participation and interactive exchange of information between two interlocutors ( Baker and Hazan, 2011 ; Beechey et al., 2018 ). In the current study, we investigated whether the conversational dynamics are affected in a similar manner if the conversation is less task-bound and occurs between three interlocutors. Conversations between two NH interlocutors and one HI interlocutor were conducted at two different noise levels. At the low level of noise, the HI interlocutor was either unaided or aided with an HA, while at the high level of noise, they received omnidirectional or directional sound processing. This unbalanced study design was chosen because previous studies suggested that HA amplification affected the conversational dynamics, specifically the speech levels, when communicating in quiet ( Petersen et al., 2022 ). At high levels of background noise, HA amplification ensures audibility, but not intelligibility, for the HI interlocutor. Hence, the effect of reducing background noise through directional sound processing is investigated at high levels of background noise.

In the current study, increased communication difficulty, caused by hearing impairment, higher noise levels, or suboptimal HA signal processing, is expected to result in 1) longer and 2) more variable FTO values (median and interquartile range), 3) longer turn durations, 4) higher speech levels, and 5) increased speaking time for the HI interlocutor specifically. By asking the interlocutors to subjectively evaluate their active participation in the conversation and their perceived use of listening/talking strategies, it was investigated whether any alterations in the conversational dynamics were perceived or deliberately used by the interlocutors.

Focusing on the conversational dynamics of a group, rather than a two-person conversation, posed some methodological considerations on how to determine the communication states of the conversation and how to account for pauses made within a talker’s own turn. Some of these considerations and post-processing steps applied before extracting the five features of the conversational dynamics listed above are described in the section Quantifying Turn-taking in Group Conversations.

2 Methods

2.1 Participants

Conversations were recorded from 25 groups of three interlocutors fluent in Danish: one older hearing-impaired (HI) interlocutor, one older normal-hearing (ONH) interlocutor, and one younger normal-hearing (YNH) interlocutor. The HI participants were recruited from an internal database of HI test subjects, while all the NH interlocutors were recruited internally among employees at WS Audiology, Lynge, Denmark. All normal-hearing interlocutors passed a hearing screening at 20 dB HL at 500, 1000, 2000, and 4000 Hz, except for two ONH participants with a 30 dB HL threshold in one ear at 4000 Hz. All YNH were below 35 years of age (mean = 27.2, sd = 5.2, 14 female participants). All older participants (ONH and HI) were required to be older than 50 years of age, but the HI participants (mean = 75.8, sd = 6.5, 9 female participants) were significantly older than the ONH (mean = 54.8, sd = 3.7, 15 female participants, t(48) = −14.1, p < 0.001). The YNH participants were significantly younger than the ONH and HI participants (p < 0.001).

The HI participants had mild-to-moderate symmetrical hearing loss ( Figure 1A , pure-tone average across 500, 1000, 2000, and 4000 Hz of 48.9 dB HL, sd = 6.1 dB HL) and were experienced hearing-aid users (>1 year of hearing-aid usage).


Figure 1 . Participants’ audiogram and experimental setup. (A) Individual pure-tone hearing thresholds for all HI participants averaged across ears (thin gray lines), participants (bold purple line), and the standard deviation (shaded purple area). (B) An experimental setup with the three participants seated equally spaced around a table with a diameter of 1.2 m. A loudspeaker is placed 2.2 m directly in front of each participant. (C) Attenuation of white noise when applying the directional sound processing experienced by the HI interlocutor at a high level of background noise (75 dBC, dir condition). The attenuation, in dB, indicated by concentric circles for different frequencies (line types and shading of gray), is shown for different azimuth angles. Note that the attenuations depicted for the negative azimuth angles were recorded from the left HA, while the attenuations at positive angles were recorded from the right HA.

The triads were grouped at random, ensuring that the YNH and ONH did not work closely together at WS Audiology. Across the 25 triads, 19 had interlocutors of mixed genders, while 2 had only male and 4 only female participants.

All participants gave their written informed consent, and the study was approved by the regional ethics committee (Board of Copenhagen, Denmark, reference H-20068621).

2.2 Experimental setup

The experiment was conducted in a meeting room at WS Audiology, with the participants seated at a round table ( Figure 1B ). The positions of the YNH, ONH, and HI at the table were balanced across triads. Three loudspeakers were placed 2.2 m directly in front of each participant ( Figure 1B ). The background noise presented by the loudspeakers was spatially recorded noise from the canteen of WS Audiology, which was presented at either 50 or 75 dBC SPL.

The conversations were individually recorded by each interlocutor using a directional headset microphone (DPA 4088, Allerød, Denmark). All sounds were presented and recorded via customized MATLAB scripts (R2018a) at a sampling frequency of 44.1 kHz. As the headsets were not easily calibrated, individual 5-s speech signals were recorded from each headset as well as from a calibrated omnidirectional reference microphone (B-5, Behringer, Willich, Germany) placed at the center of the table. By combining the attenuation of the speech signal between the headset and the reference microphone with a calibrated reference signal recorded from the reference microphone, it was possible to express the conversational speech levels recorded from the headset in dB SPL.
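The calibration bookkeeping described above can be sketched as follows. The article publishes no code, so the signals, the 94-dB calibrator level, and all names here are assumptions; only the level arithmetic mirrors the described procedure.

```python
# Sketch: express headset recordings in dB SPL via a calibrated reference mic.
import numpy as np

rng = np.random.default_rng(0)
fs = 44100  # sampling rate used in the study

def rms_db(x: np.ndarray) -> float:
    """RMS level in dB relative to digital full scale."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))))

# 1) A calibrator tone of known level (94 dB SPL assumed here) fixes the
#    reference microphone's mapping from digital level to dB SPL.
cal_tone = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
ref_offset_db = 94.0 - rms_db(cal_tone)

# 2) The same 5-s speech probe recorded on headset and reference microphone
#    gives the level difference between the two channels.
probe = rng.standard_normal(5 * fs)                    # placeholder probe
probe_headset, probe_reference = 0.2 * probe, 0.05 * probe
transfer_db = rms_db(probe_headset) - rms_db(probe_reference)

# 3) Any utterance recorded on the headset can then be expressed in dB SPL.
def headset_spl(utterance: np.ndarray) -> float:
    return rms_db(utterance) - transfer_db + ref_offset_db

print(f"{headset_spl(0.2 * rng.standard_normal(fs)):.1f} dB SPL")
```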

2.3 Conversational task

To ensure a natural and free conversation between the three previously unacquainted participants in each triad, two conversational types were used: consensus questions (e.g., Can you come up with a three-course dinner consisting only of dishes none of you likes?) and picture cards with three keywords (e.g., a picture of a crowd at a festival with the keywords festivals, music, and summer). These two ways of initiating a conversation have previously been tested and found to spark natural and balanced conversations between interlocutors ( Petersen et al., 2022 ). The test leader showed the picture or read the consensus question aloud before each 5-min conversation. The pictures and questions could be used to guide the upcoming conversation, but the triads were instructed that deviation from the topic/question was allowed. The participants were not instructed to behave or speak in a particular manner but to act as naturally as possible.

2.4 Hearing-aid fitting

The HI participants were equipped with Signia Pure 312 7X receiver-in-the-canal HAs with M-receivers and closed-sleeve instant domes fitted with the NAL-NL2 prescription rule ( Keidser et al., 2011 ). No further fine-tuning, feedback tests, or real-ear measurements were performed. The frequency-based noise reduction system was disabled in the fitting software (Connexx version 9.6.6.488, WS Audiology), and two programs were made: One with omnidirectional and another with directional sound processing.

During conversations with a low level of background noise (50 dBC), the HI interlocutor was either not wearing HAs (denoted the unaided condition) or wearing HAs with omnidirectional sound processing (denoted the aided condition). Before the unaided conversations, the HAs were removed by the test leader in a discreet fashion to avoid alerting the NH conversational partners. Furthermore, the HI participants had been instructed not to tell the NH conversational partners that they were unaided.

During conversations with a high level of background noise (75 dBC), the HAs worn by the HI interlocutors were either providing omnidirectional sound processing (denoted omni, settings identical to the aided condition) or directional sound processing (denoted dir) designed to suppress noise sources based on their spatial position. The directional attenuation pattern was fixed using the Signia App (provided by Sivantos Pte. Ltd), controlled by the test leader, in which the pattern was set to the narrowest beam possible, providing 10–15 dB attenuation of white noise presented from directions beyond +/−45 degrees azimuth ( Figure 1C ).

2.5 Experimental procedure

Before the actual experiment, the triads did two 5-min training conversations. The training served to introduce the two conversational types, to acquaint the participants with each other, and to introduce the background noise used during the experiment. During the first training round, participants were given a consensus question to discuss in quiet; during the second training round, they discussed a picture card in canteen noise presented at 60 dBC, a noise level between the low and high levels of noise used during the actual experiment.

A total of 12 experimental conversations were recorded from each triad in the four different experimental conditions (unaided and aided in low noise and omnidirectional and directional processing in high noise), each repeated three times. The order of the four conditions was balanced within three blocks, while the conversational types were balanced across conditions within each triad. The participants had a mandatory break after six experimental conversations.

After each conversation, all participants provided individual subjective ratings of their active participation and the perceived usage of listening/talking strategies. All participants answered the question ‘If the conversation had taken place in quiet, I would have participated: Put a cross on the scale’, with the scale ranging from 0 (a lot less active) to 10 (a lot more active) and 5 indicating the same perceived activity level as if the conversation had been held in quiet. The formulation of the second question differed depending on hearing status: Both questions started with ‘In comparison to a conversation in quiet, to which degree do you feel the noise made you …’, with the ONH and YNH being asked ‘change the way you communicated, e.g., by changing the way you expressed yourself, used your voice, or body language?’, while the formulation to the HI was ‘use listening tactics, such as asking for repeats, asking to speak up, or turning your better ear to the speaker?’ For both questions, the scale ranged from 0 (no change) to 10 (a lot of change).

2.6 Statistical analysis

The effects of the experimental contrasts on the measures of conversational dynamics were investigated through Linear Mixed-Effects Models (LMERs) using the lme4 package for R ( Bates et al., 2015 ). The experimental design of the current study cannot be treated as a 2×2 design because the HA conditions differ between the noise conditions (low noise levels: unaided and aided/omni; high noise levels: omni/aided and dir). For this reason, each of the three experimental contrasts (background noise level, providing HA amplification at low noise levels, and providing directional processing at high noise levels) was tested in a separate LMER model.

All models included the fixed effects hearing status (HI, YNH, and ONH), experimental contrast (two conditions for each contrast, see details below), and their interaction, with a random intercept for triad and for person nested within triad, i.e., x ~ hearing + conditions + hearing:conditions + (1 | triad/person). When testing the effect of the experimental contrast background noise (low vs. high levels of noise), the two conditions included were aided and omni. When testing the effect of providing HA amplification during low levels of noise, the conditions were unaided and aided, and finally, the effect of the experimental contrast directional sound processing was investigated by comparing the conditions omni and dir during high levels of background noise. The predicted variable x in the statistical models was each of the five measures of conversational dynamics and the two subjective ratings of the conversation. The extraction of the five measures of conversational dynamics is described in detail in the following section.
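The article fits these models in R with lme4; the sketch below shows a roughly equivalent specification in Python's statsmodels, with synthetic stand-in data (all column names and values are placeholders, not the study's data).

```python
# LMER of the form x ~ hearing + condition + hearing:condition
# + (1 | triad/person): a random intercept per triad (groups=) plus a nested
# random intercept for each person within a triad (variance component).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = [dict(triad=t, person=f"{t}-{p}", hearing=h, condition=c,
             speech_level=60 + rng.normal(scale=3))
        for t in range(25)
        for p, h in enumerate(["HI", "ONH", "YNH"])
        for c in ["aided", "omni"]
        for _ in range(3)]  # three repetitions per condition
df = pd.DataFrame(rows)

fit = smf.mixedlm("speech_level ~ hearing * condition", data=df,
                  groups="triad", re_formula="1",
                  vc_formula={"person": "0 + C(person)"}).fit()
print(fit.summary())
```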

3 Quantifying turn-taking in group conversations

The focus of the following section is on the methodological considerations of how to perform voice activity detection, how to determine the communication states when three interlocutors, instead of two, are interacting, and how to deal with pauses made within one talker’s own turn. The final part of this section provides a detailed description of the features of the conversational dynamics used in the current study.

3.1 Voice activity detection of individual interlocutors

Quantifying the conversational dynamics requires knowing when each interlocutor is speaking, i.e., performing individual voice activity detection (VAD). VAD can be done automatically, either using simple methods based on short-term energy changes and thresholding or using more advanced neural-network implementations ( Sharma et al., 2022 ). Accurate VAD is important when computing the features characterizing conversational dynamics, in order to reliably identify the beginning and end of all utterances.

One major issue in the application of automatic VADs is crosstalk, i.e., speech from the conversational partners being audible in the recording of the targeted interlocutor. Due to the distance between talkers and the directionality of the headsets worn by the interlocutors, the amplitude level of the crosstalk is generally lower than that of speech from the targeted talker. However, natural speech has a large dynamic range. At low noise levels (50 dBC), the speech volume of single utterances ranged from 25.9 to 84.1 dB SPL (across all talkers); however, an average of 12.4% of all intervals without speech (background noise, crosstalk, and artifacts) exceeded the minimum speech level. At the high level of background noise (75 dBC), the speech volume ranged from 33.5 to 88.0 dB SPL, but a significantly lower percentage of the non-speech intervals exceeded the minimum speech level (7.8%, F(1,877) = 29.3, p < 0.001). When testing the performance of various energy-based VAD approaches on data from the current study, this ~10% overlap between targeted interlocutor speech and non-speech caused unreliable VAD detections, including false positive and false negative detections.

For the current study, no automatic algorithm was identified that could provide reliable VAD detection without erroneously labeling crosstalk as speech or vice versa. Hence, the VAD was performed manually based on the following rules: 1) All utterances should be labeled, including laughter but excluding breaths and sighs; 2) Pauses between utterances shorter than 180 ms should be marked as speech to avoid cutting off stop closures ( Heldner and Edlund, 2010 ); 3) Utterances shorter than 90 ms should not be marked, as they are not assumed to be speech ( Heldner and Edlund, 2010 ).
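Although the VAD itself was performed manually here, rules 2 and 3 amount to a simple interval-cleaning step. A sketch with illustrative (start, end) times in seconds:

```python
# Apply the two timing rules to voice-activity intervals: bridge pauses
# shorter than min_pause, then drop segments shorter than min_speech
# (both thresholds in seconds).
def apply_vad_rules(intervals, min_pause=0.180, min_speech=0.090):
    merged = []
    for start, end in sorted(intervals):
        if merged and start - merged[-1][1] < min_pause:
            merged[-1][1] = max(merged[-1][1], end)  # bridge the short pause
        else:
            merged.append([start, end])
    return [(s, e) for s, e in merged if e - s >= min_speech]

# The 0.05-s pause is bridged; the isolated 0.06-s blip is discarded.
print(apply_vad_rules([(0.0, 1.0), (1.05, 2.0), (5.0, 5.06)]))
# -> [(0.0, 2.0)]
```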

3.2 Determining the conversational states

From the binary output of the individual VADs (1 = interlocutor speech, 0 = not interlocutor speech), the conversational states, i.e., the organization of turns between interlocutors, must be determined before extracting the features of the conversational dynamics.

Before determining the conversational states, all instances of laughter were removed from the VAD output because laughing does not constitute a wish from the interlocutor to “take the floor” ( Heldner and Edlund, 2010 ). Across all interlocutors, between 0 and 16 instances of laughter were removed per conversation (on average 0.60 laughs/min). Note that since laughing often manifests as short bursts separated by unvoiced silence, consecutive bursts of laughter were grouped into one instance of laughing.

Following the procedure proposed by Heldner and Edlund for two-talker conversations ( Heldner and Edlund, 2010 ), it is possible to categorize conversations into the following states (see Figure 2A ): A break in a talker’s utterance without a change of turn is called a pause , while a turn-taking between talkers (a floor-transfer) can either happen after a gap or in an overlap between (overlapB) speech. Finally, an utterance can happen simultaneously with an ongoing turn creating an overlap within (overlapW) other interlocutors’ turns. This general procedure can also be applied to triadic conversations when only two of the three interlocutors are active. However, if all three interlocutors are active at the same time, the resulting multiple overlaps will cause one utterance to be assigned to multiple conversational states (see text below and Figure 2 for more detail). In the current study, we wish to determine the conversational states of the entire conversation, meaning that each utterance should only have a single conversational state. As detailed below, this requires adding a few exceptions to the procedure proposed by Heldner and Edlund.


Figure 2 . Illustration of conversational states of a three-talker conversation (T1–T3). (A) Most states occur between two of the three talkers and are identical to the states observed in two-talker conversations, i.e., the turn-taking happens in an overlap between (overlapB) T1 and T2, in a gap between T2 and T3, or speech (T3) can completely overlap within (overlapW) the turn of another talker (T1). Dotted vertical lines indicate which talker the floor is transferred to and the floor-transfer offset (turn-taking) times of gaps or overlapBs used in the analysis. (B) When three talkers consecutively take turns in overlap (overlapBs), it can create instances where one talker (T3) has an overlapB between both remaining talkers (T1, dotted gray area, and T2, gray area). An utterance (T1) can also overlapW speech of remaining talkers (T2 and T3). (C) An utterance (T3) can overlapB one talker (T1) but overlapW another (T2) at the same time. In the three examples of (B,C) , the final conversational state of an utterance is determined by which talker initiated their utterance first (see details in the text).

Figure 2 illustrates the three examples where overlapping utterances fall into two conversational states. In Figure 2B , Talker 2 and Talker 3 (T2 and T3) start an utterance that overlaps T1 (overlapB). As T3 initiates the utterance later than T2, an additional overlapB between T2 and T3 occurs (indicated with light gray in Figure 2B ). In this case, the turn should be transferred from T1 to T2 and then from T2 to T3. The resulting duration of the overlapB included in the analysis of the floor-transfer offsets is indicated with dotted vertical lines in Figure 2 . Figure 2B also shows the example of an utterance made by T1 in overlapW with the speech of both T2 and T3. This utterance is classified as one overlapW and is always said to overlap within the speech of the talker who first initiated their turn, in this case, T3 ( Figure 2B ). It is also possible for an utterance to be classified as both an overlapB and overlapW, as illustrated for T3 in Figure 2C . As T2 initiates a turn first (in an overlapB T1), the utterance by T3 ends up overlapping within (overlapW) the turn of T2 and between (overlapB) the turn of T1. In this case, the utterance of T3 is classified as an overlapW of the speech of T2, as T2 initiated the speech before T3.

Across all the conversations, an average of 1.28 utterances/conversation was corrected for having two overlapBs (illustrated in Figure 2B , range 0–6/conversation). An average of 0.04 utterance/conversation was corrected for multiple overlapWs (range 0–1/conversation, Figure 2B ), while an average of 1.24 utterance/conversation was corrected for being overlapB and overlapW (range 0–7/conversation, Figure 2C ).

3.3 Correcting pauses within turns

Upon inspecting the conversational states and turn-taking resulting from the procedure described in the previous paragraph, it was evident that further processing was needed to capture the dynamics of the conversations. Figure 3 illustrates a typical exchange observed in the triadic conversation: T1 is speaking but receives verbal feedback (denoted backchannels) from both conversational partners (T2 and T3) within natural pauses occurring within the turn of T1. When following the rules for determining the conversational states (see previous section), the example provided in Figure 3 results in six turn-takings (solid orange line). However, considering that the definition of a backchannel is that it does not signal a wish from the talker to take the turn ( Yngve, 1970 ), the timing of backchannels does not have to follow the same social rules as the timing of a turn. Indeed, it has been observed that for utterances made in overlap (overlapW and overlapB), 73% of them are backchannels ( Levinson and Torreira, 2015 ).


Figure 3 . Example of post-processing of turn-taking from a conversation between three talkers (T1–T3). Individual VADs from an excerpt of a conversation (transcription on top) are indicated with fully colored blocks. Based on these, there are six resulting turn-takings (full orange line) between the interlocutors. After post-processing the VADs by bridging pauses within a talker’s own speech shorter than 1 s (dotted blue area), the number of turn-takings is reduced to one, as indicated by the dotted orange line.

To get a better estimate of the true number of turns and their timing in the triadic conversations, post-processing of the VAD output was performed to connect utterances constituting a single turn. To this end, any pauses within a talker’s speech shorter than 1 s were bridged such that the pauses were considered speech. This was done under the assumption that if a talker pauses for less than 1 s, the intention was not to end but to continue the turn. In the example provided in Figure 3 , the bridging of pauses reduces the number of turn-takings from six to one.
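This bridging is the same interval-merging operation as in the hypothetical apply_vad_rules sketch above, only with a 1-s pause threshold:

```python
# Bridging within-talker pauses shorter than 1 s: the three utterances of a
# single talker collapse into one continuous turn (illustrative values).
utterances = [(0.0, 2.1), (2.5, 4.0), (4.8, 7.3)]
turns = apply_vad_rules(utterances, min_pause=1.0, min_speech=0.0)
print(turns)  # -> [(0.0, 7.3)]
```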

An average of 7.4 pauses/min were bridged per interlocutor. The conversational states were then determined from the post-processed VAD output. As expected, bridging the pauses increased the number of utterances overlapping the ongoing turn (overlapW) by an average of 0.60 more overlapW per minute of conversation relative to the original VAD output.

3.4 Features of the conversational dynamics

The dynamics of a conversation can be described by different measures extracted from the individual utterances and conversational states. In the current study, a total of five measures were extracted:

From the individual utterances, the 1) speech levels, defined as the RMS of all utterances, were extracted and scaled using a calibration recording to get the level in dB SPL ( Petersen et al., 2022 ). To avoid including periods of pauses within turns, the speech level was extracted by concatenating utterances of the original VADs, i.e., prior to performing the post-processing described above. From the post-processed individual VADs, the 2) median turn duration was extracted, while 3) the percentage speaking time was extracted as the percentage of the 5-min recording where the interlocutor was talking. As such, the percentage speaking time across the three interlocutors of the conversation can exceed 100% due to overlapB and overlapW.

From the conversational states, the FTOs were extracted by combining gaps and overlapBs to generate the FTO distribution. From the FTO distribution, the 4) median and 5) variability, quantified by the interquartile range (IQR), were extracted as measures of the turn-taking timing.
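A sketch of how the two FTO measures could be computed from the floor transfers; the values are illustrative, not study data.

```python
# FTO = start of the incoming turn minus end of the outgoing turn, so gaps
# give positive values and overlaps-between (overlapB) give negative ones.
import numpy as np

def fto_stats(transfers):
    """transfers: (outgoing_turn_end, incoming_turn_start) pairs in seconds."""
    ftos = np.array([start - end for end, start in transfers])
    return np.median(ftos), np.percentile(ftos, 75) - np.percentile(ftos, 25)

transfers = [(1.8, 2.0), (4.1, 4.0), (6.5, 6.72), (9.0, 9.3)]
median, iqr = fto_stats(transfers)
print(f"FTO median = {median * 1000:.0f} ms, IQR = {iqr * 1000:.0f} ms")
```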

Furthermore, two subjective evaluations were made for each interlocutor after each conversation regarding 6) the level of activity (participation) and 7) the application of listening (for HI) or talking (YNH and ONH) strategies.

4 Results

The fixed effects of hearing status (HI, ONH, and YNH) and experimental contrasts (noise level, HA amplification, and HA directionality) were investigated for the five measures of conversational dynamics and the two subjective ratings made by each interlocutor after each conversation. All statistical results are presented in Table 1 . In the visualizations of results, the main effects of conditions and interactions between hearing status and conditions are shown using lines and asterisks, respectively, indicating the level of significance (***p < 0.001, **p < 0.01, *p < 0.05).


Table 1 . Statistically significant effects are highlighted in bold. The relevant post-hoc results are presented in italics below each significant fixed effect, indicating the contrasts, estimated differences, and p-values.

4.1 Floor-transfer offsets

Across all interlocutors and conditions, the FTO distribution peaked at 208 ms ( Figure 4A ), i.e., interlocutors tended to start their turn after a short gap. For each interlocutor and conversation, the FTO distribution was formed, and the median and the interquartile range (IQR) were extracted. For all experimental contrasts, a significant effect of hearing status was observed on the median FTO ( Table 1 ; Figure 4B ). The HI interlocutors initiated their turns on average 79 ms faster than the YNH and ONH interlocutors at high noise levels. At low levels of noise, the HI interlocutors initiated their turns faster than the YNH (124 ms), but the 79 ms difference between HI and ONH was not significant ( p  = 0.08).


Figure 4 . Floor-transfer offset (FTO) distribution and measures. (A) FTO distributions for HI (red), ONH (green), and YNH (blue) for all four conditions. Positive FTO values indicate turns initiated after a gap, while a negative value indicates an overlap between turns. The dotted vertical line indicates an FTO of 0 ms, i.e., neither gap nor overlap. (B) Median of the FTO distribution extracted for each interlocutor and conditions (averaged across repetitions). (C) Variability of the FTO distribution extracted as the interquartile range (IQR) for each interlocutor and conditions (averaged across repetitions). Here and in the following, the boxes indicate the 25th to 75th percentile, and the horizontal lines are the median. Whiskers extend the range of the data, and the dots highlight the outliers.

The FTO variability also showed an effect of hearing status ( Table 1 ; Figure 4C ), indicating that the spread of the HI interlocutors’ FTO distribution was ~130 ms larger than that of the YNH and ONH interlocutors at both high and low noise levels, although the difference between HI and ONH at low noise levels only approached significance (p = 0.054).

4.2 Turn duration and speaking time

The median overall turn duration was 3.2 s. The main effect of hearing status in the model testing the effect of increasing the noise level (p = 0.02, Table 1 ; Figure 5A ) suggests that HI interlocutors spoke in longer turns in general; however, this effect was driven by the significant interaction between noise and hearing (p = 0.03), revealing that the HI interlocutors only differed from the YNH and ONH at high levels of noise. This is confirmed by the main effect of hearing status in the conditions with high levels of noise, where the HI interlocutors spoke 1.3 s longer than the ONH (p < 0.01) and 0.9 s longer than the YNH interlocutors (p = 0.02).


Figure 5 . Median turn duration, speaking time, and speech levels. (A) Median turn duration resulting from the post-processed VADs across conditions and interlocutor hearing status. (B) Percentage of the speaking time of the total conversation duration of 5 min. (C) Speech levels in dB SPL. Background noise levels (50 and 75 dBC) of the different conditions are indicated with dotted gray. Asterisks colored according to hearing status indicate the statistically significant results of the post-hoc testing of the interaction effect between hearing status and experimental condition.

No effect of hearing status was found on the turn duration at low background noise, although the result was close to significance ( p  = 0.06). In lower noise, the main effect of HA amplification on turn duration ( p  = 0.03) indicated that all interlocutors’ turns were on average 357 ms longer when the HI interlocutors were aided relative to unaided.

The percentage speaking time was affected by hearing status for all experimental contrasts, indicating that HI interlocutors spoke around 5% more than the ONH and around 8% more than the YNH interlocutors across all conditions ( Table 1 ; Figure 5B ). It should be noted that at a low level of background noise, the difference between HI and ONH only approached significance ( p  = 0.06). No difference in the speaking time between the YNH and ONH interlocutors was seen, although there was a non-significant tendency for the ONH to speak more than the YNH in high levels of noise ( p  = 0.06).

4.3 Speech level

The conversations held in 50 dBC noise were conducted at an SNR of +11.1 dB on average, while in 75 dBC noise, the SNR was reduced to −6.3 dB when averaging across interlocutors, repetitions, and HA settings.

The speech levels were affected by hearing status at high noise levels but not at low levels of noise ( Table 1 ; Figure 5C ). When the noise level was increased (aided vs. omni), the HI interlocutors raised their speech level by around 1 dB, which is less than the ONH and YNH interlocutors. Consequently, at the high level of background noise, the ONH spoke 2.4 dB louder than the HI interlocutors (p < 0.01), while there was a non-significant trend for the YNH to speak 1.3 dB louder than the HI interlocutors in noise (p = 0.07). In terms of SNR, the HI interlocutors talked at −8.0 dB SNR on average at the highest level of noise, while the YNH spoke at −5.8 dB SNR and the ONH at −5.1 dB SNR.

Significant effects of altering the HA processing were observed. At a low background noise level, all interlocutors spoke 0.8 dB louder when the HI interlocutor was unaided (p < 0.001, Table 1 ). Similarly, all interlocutors spoke on average 0.58 dB louder when the HI interlocutors were listening to the unprocessed omnidirectional sound input (p < 0.01). However, a significant interaction effect between hearing status and directional sound processing revealed that while the NH interlocutors generally spoke louder than the HI interlocutors at high noise levels, providing directional sound processing caused the HI interlocutors to reduce their speech level further by 1.3 dB (p < 0.001), while the speech levels of the NH interlocutors were unaffected (both p’s > 0.09). As a result, the HI interlocutors reduced the SNR experienced by the NH interlocutors from −7.2 dB when listening to omnidirectional sound processing to −8.5 dB when receiving directional sound processing. The HI interlocutors experienced an SNR of −5.5 dB produced by the NH interlocutors in both conditions with high levels of background noise.

4.4 Subjective evaluations

After each conversation, the interlocutors were asked to subjectively rate their level of participation as well as their application of listening (HI interlocutors) or talking (YNH and ONH interlocutors) strategies.

The subjective ratings of the level of participation showed no significant effect of hearing status ( Table 1 , data not shown, all p’s > 0.2), while increasing the noise level reduced the participation ratings by 0.4 points (p < 0.001).

Similarly, the subjective ratings of the application of listening/talking strategies increased by 4.2 points when the noise level was increased ( Table 1 ; Figure 6 , p < 0.001). The HI interlocutors reported a larger increase in their usage of listening strategies than the NH interlocutors reported in their application of talking strategies when the noise level was increased ( Table 1 ; Figure 6 , both p’s < 0.05), but the significant interaction effect between hearing status and background noise indicated that hearing status only affected the ratings at the low level of background noise (both p’s < 0.001), whereas no differences were observed between HI and NH interlocutors at the high noise level (all p’s > 0.2).


Figure 6 . Subjective ratings. The HI interlocutors rated how much they applied listening strategies relative to a conversation held in quiet, while the NH interlocutors rated how much they applied communication strategies relative to a conversation held in quiet. Ratings were performed on a continuous 11-point visual analog scale.

In low-level noise, the HI interlocutors rated using 3.0 points more strategy on average than the NH listeners ( Table 1 ; Figure 6 , p < 0.001). Although a significant main effect of HA amplification (p < 0.001) indicated a general 0.6-point decrease in applied strategies when the HI interlocutor was aided, the significant interaction between hearing status and HA amplification (p < 0.001) revealed that this effect was driven by the HI interlocutors, who rated using 1.6 points less strategy when receiving HA amplification (p < 0.001), whereas the YNH and ONH reported no changes in their application of talking strategies (both p’s > 0.2).

5 Discussion

The current study investigated the effect of hearing status and three experimental contrasts (background noise level, HA amplification, and HA directionality) on the dynamics of a group conversation between one HI and two NH interlocutors. We observed that being hearing impaired affected all measures of conversational dynamics, whereas HA processing and noise level primarily affected speech levels. The following discussion will focus on why the experimental contrasts did not affect the conversational dynamics as hypothesized.

5.1 Effect of noise and HA processing on the conversational dynamics

Only a few effects were observed when altering the three experimental contrasts: increasing the background noise level, or altering the HI interlocutor's auditory input by providing either HA amplification or directional sound processing.

Beyond increases in speech levels (Figure 5C), the 25 dB increase in the level of the canteen noise did not have any effect on the conversational dynamics (no main effects of aided vs. omni, Table 1). The increased noise level caused interlocutors to speak on average 8.2 dB louder, resulting in a reduction of the communication SNR across interlocutors from +11.1 dB in 50 dBC background noise to −6.3 dB in 75 dBC background noise. This communication SNR is well in line with a previous study finding that dialogs between an HI and an NH interlocutor took place at −5 dB SNR in 77.3 dBA café noise (Beechey et al., 2020b). For comparison, the standardized Danish speech-in-noise tests find sentence intelligibility (without visual cues) to be below 50% for NH listeners at −5 dB SNR (Nielsen and Dau, 2009; Bo Nielsen et al., 2014). It should be noted that in realistic everyday listening situations, communication SNRs below +5 dB are rarely observed (Smeds et al., 2015). Nevertheless, the results of the current study suggest that communication at −6.3 dB SNR was possible for both HI and NH interlocutors. This is evident from the fact that the overall percentage of speaking time did not change when the noise level increased, and the subjective participation ratings only decreased by 0.4 points on the 11-point scale. The negligible effect of the increased noise level on the conversational dynamics could be due to access to visual cues, the spatial separation of the noise and the interlocutors, and/or the predictability of the conversational topic. That the interlocutors did not increase their vocal intensity further, to improve the SNR beyond −6.3 dB, could be explained by the additional physical strain on the vocal cords associated with speaking at higher levels, which reduces voice quality (Södersten et al., 2005). Hence, the SNR of a conversation is likely a balance between speaking loudly enough for communication to be successful while keeping the vocal effort low.
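To make the SNR bookkeeping above explicit, the following back-of-the-envelope sketch derives the implied average speech levels from the reported noise levels and communication SNRs. Only the 50/75 dBC noise levels and the +11.1/−6.3 dB SNRs come from the text; the per-condition speech levels are derived here for illustration, not measured values.

```python
# Communication SNR = speech level at the listener minus the background
# noise level (both in dB). Noise levels and SNRs are from the text.
def snr(speech_db: float, noise_db: float) -> float:
    return speech_db - noise_db

low_noise, high_noise = 50.0, 75.0   # dBC canteen noise levels
snr_low, snr_high = 11.1, -6.3       # reported communication SNRs (dB)

speech_low = low_noise + snr_low     # implied average speech level: 61.1 dB
speech_high = high_noise + snr_high  # implied average speech level: 68.7 dB

# ~7.6 dB on this two-point estimate, close to the reported 8.2 dB
# average increase (the gap reflects rounding and per-group averaging).
print(round(speech_high - speech_low, 1))
```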

As two of the three experimental contrasts (HA amplification and directional processing) were only experienced by the HI interlocutors, it is noteworthy that providing HA amplification affected turn duration and speech level for both the HI and NH interlocutors (Table 1). All interlocutors shortened their turns by 357 ms on average when the HI interlocutor was unaided (Figure 5A). This observation contradicts the hypothesis that communication difficulty would cause longer turns, as observed with the increased turn duration of the HI interlocutors. The effect of HA amplification on speech level is discussed in detail in Section 5.3 (Speech levels are sensitive to all experimental contrasts).

5.2 Effects of hearing impairment on conversational dynamics

HI interlocutors were hypothesized to initiate their turns more slowly and with more variability because their impairment makes them worse at predicting turn-ends than the NH interlocutors (Sørensen et al., 2019; Petersen et al., 2022). Although the HI interlocutors were found to initiate their turns with more variability than the NH interlocutors (higher FTO IQR, Figure 4C), they were also observed to do so faster, not slower, than the NH interlocutors (lower median FTO, Figure 4B). Previous studies have focused on turn-taking in dyadic conversations; however, the presence of an additional interlocutor adds an element of competition to the conversation. Indeed, the many minds problem describes how the complexity and uncertainty of the turn-taking system increase when more than two interlocutors are conversing (Cooney et al., 2020). To ensure getting the turn, interlocutors might be forced to initiate turns earlier, in overlaps between turns (overlapB). This could explain why the broader FTO distributions in the current study skewed toward negative values (Figure 4A) relative to the FTO distributions of the dyadic conversations in previous studies (Figure 2A of Petersen et al., 2022; Figure 4, left, in Sørensen et al., 2019). However, it should be noted that although the many minds problem can affect turn-taking, the post-processing of the VADs by bridging pauses also has a substantial effect on the turn-taking timing by occasionally causing utterances classified as overlaps within turns (overlapW) to be bridged with later utterances, resulting in larger negative FTO values (Section 3.2, Correcting pauses within turns). Despite the influence of this post-processing step, it is nevertheless interesting to note that the peak of the overall FTO distribution, at 208 ms, is comparable to that of previous studies (~230 ms in Petersen et al., 2022; ~275 ms in Sørensen et al., 2019), lending further support to the stability of the average turn being taken with a 200-ms gap (Levinson and Torreira, 2015).
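Since the FTO median, the FTO IQR, and the pause-bridging step carry much of the argument above, a minimal sketch of how these quantities can be computed may help. The turn times and the bridging threshold below are illustrative assumptions, not the study's data or parameters.

```python
# Sketch of floor-transfer offset (FTO) computation from turn intervals.
# All times (seconds) and the bridging threshold are illustrative.
import numpy as np

def bridge_pauses(utterances, max_pause=0.18):
    """Merge one talker's utterances separated by pauses <= max_pause s."""
    merged = [list(utterances[0])]
    for start, end in utterances[1:]:
        if start - merged[-1][1] <= max_pause:
            merged[-1][1] = end              # bridge the short pause
        else:
            merged.append([start, end])
    return merged

# E.g., two utterances 0.12 s apart become one longer turn:
print(bridge_pauses([(0.0, 1.5), (1.62, 2.1)]))  # [[0.0, 2.1]]

# Toy triadic turn sequence after bridging: (talker, onset, offset).
turns = [("A", 0.0, 2.1), ("B", 1.9, 4.0), ("C", 4.3, 6.0), ("A", 5.9, 8.2)]

# FTO: onset of the incoming turn minus offset of the preceding turn;
# negative values are overlaps (turns started before the floor was free).
ftos = [turns[i][1] - turns[i - 1][2] for i in range(1, len(turns))]
q25, q75 = np.percentile(ftos, [25, 75])
print(f"FTO median: {np.median(ftos):+.2f} s, FTO IQR: {q75 - q25:.2f} s")
```

In the same representation, turn duration is simply offset minus onset per turn, which is how the bridging step ends up lengthening turns while removing short within-turn gaps.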

When facing difficult communication situations, it has been reported that HI interlocutors can adopt a face-saving strategy of speaking more to avoid listening (Stephens and Zhao, 1996). The HI interlocutors in the current study generally took up around 5% more speaking time relative to the NH interlocutors. Increasing the noise level did not affect the speaking time of the HI interlocutors. This suggests that although the HI interlocutors took up more speaking time, they did not seem to deliberately use the strategy of dominating the conversation to avoid listening when the background noise level increased.

The HI interlocutors also produced longer turns (Figure 5A), although the effect seemed largest at higher levels of background noise, as the effect of hearing status was only near-significant at the low noise level (Table 1, p = 0.06). Overall, HI interlocutors spoke for around 1 s longer per turn, which must be considered a substantial increase relative to the overall average turn duration of 3.2 s. The HI interlocutors could have prolonged their turns by speaking more slowly, adding more pauses, or including more filler words such as "um" or "uh" in their speech. Non-informative filler words play an important role in coordinating turn-taking by helping an interlocutor take the floor quickly, or keep the floor, while planning an upcoming utterance (Clark and Fox Tree, 2002). Indeed, it might be speculated that if the longer turn durations observed for the HI interlocutors were caused by uttering filler words, these might also explain the faster turn-taking timing (lower FTO median) observed for the HI interlocutors.

It should be noted that the HI interlocutors were significantly older than the two NH groups, which could raise the question of whether the observed effect of hearing status was driven by the age difference between the groups. However, as the ONH interlocutors were also significantly older than the YNH participants, any substantial age effect would be expected to produce significant differences between the YNH and ONH groups, which was not observed.

5.3 Speech levels are sensitive to all experimental contrasts

Similar to a previous study (Petersen et al., 2022), speech level was the measure most affected by alterations in communication difficulty (Table 1 and Figure 5C). At low background noise, hearing status had no differential effect on the speech level; however, when the HI interlocutor did not receive HA amplification (unaided), all interlocutors spoke louder. The observed decrease in speech level of 0.8 dB upon providing HA amplification is comparable to the 1.1 dB decrease in speech level observed when providing amplification to HI interlocutors in dialogs held in quiet (Petersen et al., 2022).

When increasing the level of background noise, all interlocutors increased their speech level. However, the increase was around 2 dB larger for the NH interlocutors than for the HI interlocutors. Again, a similar effect was observed when adding 70 dB background noise to a dialog, in which NH interlocutors increased their speech level by 3.2 dB more than the HI interlocutors (Petersen et al., 2022). Hearing status was thus found to affect speech level differentially, suggesting that the NH interlocutors compensated, by speaking louder, for the added communication difficulty experienced by the HI interlocutors in noise. Interestingly, when providing directional sound processing, thereby reducing the noise level experienced by the HI interlocutors, the HI interlocutors reduced their speech level by 1.3 dB, further reducing the SNR experienced by the NH interlocutors. Hence, directional sound processing increased the communication difficulty experienced by the NH interlocutors.

The subjective evaluation of the use of talking strategies during the conversations, including speaking louder, revealed that although they spoke louder, the NH interlocutors did not report using additional talking strategies when the HI interlocutors were unaided (Table 1; Figure 6). However, the HI interlocutors reported applying more listening strategies when communicating unaided, despite the small increase in speech level made by all interlocutors relative to when the HI interlocutors were aided. At higher levels of background noise, interlocutors reported using more talking/listening strategies. However, it is interesting to note that the additional application of listening strategies in noise reported by the HI interlocutors seemed to match the increase in applied talking strategies reported by the NH interlocutors.

The Lombard effect describes the increase in speech level when talking in the presence of noise; however, the effect has rarely been investigated in interactive communication situations. As the findings of the current study highlight, the speech level of interlocutors depends not only on the noise level but also on the communication difficulty experienced by the (HI) conversational partner. Through requests to repeat utterances, statements of not being able to hear, miscommunications, or subtle alterations in facial expressions, gestures, or body posture/movements, such as leaning in or turning the better ear, an interlocutor can influence the conversational partners to increase their speech level. However, the current study also suggests that HI interlocutors alter their speech level according to their own perceived communication difficulty, as evident from the reduced speech level of the HI interlocutors when receiving directional sound processing in a high level of background noise. However, when the HI interlocutor received HA amplification at the lower noise level, the speech levels decreased not only for the HI interlocutor but for all interlocutors. During the experiment, the test leader physically removed the HAs as discreetly as possible (see section Hearing-aid fitting); however, the removed and missing HAs were visible to the NH interlocutors during the unaided condition. It is, therefore, likely that all interlocutors were aware that the HI interlocutor was going to experience communication difficulties in the unaided conditions, potentially causing interlocutors to alter their speech levels going into the conversation. This is contrary to the change in directional sound processing, which was made through an app, thereby not signaling to the interlocutors that the auditory experience of the HI interlocutor had been altered.
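As a rough quantification of the Lombard effect reported here, one can form a two-point estimate of the Lombard slope (dB of vocal increase per dB of noise increase) from the group-average numbers in the text. This is an illustrative calculation, not an analysis performed in the study.

```python
# Two-point Lombard-slope estimate from the group-average numbers.
noise_increase_db = 25.0   # background noise raised from 50 to 75 dBC
speech_increase_db = 8.2   # average speech-level increase across talkers

lombard_slope = speech_increase_db / noise_increase_db
print(f"Lombard slope: {lombard_slope:.2f} dB/dB")  # ~0.33 dB/dB

# Per-group slopes would differ: the HI interlocutors increased their
# speech level by roughly 1 dB less than the NH groups, but estimating
# group slopes properly would require the per-condition speech levels.
```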

Altogether, the results of the current study show that the conversational dynamics of free triadic conversations are relatively stable in response to changes in communication difficulty. This contrasts with previous studies of task-bound dyadic conversations, where researchers found changes in many different measures of conversational dynamics (Hazan et al., 2018; Beechey et al., 2020a,b; Sørensen, 2021; Petersen et al., 2022). We can only speculate about what caused the observed stability of the conversational dynamics in the current study: Perhaps the interpersonal coordination of a triadic conversation, shaped by the many minds problem, influences the dynamics more than, e.g., altering the background noise. Perhaps the free conversations allowed the interlocutors to adjust their word choice, linguistic complexity, or body language to help overcome the increased communication difficulty. It is also possible that the conversational dynamics were determined by the fact that two out of three interlocutors were NH, who are potentially less affected by changes in the noise level. Unfortunately, we cannot know which, if any, of the reasons listed above caused the insensitivity of the measures of conversational dynamics to the changes in communication difficulty.

6 Conclusion

The current study explored whether the dynamics of a free group conversation were affected by the impaired hearing experienced by one of the three interlocutors, and whether noise and hearing-aid signal processing would influence them to the same extent as observed in dyadic conversations. It was hypothesized that any alteration of the communication difficulty (noise level, hearing loss, and HA processing) experienced by one or all interlocutors would affect the five measures of conversational dynamics (FTO median, FTO IQR, turn duration, speech level, and speaking time). This hypothesis could not be uniformly confirmed: Interlocutors with hearing loss showed the expected larger variability in turn-taking timing (FTO IQR), took up more speaking time, produced longer turn durations at high noise levels, and caused the NH interlocutors to speak louder, especially at high noise levels. However, contrary to expectations, the HI interlocutors initiated their turns faster (lower FTO median), not slower, than the NH interlocutors. An overall increase in the noise level of 25 dB caused an increase in the speech levels but did not affect the turn-taking timing, turn duration, or distribution of speaking time. Furthermore, improving listening for the HI interlocutors by providing HA amplification at low noise levels and directional sound processing at high noise levels had no effect on the conversational dynamics beyond the speech level: At low noise levels, providing HA amplification to the HI interlocutors caused all conversation partners to speak at a lower level; at high noise levels, providing directional sound processing caused the HI interlocutors to speak at a lower level.

From the current results, speech level was observed to be the measure of conversational dynamics most sensitive to alterations in the communication difficulty experienced by the group (background noise), as well as by the HI interlocutor when provided with HA amplification and directional sound processing.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving humans were approved by the Ethics Board of Copenhagen, Denmark (reference H-20068621). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

EP: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Acknowledgments

The author would like to thank Clinical Research Audiologist Els Walravens for her detailed and rigorous work in collecting the data for the current study.

Conflict of interest

EP is an employee at the hearing-aid manufacturing company WS Audiology.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Baker, R., and Hazan, V. (2011). DiapixUK: task materials for the elicitation of multiple spontaneous speech dialogs. Behav. Res. Methods 43, 761–770. doi: 10.3758/s13428-011-0075-y

Barthel, M., Sauppe, S., Levinson, S. C., and Meyer, A. S. (2016). The timing of utterance planning in task-oriented dialog: evidence from a novel list-completion paradigm. Front. Psychol. 7:1858. doi: 10.3389/fpsyg.2016.01858

Bates, D., Mächler, M., Bolker, B., and Walker, S. (2015). Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67, 1–48. doi: 10.18637/jss.v067.i01

Beechey, T., Buchholz, J. M., and Keidser, G. (2018). Measuring communication difficulty through effortful speech production during conversation. Speech Comm. 100, 18–29. doi: 10.1016/j.specom.2018.04.007

Beechey, T., Buchholz, J. M., and Keidser, G. (2020a). Hearing aid amplification reduces communication effort of people with hearing impairment and their conversation partners. J. Speech Lang. Hear. Res. 63, 1299–1311. doi: 10.1044/2020_JSLHR-19-00350

Beechey, T., Buchholz, J. M., and Keidser, G. (2020b). Hearing impairment increases communication effort during conversations in noise. J. Speech Lang. Hear. Res. 63, 305–320. doi: 10.1044/2019_JSLHR-19-00201

Bo Nielsen, J., Dau, T., and Neher, T. (2014). A Danish open-set speech corpus for competing-speech studies. J. Acoust. Soc. Am. 135, 407–420. doi: 10.1121/1.4835935

Bögels, S., Magyari, L., and Levinson, S. C. (2015). Neural signatures of response planning occur midway through an incoming question in conversation. Sci. Rep. 5, 1–11. doi: 10.1038/srep12881

Clark, H. H., and Fox Tree, J. E. (2002). Using uh and um in spontaneous speaking. Cognition 84, 73–111. doi: 10.1016/S0010-0277(02)00017-3

Cooney, G., Mastroianni, A. M., Abi-Esber, N., and Brooks, A. W. (2020). The many minds problem: disclosure in dyadic versus group conversation. Curr. Opin. Psychol. 31, 22–27. doi: 10.1016/j.copsyc.2019.06.032

Corps, R. E., Gambi, C., and Pickering, M. J. (2018). Coordinating utterances during turn-taking: the role of prediction, response preparation, and articulation. Discourse Process. 55, 230–240. doi: 10.1080/0163853X.2017.1330031

Gisladottir, R., Chwilla, D., and Levinson, S. (2015). Conversation electrified: ERP correlates of speech act recognition in underspecified utterances. PLoS One 10, 1–24. doi: 10.1371/journal.pone.0120068

Hazan, V., Tuomainen, O., Tu, L., Kim, J., Davis, C., Brungart, D., et al. (2018). How do aging and age-related hearing loss affect the ability to communicate effectively in challenging communicative conditions? Hear. Res. 369, 33–41. doi: 10.1016/j.heares.2018.06.009

Hazan, V., and Tuomainen, O. (2019). The effect of visual cues on speech characteristics of older and younger adults in an interactive task. 19th International Congress of Phonetic Sciences.

Heldner, M., and Edlund, J. (2010). Pauses, gaps and overlaps in conversations. J. Phon. 38, 555–568. doi: 10.1016/j.wocn.2010.08.002

Indefrey, P., and Levelt, W. (2004). The spatial and temporal signatures of word production components. Cognition 92, 101–144. doi: 10.1016/j.cognition.2002.06.001

Junqua, J. C. (1996). The influence of acoustics on speech production: a noise-induced stress phenomenon known as the Lombard reflex. Speech Comm. 20, 13–22. doi: 10.1016/S0167-6393(96)00041-6

Keidser, G., Dillon, H., Flax, M., Ching, T., and Brewer, S. (2011). The NAL-NL2 prescription procedure. Audiol. Res. 1, 88–90. doi: 10.4081/audiores.2011.e24

Levinson, S. C., and Torreira, F. (2015). Timing in turn-taking and its implications for processing models of language. Front. Psychol. 6, 1–17. doi: 10.3389/fpsyg.2015.00731

Lombard, E. (1911). Le signe de l'élévation de la voix. Ann. Malad. 27, 101–119.

Magyari, L., Bastiaansen, M. C. M., de Ruiter, J. P., and Levinson, S. C. (2014). Early anticipation lies behind the speed of response in conversation. J. Cogn. Neurosci. 26, 2530–2539. doi: 10.1162/jocn_a_00673

Nielsen, J. B., and Dau, T. (2009). Development of a Danish speech intelligibility test. Int. J. Audiol. 48, 729–741. doi: 10.1080/14992020903019312

Petersen, E. B., MacDonald, E. N., and Sørensen, A. (2022). The effects of hearing aid amplification and noise on conversational dynamics between normal-hearing and hearing-impaired talkers. Trends Hear. 26:23312165221103340. doi: 10.1177/23312165221103340

Petersen, E. B., Walravens, E., and Pedersen, A. K. (2022). Real-life listening in the lab: does wearing hearing aids affect the dynamics of a group conversation? Proceedings of the 26th Workshop on the Semantics and Pragmatics of Dialogue.

Sharma, M., Joshi, S., Chatterjee, T., and Hamid, R. (2022). A comprehensive empirical review of modern voice activity detection approaches for movies and TV shows. Neurocomputing 494, 116–131. doi: 10.1016/j.neucom.2022.04.084

Smeds, K., Wolters, F., and Rung, M. (2015). Estimation of signal-to-noise ratios in realistic sound scenarios. J. Am. Acad. Audiol. 26, 183–196. doi: 10.3766/jaaa.26.2.7

Södersten, M., Ternström, S., and Bohman, M. (2005). Loud speech in realistic environmental noise: Phonetogram data, perceptual voice quality, subjective ratings, and gender differences in healthy speakers. J. Voice 19, 29–46. doi: 10.1016/j.jvoice.2004.05.002

Sørensen, A. (2021). The effects of noise and hearing loss on conversational dynamics. PhD thesis, Technical University of Denmark. Available at: https://www.hea.healthtech.dtu.dk/-/media/centre/hea_hearing_systems/hea/english/research/phd-thesis-pdf/00_47_-sorensen.pdf?la=da&hash=BA1FD9620387CA7A7C0F284C1089554CDFD10331

Sørensen, A., MacDonald, E., and Lunner, T. (2019). Timing of turn taking between normal-hearing and hearing-impaired interlocutors. Proceedings of the International Symposium on Auditory and Audiological Research (ISAAR).

Stephens, D., and Zhao, F. (1996). Hearing impairment: special needs of the elderly. Folia Phoniatr. Logop. 48, 137–142. doi: 10.1159/000266400

Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., Heinemann, T., et al. (2009). Universals and cultural variation in turn-taking in conversation. Proc. Natl. Acad. Sci. USA 106, 10587–10592. doi: 10.1073/pnas.0903616106

Yngve, V. (1970). On getting a word in edgewise. Sixth Regional Meeting of the Chicago Linguistic Society.

Keywords: hearing loss, communication, hearing aids, noise, conversational dynamics

Citation: Petersen EB (2024) Investigating conversational dynamics in triads: Effects of noise, hearing impairment, and hearing aids. Front. Psychol. 15:1289637. doi: 10.3389/fpsyg.2024.1289637

Received: 13 September 2023; Accepted: 04 March 2024; Published: 12 April 2024.

Copyright © 2024 Petersen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Eline Borch Petersen, [email protected]
