Confirmation Bias In Psychology: Definition & Examples

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She is currently pursuing a Master's Degree in Counseling for Mental Health and Wellness, which she began in September 2023. Julia's research has been published in peer-reviewed journals.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Confirmation Bias is the tendency to look for information that supports, rather than rejects, one’s preconceptions, typically by interpreting evidence to confirm existing beliefs while rejecting or ignoring any conflicting data (American Psychological Association).

One of the early demonstrations of confirmation bias appeared in an experiment by Peter Wason (1960), in which the subjects were asked to find the experimenter's rule for sequencing numbers.

Its results showed that the subjects chose responses that supported their hypotheses while rejecting contradictory evidence, and even though their hypotheses were incorrect, they became confident in them quickly (Gray, 2010, p. 356).

Though such evidence of confirmation bias has appeared in psychological literature throughout history, the term ‘confirmation bias’ was first used in a 1977 paper detailing an experimental study on the topic (Mynatt, Doherty, & Tweney, 1977).

[Figure: Outline diagram of confirmation bias, the tendency to interpret information so that it confirms an existing opinion.]

Biased Search for Information

This type of confirmation bias explains people’s search for evidence in a one-sided way to support their hypotheses or theories.

Experiments have shown that people provide tests/questions designed to yield “yes” if their favored hypothesis is true and ignore alternative hypotheses that are likely to give the same result.

This is also known as the congruence heuristic (Baron, 2000, p.162-64). Though the preference for affirmative questions itself may not be biased, there are experiments that have shown that congruence bias does exist.

For Example:

If you were to search "Are cats better than dogs?" in Google, the results would skew heavily toward sites listing reasons why cats are better.

However, if you were to search "Are dogs better than cats?", Google would mostly return sites arguing that dogs are better than cats.

This shows that phrasing questions in a one-sided way (i.e., affirmative manner) will assist you in obtaining evidence consistent with your hypothesis.

Biased Interpretation

This type of bias describes how people interpret evidence in light of their existing beliefs, evaluating confirming evidence differently from evidence that challenges their preconceptions.

Various experiments have shown that people tend not to change their beliefs on complex issues even after being provided with research because of the way they interpret the evidence.

Additionally, people accept “confirming” evidence more easily and critically evaluate the “disconfirming” evidence (this is known as disconfirmation bias) (Taber & Lodge, 2006).

When provided with the same evidence, people’s interpretations could still be biased.

For example:

Biased interpretation is shown in an experiment conducted at Stanford University on the topic of capital punishment. It included participants who were in support of capital punishment and others who were against it.

All subjects were provided with the same two studies.

After reading the detailed descriptions of the studies, participants still held their initial beliefs and supported their reasoning by providing “confirming” evidence from the studies and rejecting any contradictory evidence or considering it inferior to the “confirming” evidence (Lord, Ross, & Lepper, 1979).

Biased Memory

To confirm their current beliefs, people may remember/recall information selectively. Psychological theories vary in defining memory bias.

Some theories state that information confirming prior beliefs is stored in memory while contradictory evidence is not (e.g., schema theory). Others claim that striking information is remembered best (e.g., the humor effect).

Memory confirmation bias also serves a role in stereotype maintenance. Experiments have shown that the mental association between expectancy-confirming information and the group label strongly affects recall and recognition memory.

Though a certain stereotype about a social group might not be true for an individual, people tend to remember the stereotype-consistent information better than any disconfirming evidence (Fyock & Stangor, 1994).

In one experimental study, participants were asked to read a woman’s profile (detailing her extroverted and introverted skills) and assess her for either a job of a librarian or real-estate salesperson.

Those assessing her as a salesperson better recalled extroverted traits, while the other group recalled more examples of introversion (Snyder & Cantor, 1979).

These experiments, along with others, have offered insight into selective memory and provided evidence for biased memory, showing that people search for and better remember confirming evidence.


Social Media

Information we are presented on social media is not only reflective of what the users want to see but also of the designers’ beliefs and values. Today, people are exposed to an overwhelming number of news sources, each varying in their credibility.

To form conclusions, people tend to read news that aligns with their perspectives. For instance, news channels provide information (even the same news) differently from each other on complex issues (e.g., racism, political parties), with some using sensational headlines/pictures and one-sided information.

Due to the biased coverage of topics, people tend to rely on only certain channels or sites for their information and thus reach biased conclusions.

Religious Faith

People also tend to search for and interpret evidence with respect to their religious beliefs (if any).

For instance, on the topics of abortion and transgender rights, people whose religions are against such things will interpret this information differently than others and will look for evidence to validate what they believe.

Similarly, those who religiously reject the theory of evolution will either gather information disproving evolution or hold no official stance on the topic.

Also, irreligious people might perceive events that are considered “miracles” and “test of faiths” by religious people to be a reinforcement of their lack of faith in a religion.

When Does Confirmation Bias Occur?

There are several explanations why humans possess confirmation bias, including this tendency being an efficient way to process information, protect self-esteem, and minimize cognitive dissonance.

Information Processing

Confirmation bias serves as an efficient way to process information because of the limitless information humans are exposed to.

To form an unbiased decision, one would have to critically evaluate every piece of information available, which is unfeasible. Therefore, people tend to look only for the information that supports their desired conclusions (Casad, 2019).

Protect Self-esteem

People are susceptible to confirmation bias to protect their self-esteem (to know that their beliefs are accurate).

To make themselves feel confident, they tend to look for information that supports their existing beliefs (Casad, 2019).

Minimize Cognitive Dissonance

Cognitive dissonance also explains why confirmation bias is adaptive.

Cognitive dissonance is a mental conflict that occurs when a person holds two contradictory beliefs; it causes psychological stress or unease.

To minimize this dissonance, people adapt to confirmation bias by avoiding information that is contradictory to their views and seeking evidence confirming their beliefs.

Challenge avoidance and reinforcement seeking affect people's thoughts and reactions differently: exposure to disconfirming information produces negative emotions, whereas seeking reinforcing evidence does not ("The Confirmation Bias: Why People See What They Want to See").

Implications

Confirmation bias consistently shapes the way we look for and interpret information that influences our decisions, from the home to global platforms. This bias prevents people from gathering information objectively.

During election campaigns, people tend to look for information confirming their perspectives on different candidates while ignoring any information contradictory to their views.

This subjective manner of obtaining information can lead to overconfidence in a candidate and misinterpretation or overlooking of important information, thus influencing their voting decisions and, eventually, the country's leadership (Cherry, 2020).

Recruitment and Selection

Confirmation bias also affects employment diversity because preconceived ideas about different social groups can introduce discrimination (though it might be unconscious) and impact the recruitment process (Agarwal, 2018).

The existing belief that certain groups are more competent than others is one reason why particular races and genders remain the most represented in companies today. This bias can hamper a company's attempts to diversify its employees.

Mitigating Confirmation Bias

Change in intrapersonal thought:

To avoid being susceptible to confirmation bias, start questioning your research methods and the sources you use to obtain information.

Expanding the types of sources used in searching for information can surface different aspects of a particular topic and help you weigh the credibility of each source.

  • Read entire articles rather than forming conclusions based on the headlines and pictures.
  • Search for credible evidence presented in the article.
  • Analyze whether the statements being asserted are backed up by trustworthy evidence (tracking the source of evidence can establish its credibility).
  • Encourage yourself and others to gather information in a conscious manner.

Alternative hypothesis:

Confirmation bias occurs when people tend to look for information that confirms their beliefs/hypotheses, but this bias can be reduced by taking into account alternative hypotheses and their consequences.

Considering the possibility of beliefs/hypotheses other than one’s own could help you gather information in a more dynamic manner (rather than a one-sided way).

Related Cognitive Biases

Many cognitive biases can be characterized as subtypes of confirmation bias. Two of these subtypes are described below:

Backfire Effect

The backfire effect occurs when people’s preexisting beliefs strengthen when challenged by contradictory evidence (Silverman, 2011).

  • Therefore, disproving a misconception can actually strengthen a person’s belief in that misconception.

One piece of disconfirming evidence does not change people’s views, but a constant flow of credible refutations could correct misinformation/misconceptions.

This effect is considered a subtype of confirmation bias because it explains people’s reactions to new information based on their preexisting hypotheses.

A study by Brendan Nyhan and Jason Reifler (two researchers on political misinformation) explored the effects of different types of statements on people’s beliefs.

While examining two statements, "I am not a Muslim, Obama says." and "I am a Christian, Obama says," they concluded that the latter statement was more persuasive and resulted in a change of beliefs, suggesting that affirmative statements are more effective at correcting false views (Silverman, 2011).

Halo Effect

The halo effect occurs when people use impressions from a single trait to form conclusions about other unrelated attributes. It is heavily influenced by the first impression.

Research on this effect was pioneered by American psychologist Edward Thorndike who, in 1920, described ways officers rated their soldiers on different traits based on first impressions (Neugaard, 2019).

Experiments have shown that when positive attributes are presented first, a person is judged more favorably than when negative traits are shown first. This is a subtype of confirmation bias because it allows us to structure our thinking about other information using only initial evidence.

Learning Check

When does confirmation bias occur?

  • A. When an individual only researches information that is consistent with personal beliefs.
  • B. When an individual only makes a decision after all perspectives have been evaluated.
  • C. When an individual becomes more confident in one's judgments after researching alternative perspectives.
  • D. When an individual believes that the odds of an event occurring increase if the event hasn't occurred recently.

The correct answer is A. Confirmation bias occurs when an individual only researches information consistent with personal beliefs. This bias leads people to favor information that confirms their preconceptions or hypotheses, regardless of whether the information is true.

Take-home Messages

  • Confirmation bias is the tendency of people to favor information that confirms their existing beliefs or hypotheses.
  • Confirmation bias happens when a person gives more weight to evidence that confirms their beliefs and undervalues evidence that could disprove it.
  • People display this bias when they gather or recall information selectively or when they interpret it in a biased way.
  • The effect is stronger for emotionally charged issues and for deeply entrenched beliefs.

Agarwal, P. (2018, October 19). Here is how bias can affect recruitment in your organisation. Forbes. https://www.forbes.com/sites/pragyaagarwaleurope/2018/10/19/how-can-bias-during-interviewsaffect-recruitment-in-your-organisation

American Psychological Association. (n.d.). APA dictionary of psychology. https://dictionary.apa.org/confirmation-bias

Baron, J. (2000). Thinking and deciding (3rd ed.). Cambridge University Press.

Casad, B. (2019, October 9). Confirmation bias. Encyclopedia Britannica. https://www.britannica.com/science/confirmation-bias

Cherry, K. (2020, February 19). Why do we favor information that confirms our existing beliefs? Verywell Mind. https://www.verywellmind.com/what-is-a-confirmation-bias-2795024

Fyock, J., & Stangor, C. (1994). The role of memory biases in stereotype maintenance. British Journal of Social Psychology, 33(3), 331–343.

Gray, P. O. (2010). Psychology. Worth Publishers.

Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109.

Mynatt, C. R., Doherty, M. E., & Tweney, R. D. (1977). Confirmation bias in a simulated research environment: An experimental study of scientific inference. Quarterly Journal of Experimental Psychology, 29(1), 85–95.

Neugaard, B. (2019, October 9). Halo effect. Encyclopedia Britannica. https://www.britannica.com/science/halo-effect

Silverman, C. (2011, June 17). The backfire effect. Columbia Journalism Review. https://archives.cjr.org/behind_the_news/the_backfire_effect.php

Snyder, M., & Cantor, N. (1979). Testing hypotheses about other people: The use of historical knowledge. Journal of Experimental Social Psychology, 15(4), 330–342.

Further Information

  • What Is Confirmation Bias and When Do People Actually Have It?
  • Confirmation Bias: A Ubiquitous Phenomenon in Many Guises
  • The importance of making assumptions: why confirmation is not necessarily a bias
  • Decision Making Is Caused By Information Processing And Emotion: A Synthesis Of Two Approaches To Explain The Phenomenon Of Confirmation Bias

Confirmation bias occurs when individuals selectively collect, interpret, or remember information that confirms their existing beliefs or ideas, while ignoring or discounting evidence that contradicts these beliefs.

This bias can happen unconsciously and can influence decision-making and reasoning in various contexts, such as research, politics, or everyday decision-making.

What is confirmation bias in psychology?

Confirmation bias in psychology is the tendency to favor information that confirms existing beliefs or values. People exhibiting this bias are likely to seek out, interpret, remember, and give more weight to evidence that supports their views, while ignoring, dismissing, or undervaluing the relevance of evidence that contradicts them.

This can lead to faulty decision-making because one-sided information doesn’t provide a full picture.



Confirmation Bias: Seeing What We Want to Believe


Confirmation bias is a widely recognized phenomenon and refers to our tendency to seek out evidence in line with our current beliefs and stick to ideas even when the data contradicts them (Lidén, 2023).

Evolutionary and cognitive psychologists agree that we naturally tend to be selective and look for information we already know (Buss, 2016).

This article explores this tendency, how it happens, why it matters, and what we can do to get better at recognizing it and reducing its impact.



We can understand the confirmation bias definition as the human tendency “to seek out, to interpret, to favor, and to selectively recall information that confirms beliefs they already hold, while avoiding or ignoring information that disconfirms these beliefs” (Gabriel & O’Connor, 2024, p. 1).

While it has been known and accepted since at least the 17th century that humans are inclined to form and hold on to ideas and beliefs — often tenaciously — even when faced with contradictory evidence, the term “confirmation bias” only became popular in the 1960s with the work of cognitive psychologist Peter Cathcart Wason (Lidén, 2023).

Wason’s (1960) famous 2–4–6 experiment was devised to investigate the nature of hypothesis testing.

Participants were given the numbers 2, 4, and 6 and told the numbers adhered to a rule.

They were then asked to arrive at a hypothesis explaining the sequence and try a new three-number series to test their rule (Wason, 1960; Lidén, 2023).

For example, if a participant thought the second number was twice that of the first and the third number was three times greater, they might suggest the numbers 10, 20, and 30.

However, if another participant thought it was a simple series increasing by two each time, they might suggest 13, 15, and 17 (Wason, 1960; Lidén, 2023).

The actual rule is more straightforward; the numbers are in ascending order. That’s all.

As we typically offer tests that confirm our initial beliefs, both example hypotheses appear to work, even if they are not the answer (Wason, 1960; Lidén, 2023).

The experiment demonstrates our confirmation bias; we seek information confirming our existing beliefs or hypotheses rather than challenging or disproving them (Lidén, 2023).
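To see why confirmatory testing fails here, consider a minimal Python sketch (the rule functions and probe triples below are illustrative, not taken from Wason's materials). A participant who believes the rule is "increase by two each time" and only proposes triples consistent with that belief gets "yes" on every trial, because every such triple is also in ascending order; only a probe that violates the participant's own hypothesis can expose the difference:

```python
# Hypothetical simulation of the 2-4-6 task.
true_rule = lambda a, b, c: a < b < c                    # experimenter's rule: ascending order
hypothesis = lambda a, b, c: b == a + 2 and c == b + 2   # participant's guess: +2 steps

# Positive-test strategy: only propose triples that fit the hypothesis.
for triple in [(1, 3, 5), (10, 12, 14), (100, 102, 104)]:
    print(triple, "->", true_rule(*triple))   # True every time: feels like confirmation

# Disconfirming probe: a triple the hypothesis rejects but the true rule accepts.
print((2, 5, 11), "->", true_rule(2, 5, 11))  # True: so the "+2" hypothesis must be wrong
```

Every confirming probe returns "yes," so confidence grows, yet nothing has distinguished the hypothesis from the true rule; the single disconfirming probe does so immediately.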

In the decades since, and with developments in cognitive science, we have come to understand that people don’t typically have everything they need, “and even if they did, they would not be able to use all the information due to constraints in the environment, attention, or memory” (Lidén, 2023, p. 8).

Instead, we rely on heuristics. Such “rules of thumb” are easy to apply and fairly accurate, yet they can potentially result in systematic and serious biases and errors in judgment (Lidén, 2023; Eysenck & Keane, 2015).

Confirmation bias in context

Confirmation bias is one of several cognitive biases (Lidén, 2023).

They are important because researchers have recognized that "vulnerability to clinical anxiety and depression depends in part on various cognitive biases" and that mental health treatments such as CBT should support the goals of reducing them (Eysenck & Keane, 2015, p. 668).

Cognitive biases include (Eysenck & Keane, 2015):

  • Attentional bias – attending to threat-related stimuli more than neutral stimuli
  • Interpretive bias – interpreting ambiguous stimuli, situations, and events as threatening
  • Explicit memory bias – the likelihood of retrieving mostly unpleasant thoughts rather than positive ones
  • Implicit memory bias – the tendency to perform better for negative or threatening information on memory tests

Individuals possessing all four biases focus too much on environmental threats, interpret most incidents as concerning, and identify themselves as having experienced mostly unpleasant past events (Eysenck & Keane, 2015).

Similarly, confirmation bias means that individuals give too much weight to evidence that confirms their preconceptions or hypotheses, even incorrect and unhelpful ones. It can lead to poor decision-making because it limits their ability to consider alternative viewpoints or evidence that contradicts their beliefs (Lidén, 2023).

Unsurprisingly, such a negative outlook or bias will lead to unhealthy outcomes, including anxiety and depression (Eysenck & Keane, 2015).


Confirmation bias is commonplace and typically has a low impact, yet there are times when it is significant and newsworthy (Eysenck & Keane, 2015; Lidén, 2023).

Limits of information

In 2005, terrorists detonated four bombs in London (three on the London Underground and one on a bus), killing 52 and injuring 700 civilians. In the chaotic weeks that followed, a further attempt failed to detonate a suicide bomb, and the individual got away (Lidén, 2023).

Unsurprisingly, a mass hunt was launched to capture the escaped bomber, and many suspects came under surveillance. Yet, the security services made several significant mistakes.

On July 22, 2005, a man living in the same house as two suspects and bearing a resemblance to one of them was shot dead on an Underground train by officers.

“The context with the previous bombings, the available intelligence, and the pre-operation briefings, created expectations that the surveillance team would spot a suicide bomber leaving the doorway” (Lidén, 2023, p. 37).

The wrong man died because the officers involved failed to see the limits of the information available to them at the time.

Witness identification

In 1976, factory worker John Demjanjuk from Cleveland, Ohio, was identified as a Nazi war criminal known as Ivan the Terrible, perpetrator of many killings within prison camps in the Second World War (Lidén, 2023).

Due to the individual’s denial and limited evidence, the case rested on proof of identity via a photo line-up. However, it became known that “Ivan the Terrible” had a round face and was bald.

As the defendant was the only individual who matched the description, he was chosen by all the witnesses (Lidén, 2023).

Whether or not the witnesses were genuinely able to identify the factory worker as the criminal became irrelevant. The case centered around the unfairness of the line-up and the confirmation bias that resulted from the information they had been given (Lidén, 2023).

Years later, in 2012, following continuing challenges to his identity, John Demjanjuk died while an appeal of his conviction in a German court was still pending. His identity remained unclear, as the confirmation bias remained ("Ivan the Terrible," 2024).


Confirmation bias can significantly impact our own and others’ lives (Lidén, 2023; Kappes et al., 2020).

For that reason, it is helpful to understand why it happens and the psychological factors involved. Research confirms that people (Lidén, 2023; Kappes et al., 2020; Eysenck & Keane, 2015):

  • Don’t like to let go of their initial hypothesis
  • Prefer to use as much information as is initially available, often resulting in a too specific hypothesis
  • Show more confirmation bias toward their own hypotheses than toward those of others
  • Are more likely to adopt a confirmation bias when under high cognitive load
  • With a lower degree of intelligence are more likely to engage in confirmation bias (most likely because they are less able to manage higher cognitive loads and see the overall picture)
  • With cognitive impairments are more impacted by confirmation bias
  • Are often unable to actively consider and understand all relevant information to challenge the existing hypothesis or make a new one
  • Are influenced by their emotions and motivations and potentially “blinded” to the facts
  • Are biased by existing thoughts and beliefs (sometimes cultural), even if incorrect
  • Are influenced by the beliefs and arguments of those around them

Recognize confirmation bias

  • Recognize that confirmation bias exists and understand its impact on decision-making and how you interpret information.
  • Actively seek out and consider different viewpoints, opinions, and sources of information that challenge your existing beliefs and hypotheses.
  • Develop critical thinking skills that evaluate evidence and arguments objectively without favoring preconceived notions or desired outcomes.
  • Be aware of your biases and open to questioning your beliefs and assumptions.
  • Explore alternative explanations or hypotheses that may contradict your initial beliefs or interpretations.
  • Welcome feedback and criticism from others, even if they challenge your ideas; recognize it as an opportunity to learn and grow.
  • Apply systematic and rigorous methods to gather and analyze data, ensuring your conclusions are evidence-based rather than a result of personal biases.
  • Engage in collaborative discussions and debates with individuals with different perspectives to help see other viewpoints and challenge your biases.
  • Continuously seek new information and update your knowledge base to avoid becoming entrenched and support more-informed decision-making.
  • Practice analytical thinking, questioning assumptions, evaluating evidence objectively, and considering alternate explanations.

As far back as 1968, Karl Popper recognized that falsifiability (being able to prove that something can be incorrect or false) is crucial to all scientific inquiry, impacting researchers’ behavior and experimental outcomes.

As scientists, Popper argued, we should focus on looking for examples of why a theory does not work instead of seeking confirmation of its correctness. More recently, researchers have also considered that when findings suggest a theory is false, it may be due to issues with the experimental design or data accuracy (Eysenck & Keane, 2015).

Yet, confirmation bias has been an issue for a long time in scientific discovery and remains a challenge.

When researchers looked back at the work of Alexander Graham Bell in developing the telephone, they found that, due to confirmation bias, he ignored promising new approaches in favor of his tried-and-tested ones. It ultimately led to Thomas Edison being the first to develop the forerunner of today’s telephone (Eysenck & Keane, 2015).

More recently, a study showed that 88% of professional scientists working on issues in molecular biology responded to unexpected and inconsistent findings by blaming their experimental methods; they ignored the suggestion that they may need to modify, or even replace, their theories (Eysenck & Keane, 2015).

However, when those same scientists changed their approach yet obtained similarly inconsistent results, 61% revisited their theoretical assumptions (Eysenck & Keane, 2015).

Failure to report null research findings is also a problem. It is known as the “file drawer problem” because data remains unseen in the bottom drawer as the researcher does not attempt to get findings published or because journals show no interest in them (Lidén, 2023).

Positive confirmation bias

Researchers have recognized several potential benefits that arise from our natural inclination to seek out confirmation that we are right, including (Peters, 2022; Gabriel & O’Connor, 2024; Bergerot et al., 2023):

  • Assisting in the personal development of individuals by reinforcing their positive self-conceptions and traits
  • Helping individuals shape social structures by persuading others to adopt their viewpoints
  • Supporting increased confidence by reinforcing individuals’ beliefs and ignoring contradictory evidence
  • Contributing to social conformity and stability by reinforcing shared beliefs and values within a group, potentially boosting cooperation and coordination
  • Encouraging decision-making by removing uncertainty and doubt
  • Increasing the knowledge-producing capacity of a group by supporting a deeper exploration of individual members’ perspectives

It’s vital to note that the possible benefits also have their limitations. They potentially favor the individual at the cost of others’ needs while potentially distorting and hindering the formation of well-founded beliefs (Peters, 2022).


We have many resources for coaches and therapists to help individuals and groups understand and manage their biases.

Why not download our free 3 Positive CBT Exercises Pack and try out the powerful tools contained within? Some examples include the following:

  • Re-Framing Critical Self-Talk – Self-criticism typically involves judgment and self-blame regarding our shortcomings (real or imagined), such as our inability to accomplish personal goals and meet others' expectations. In this exercise, we use self-talk to help us reduce self-criticism and cultivate a kinder, compassionate relationship with ourselves.
  • Solution-Focused Guided Imagery – Solution-focused therapy assumes we have the resources required to resolve our issues. Here, we learn how to connect with our strengths and overcome the challenges we face.

Other free resources include:

  • The What-If Bias – We often get caught up in our negative biases, thinking about potentially dire outcomes rather than adopting rational beliefs. This exercise helps us regain a more realistic and balanced perspective.
  • Becoming Aware of Assumptions – We all bring biases into our daily lives, particularly conversations. In this helpful exercise, we picture how things might be in five years to put them into context.

More extensive versions of the following tools are available with a subscription to the Positive Psychology Toolkit© , but they are described briefly below.

  • Increasing Awareness of Cognitive Distortions

Cognitive distortions refer to our biased thinking about ourselves and our environment. This tool helps reduce the effect of the distortions by dismantling them.

  • Step one – Begin by exploring cognitive distortions, such as all-or-nothing thinking, jumping to conclusions, and catastrophizing.
  • Step two – Next, identify the cognitive distortions relevant to your situation.
  • Step three – Reflect on your thinking patterns, how they could harm you, and how you interact with others.

  • Finding Silver Linings

We tend to dwell on the things that go wrong in our lives. We may even begin to think our days are filled with mishaps and disappointments.

Rather than solely focusing on things that have gone wrong, it can help to look on the bright side. Try the following:

  • Step one – Create a list of things that make you feel life is worthwhile, enjoyable, and meaningful.
  • Step two – Think of a time when things didn’t go how you wanted them to.
  • Step three – Reflect on what this difficulty cost you.
  • Step four – Finally, consider what you may have gained from the experience. Write down three positives.

If you’re looking for more science-based ways to help others through CBT, check out this collection of 17 validated positive CBT tools for practitioners. Use them to help others overcome unhelpful thoughts and feelings and develop more positive behaviors.

We can’t always trust what we hear or see because our beliefs and expectations influence so much of how we interact with the world.

Confirmation bias refers to our natural inclination to seek out and focus on what confirms our beliefs, often ignoring anything that contradicts them.

While its effects have been recognized for centuries, it still receives considerable research focus because of its impact on us individually and as a society, often causing us to make poor decisions and leading to damaging outcomes.

Confirmation bias has several sources and triggers, including our unwillingness to relinquish our initial beliefs (even when incorrect), preference for personal hypotheses, cognitive load, and cognitive impairments.

However, most of us can reduce confirmation bias with practice and training. We can become more aware of such inclinations and seek out challenges or alternate explanations for our beliefs.

It matters because confirmation bias can influence how we work, the research we base decisions on, and how our clients manage their relationships with others and their environments.

We hope you enjoyed reading this article. For more information, don’t forget to download our three Positive CBT Exercises for free .

  • Bergerot, C., Barfuss, W., & Romanczuk, P. (2023). Moderate confirmation bias enhances collective decision-making. bioRxiv. https://www.biorxiv.org/content/10.1101/2023.11.21.568073v1.full
  • Buss, D. M. (2016). Evolutionary psychology: The new science of the mind. Routledge.
  • Eysenck, M. W., & Keane, M. T. (2015). Cognitive psychology: A student's handbook. Psychology Press.
  • Gabriel, N., & O'Connor, C. (2024). Can confirmation bias improve group learning? PhilSci Archive. https://philsci-archive.pitt.edu/20528/
  • Ivan the Terrible (Treblinka guard). (2024). In Wikipedia. https://en.wikipedia.org/wiki/Ivan_the_Terrible_(Treblinka_guard)
  • Kappes, A., Harvey, A. H., Lohrenz, T., Montague, P. R., & Sharot, T. (2020). Confirmation bias in the utilization of others' opinion strength. Nature Neuroscience, 23(1), 130–137.
  • Lidén, M. (2023). Confirmation bias in criminal cases. Oxford University Press.
  • Peters, U. (2022). What is the function of confirmation bias? Erkenntnis, 87, 1351–1376.
  • Popper, K. R. (1968). The logic of scientific discovery. Hutchinson.
  • Rist, T. (2023). Confirmation bias studies: Towards a scientific theory in the humanities. SN Social Sciences, 3(8).
  • Wason, P. C. (1960). On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3), 129–140.


7.3 Problem-Solving

Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving

   People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

The study of human and animal problem-solving processes has provided much insight into our conscious experience and has led to advances in computer science and artificial intelligence. Much of cognitive science today studies how we consciously and unconsciously make decisions and solve problems. For instance, when confronted with a large amount of information, how do we decide on the most efficient way of sorting and analyzing it all to find what we are looking for, as in visual search paradigms in cognitive psychology? Or when a piece of machinery is not working properly, how do we organize our approach to the issue and work out what the cause of the problem might be? How do we sequence the necessary procedures and focus attention on what is important so that we solve problems efficiently? Within this section, we will discuss some of these issues and examine processes related to human, animal, and computer problem solving.

PROBLEM-SOLVING STRATEGIES

When people are presented with a problem—whether it is a complex mathematical problem or a broken printer—how do they solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.

Problems themselves can be classified into two different categories known as ill-defined and well-defined problems (Schacter, 2009). Ill-defined problems represent issues that do not have clear goals, solution paths, or expected solutions, whereas well-defined problems have specific goals, clearly defined solution paths, and clear expected solutions. Problem solving often incorporates pragmatics (logical reasoning) and semantics (interpretation of the meanings behind the problem), and in many cases it also requires abstract thinking and creativity in order to find novel solutions. Within psychology, problem solving refers to a motivational drive for reaching a definite "goal" from a present situation or condition that is either not moving toward that goal, is distant from it, or requires more complex logical analysis for finding a missing description of conditions or steps toward that goal. Processes relating to problem solving include problem finding, also known as problem analysis; problem shaping, where the organization of the problem occurs; generating alternative strategies; implementing attempted solutions; and verifying the selected solution. Various methods of studying problem solving exist within the field of psychology, including introspection, behavior analysis and behaviorism, simulation, computer modeling, and experimentation.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them (table below). For example, a well-known strategy is trial and error. The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

   Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
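As a minimal sketch of the recipe analogy from the start of this section (the ingredient names and amounts below are invented for illustration), an algorithm applies the same fixed steps to any input and yields the same result every time:

```python
# Doubling a recipe as a simple algorithm: deterministic, step-by-step, repeatable.
def scale_recipe(recipe, factor):
    """Multiply every ingredient quantity by the same factor."""
    return {ingredient: amount * factor for ingredient, amount in recipe.items()}

dough = {"flour_g": 500, "water_ml": 325, "yeast_g": 7, "salt_g": 10}
print(scale_recipe(dough, 2))  # every quantity exactly doubled
```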

A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
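A short sketch of the wedding arithmetic (the calendar date and the 45-minute traffic cushion are assumptions added for illustration, not figures from the text):

```python
from datetime import datetime, timedelta

# Work backwards from the goal (seated by 3:30 PM) to the latest departure time.
arrival_goal = datetime(2024, 6, 1, 15, 30)   # a hypothetical Saturday, 3:30 PM
drive_time = timedelta(hours=2, minutes=30)   # D.C. to Philadelphia without traffic
traffic_buffer = timedelta(minutes=45)        # assumed cushion for I-95 backups

leave_by = arrival_goal - drive_time - traffic_buffer
print(leave_by.strftime("%I:%M %p"))          # 12:15 PM
```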

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Further problem solving strategies have been identified (listed below) that incorporate flexible and creative thinking in order to reach solutions efficiently.

Additional Problem Solving Strategies :

  • Abstraction – refers to solving the problem within a model of the situation before applying it to reality.
  • Analogy – is using a solution that solves a similar problem.
  • Brainstorming – refers to collecting and analyzing a large number of possible solutions, especially within a group of people, and combining and developing them until an optimal solution is reached.
  • Divide and conquer – breaking down large complex problems into smaller more manageable problems.
  • Hypothesis testing – a method used in experimentation in which an assumption about what will happen in response to manipulating an independent variable is made, and the effects of the manipulation are analyzed and compared to the original hypothesis.
  • Lateral thinking – approaching problems indirectly and creatively by viewing the problem in a new and unusual light.
  • Means-ends analysis – analyzing the difference between the current state and the goal, then choosing and carrying out a series of smaller steps to move closer to the goal.
  • Method of focal objects – putting seemingly non-matching characteristics of different procedures together to make something new that will get you closer to the goal.
  • Morphological analysis – analyzing the outputs of and interactions of many pieces that together make up a whole system.
  • Proof – trying to prove that a problem cannot be solved; the point where the proof fails becomes the starting point for solving the problem.
  • Reduction – transforming the problem into a similar problem for which a solution exists.
  • Research – using existing knowledge or solutions to similar problems to solve the problem.
  • Root cause analysis – trying to identify the cause of the problem.

The strategies listed above give a short summary of the methods we use in working toward solutions and also demonstrate how the mind works when faced with barriers that prevent goals from being reached.

One example of means-ends analysis can be found in the Tower of Hanoi paradigm. This paradigm can be modeled as a word problem, as demonstrated by the Missionary-Cannibal Problem:

Missionary-Cannibal Problem

Three missionaries and three cannibals are on one side of a river and need to cross to the other side. The only means of crossing is a boat, and the boat can only hold two people at a time. Your goal is to devise a set of moves that will transport all six of the people across the river, bearing in mind the following constraint: the number of cannibals can never exceed the number of missionaries in any location. Remember that someone will have to row the boat back across each time.

Hint: At one point in your solution, you will have to send more people back to the original side than you just sent to the destination.
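For readers who want to see the search space made explicit, here is a minimal breadth-first search in Python (the state encoding and helper names are our own, not part of the textbook); it finds the shortest legal sequence of crossings:

```python
from collections import deque

def valid(m, c):
    """A bank is safe if cannibals never outnumber the missionaries present there."""
    for mm, cc in ((m, c), (3 - m, 3 - c)):
        if mm and cc > mm:
            return False
    return 0 <= m <= 3 and 0 <= c <= 3

def solve():
    # State: (missionaries on start bank, cannibals on start bank, boat on start bank?)
    start, goal = (3, 3, 1), (0, 0, 0)
    parents = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:                       # reconstruct the path of states
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        m, c, b = state
        for dm, dc in ((1, 0), (2, 0), (0, 1), (0, 2), (1, 1)):  # boat holds 1 or 2
            sign = -1 if b else 1               # crossing moves people off/onto start bank
            nxt = (m + sign * dm, c + sign * dc, 1 - b)
            if valid(nxt[0], nxt[1]) and nxt not in parents:
                parents[nxt] = state
                queue.append(nxt)

for step in solve():
    print(step)                                 # 11 crossings in the shortest solution
```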

The actual Tower of Hanoi problem consists of three rods sitting vertically on a base with a number of disks of different sizes that can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top making a conical shape. The objective of the puzzle is to move the entire stack to another rod obeying the following rules:

  • Only one disk can be moved at a time.
  • Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack or on an empty rod.
  • No disk may be placed on top of a smaller disk.
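The standard recursive solution makes the means-ends structure of the puzzle explicit: to move n disks, first clear the n−1 smaller disks onto the spare rod, move the largest disk, then rebuild the stack on top of it. A minimal Python sketch (the rod labels are arbitrary):

```python
def hanoi(n, source, target, spare, moves):
    """Append the moves that shift n disks from source to target, using spare."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the smaller disks out of the way
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # rebuild the smaller disks on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), "moves:", moves)              # 2**3 - 1 = 7 moves for 3 disks
```

The move count, 2^n − 1, matches the minimum number of moves shown for three disks in Figure 7.02.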

Figure 7.02. Steps for solving the Tower of Hanoi in the minimum number of moves when there are 3 disks.

Figure 7.03. Graphical representation of nodes (circles) and moves (lines) of the Tower of Hanoi.

The Tower of Hanoi is a frequently used psychological technique to study problem solving and procedure analysis. A variation of the Tower of Hanoi known as the Tower of London has been developed and has become an important tool in the neuropsychological diagnosis of executive function disorders and their treatment.

GESTALT PSYCHOLOGY AND PROBLEM SOLVING

As you may recall from the sensation and perception chapter, Gestalt psychology describes whole patterns, forms, and configurations of perception and cognition, such as closure, good continuation, and figure-ground. In addition to studying patterns of perception, Wolfgang Kohler, a German Gestalt psychologist, traveled to the Spanish island of Tenerife in order to study animal behavior and problem solving in the anthropoid ape.

As an interesting side note to Kohler's studies of chimp problem solving, Dr. Ronald Ley, professor of psychology at the State University of New York, provides evidence in his book A Whisper of Espionage (1990) suggesting that while collecting data on Tenerife in the Canary Islands between 1914 and 1920 for what would later become his book The Mentality of Apes (1925), Kohler was also an active spy for the German government, alerting Germany to ships sailing around the Canary Islands. Ley suggests that his investigations in England, Germany, and elsewhere in Europe confirm that Kohler had served the German military by building, maintaining, and operating a concealed radio that contributed to Germany's war effort, acting as a strategic outpost in the Canary Islands that could monitor naval military activity approaching the North African coast.

While trapped on the island over the course of World War I, Kohler applied Gestalt principles to animal perception in order to understand how animals solve problems. He recognized that the apes on the island also perceive relations between stimuli and the environment in Gestalt patterns and understand these patterns as wholes, as opposed to pieces that make up a whole. Kohler based his theories of animal intelligence on the ability to understand relations between stimuli, and he spent much of his time while trapped on the island investigating what he described as insight, the sudden perception of useful or proper relations. In order to study insight in animals, Kohler would present problems to chimpanzees by hanging bananas or some other kind of food so that it was suspended higher than the apes could reach. Within the room, Kohler would arrange a variety of boxes, sticks, or other tools the chimpanzees could use, by combining them in patterns or organizing them in a way that would allow them to obtain the food (Kohler & Winter, 1925).

While observing the chimpanzees, Kohler noticed one chimp that was more efficient at solving problems than some of the others. The chimp, named Sultan, was able to use long poles to reach through bars and organize objects in specific patterns to obtain food or other desirables that were originally out of reach. In order to study insight within these chimps, Kohler would remove objects from the room to systematically make the food more difficult to obtain. As the story goes, after Kohler removed many of the objects Sultan was used to using to obtain the food, Sultan sat down and sulked for a while, then suddenly got up and went over to two poles lying on the ground. Without hesitation, Sultan put one pole inside the end of the other, creating a longer pole that he could use to obtain the food, demonstrating an ideal example of what Kohler described as insight. In another situation, Sultan discovered how to stand on a box to reach a banana that was suspended from the rafters, illustrating Sultan's perception of relations and the importance of insight in problem solving.

Grande (another chimp in the group studied by Kohler) builds a three-box structure to reach the bananas, while Sultan watches from the ground.  Insight , sometimes referred to as an “Ah-ha” experience, was the term Kohler used for the sudden perception of useful relations among objects during problem solving (Kohler, 1927; Radvansky & Ashcraft, 2013).

Solving Puzzles

   Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below (see figure) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.

How long did it take you to solve this sudoku puzzle? (You can see the answer at the end of this section.)

   Here is another popular type of puzzle (figure below) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

Did you figure it out? (The answer is at the end of this section.) Once you understand how to crack this puzzle, you won’t forget.

Take a look at the "Puzzling Scales" logic puzzle (figure below). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

[Figure: Sam Loyd's "Puzzling Scales." On the first scale, three blocks and a top balance twelve marbles ("Since the scales now balance"). On the second scale, the top alone balances one block and eight marbles ("And balance when arranged this way"). The question: "Then how many marbles will it require to balance with that top?"]

What steps did you take to solve this puzzle? You can read the solution at the end of this section.

Pitfalls to Problem Solving

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, "Insanity is doing the same thing over and over again and expecting a different result." Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set is a tendency to persist in approaching a problem in a way that has worked in the past but is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.

   Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be exploiting your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in the table below.

Were you able to determine how many marbles are needed to balance the scales in the figure above? You need nine. Were you able to solve the problems in the figures above? Here are the answers.

The first puzzle’s solution is a Sudoku grid of 16 squares (4 rows of 4 squares). Half of the numbers were supplied to start the puzzle and are colored blue, and half have been filled in as the puzzle’s solution and are colored red. The numbers in each row of the grid, left to right, are as follows. Row 1: blue 3, red 1, red 4, blue 2. Row 2: red 2, blue 4, blue 1, red 3. Row 3: red 1, blue 3, blue 2, red 4. Row 4: blue 4, red 2, red 3, blue 1.

The second puzzle consists of 9 dots arranged in 3 rows of 3 inside of a square. The solution, four straight lines made without lifting the pencil, is shown as a red line with arrows indicating the direction of movement. In order to solve the puzzle, the lines must extend beyond the borders of the box. The four connecting lines are drawn as follows. Line 1 begins at the top left dot, proceeds through the middle and right dots of the top row, and extends to the right beyond the border of the square. Line 2 extends from the end of line 1, through the right dot of the horizontally centered row, through the middle dot of the bottom row, and beyond the square’s border, ending in the space beneath the left dot of the bottom row. Line 3 extends from the end of line 2 upwards through the left dots of the bottom, middle, and top rows. Line 4 extends from the end of line 3 through the middle dot in the middle row and ends at the right dot of the bottom row.
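For the scales puzzle, the reasoning reduces to two linear equations and can be checked mechanically. Here is one way to do it in Python, treating one marble as the unit of weight; the variable names b and t are our own labels for the block and the top:

```python
# A short check of the scales puzzle. Let b = the weight of a block and
# t = the weight of the top, both measured in marbles:
#   3*b + t == 12   (scale 1: three blocks and a top balance 12 marbles)
#   t == b + 8      (scale 2: the top balances one block and 8 marbles)
# Substituting the second equation into the first gives 4*b + 8 == 12,
# so b == 1 and t == 9. A brute-force search over small weights agrees:
solutions = [(b, t)
             for b in range(1, 13)
             for t in range(1, 13)
             if 3 * b + t == 12 and t == b + 8]
print(solutions)  # -> [(1, 9)]: the top weighs as much as nine marbles
```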

   Many different strategies exist for solving problems. Typical strategies include trial and error, applying algorithms, and using heuristics. To solve a large, complicated problem, it often helps to break the problem into smaller steps that can be accomplished individually, leading to an overall solution. Roadblocks to problem solving include a mental set, functional fixedness, and various biases that can cloud decision-making skills.

References:

OpenStax Psychology text by Kathryn Dumper, William Jenkins, Arlene Lacombe, Marilyn Lovett, and Marion Perlmutter, licensed under CC BY v4.0. https://openstax.org/details/books/psychology

Review Questions:

1. A specific formula for solving a problem is called ________.

a. an algorithm

b. a heuristic

c. a mental set

d. trial and error

2. Solving the Tower of Hanoi problem tends to utilize a ________ strategy of problem solving.

a. divide and conquer

b. means-end analysis

d. experiment

3. A mental shortcut in the form of a general problem-solving framework is called ________.

4. Which type of bias involves becoming fixated on a single trait of a problem?

a. anchoring bias

b. confirmation bias

c. representative bias

d. availability bias

5. Which type of bias involves relying on a false stereotype to make a decision?

6. Wolfgang Köhler analyzed the behavior of chimpanzees by applying Gestalt principles to describe ________.

a. social adjustment

b. student loan payment options

c. emotional learning

d. insight learning

7. ________ is a type of mental set where you cannot perceive an object being used for something other than what it was designed for.

a. functional fixedness

c. working memory

Critical Thinking Questions:

1. What is functional fixedness and how can overcoming it help you solve problems?

2. How does an algorithm save you time and energy when solving a problem?

Personal Application Question:

1. Which type of bias do you recognize in your own decision-making processes? How has this bias affected how you’ve made decisions in the past, and how can you use your awareness of it to improve your decision-making skills in the future?

Key Terms:

anchoring bias

availability heuristic

confirmation bias

functional fixedness

hindsight bias

problem-solving strategy

representative bias

trial and error

working backwards

Answers to Exercises

algorithm:  problem-solving strategy characterized by a specific set of instructions

anchoring bias:  faulty heuristic in which you fixate on a single aspect of a problem to find a solution

availability heuristic:  faulty heuristic in which you make a decision based on information readily available to you

confirmation bias:  faulty heuristic in which you focus on information that confirms your beliefs

functional fixedness:  inability to see an object as useful for any purpose other than the one for which it was intended

heuristic:  mental shortcut that saves time when solving a problem

hindsight bias:  belief that the event just experienced was predictable, even though it really wasn’t

mental set:  continually using an old solution to a problem without results

problem-solving strategy:  method for solving problems

representative bias:  faulty heuristic in which you stereotype someone or something without a valid basis for your judgment

trial and error:  problem-solving strategy in which multiple solutions are attempted until the correct one is found

working backwards:  heuristic in which you begin to solve a problem by focusing on the end result


What Is the Function of Confirmation Bias?

Original research article (open access), published 20 April 2020; volume 87, pages 1351–1376 (2022).

Uwe Peters

Confirmation bias is one of the most widely discussed epistemically problematic cognitions, challenging reliable belief formation and the correction of inaccurate views. Given its problematic nature, it remains unclear why the bias evolved and is still with us today. To offer an explanation, several philosophers and scientists have argued that the bias is in fact adaptive. I critically discuss three recent proposals of this kind before developing a novel alternative, what I call the ‘reality-matching account’. According to the account, confirmation bias evolved because it helps us influence people and social structures so that they come to match our beliefs about them. This can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don’t become epistemically disconnected from social reality but can navigate it more easily. While that might not be the only evolved function of confirmation bias, it is an important one that has so far been neglected in the theorizing on the bias.


In recent years, confirmation bias (or ‘myside bias’), that is, people’s tendency to search for information that supports their beliefs and to ignore or distort data contradicting them (Nickerson 1998; Myers and DeWall 2015: 357), has frequently been discussed in the media, the sciences, and philosophy. The bias has, for example, been mentioned in debates on the spread of “fake news” (Stibel 2018), on the “replication crisis” in the sciences (Ball 2017; Lilienfeld 2017), the impact of cognitive diversity in philosophy (Peters 2019a; Peters et al. forthcoming; Draper and Nichols 2013; De Cruz and De Smedt 2016), the role of values in inquiry (Steel 2018; Peters 2018), and the evolution of human reasoning (Norman 2016; Mercier and Sperber 2017; Sterelny 2018; Dutilh Novaes 2018).

Confirmation bias is typically viewed as an epistemically pernicious tendency. For instance, Mercier and Sperber (2017: 215) maintain that the bias impedes the formation of well-founded beliefs, reduces people’s ability to correct their mistaken views, and makes them, when they reason on their own, “become overconfident” (Mercier 2016: 110). In the same vein, Steel (2018) holds that the bias involves an “epistemic distortion [that] consists of unjustifiably favoring supporting evidence for [one’s] belief, which can result in the belief becoming unreasonably confident or extreme” (897). Similarly, Peters (2018) writes that confirmation bias “leads to partial, and therewith for the individual less reliable, information processing” (15).

The bias is not only taken to be epistemically problematic, but also thought to be a “ubiquitous” (Nickerson 1998: 208), “built-in feature of the mind” (Haidt 2012: 105), found in both everyday and abstract reasoning tasks (Evans 1996), independently of subjects’ intelligence, cognitive ability, or motivation to avoid it (Stanovich et al. 2013; Lord et al. 1984). Given its seemingly dysfunctional character, the apparent pervasiveness of confirmation bias raises a puzzle: If the bias is indeed epistemically problematic, why is it still with us today? By definition, dysfunctional traits should be more prone to extinction than functional ones (Nickerson 1998). Might confirmation bias be or have been adaptive?

Some philosophers are optimistic, arguing that the bias has in fact significant advantages for the individual, groups, or both (Mercier and Sperber 2017; Norman 2016; Smart 2018; Peters 2018). Others are pessimistic. For instance, Dutilh Novaes (2018) maintains that confirmation bias makes subjects less able to anticipate other people’s viewpoints, and so, “given the importance of being able to appreciate one’s interlocutor’s perspective for social interaction”, is “best not seen as an adaptation” (520).

In the following, I discuss three recent proposals of the adaptationist kind, mention reservations about them, and develop a novel account of the evolution of confirmation bias that challenges a key assumption underlying current research on the bias, namely that the bias thwarts reliable belief formation and truth tracking. The account holds that while searching for information supporting one’s pre-existing beliefs and ignoring contradictory data is disadvantageous when what one takes to be reality is and stays different from what one believes it to be, it is beneficial when, as the result of one’s processing information in that way, that reality is changed so that it matches one’s beliefs. I call this process reality matching and contend that it frequently occurs when the beliefs at issue are about people and social structures (i.e., relationships between individuals, groups, and socio-political institutions). In these situations, confirmation bias is highly effective for us to be confident about our beliefs even when there is insufficient evidence or subjective motivation available to us to support them. This helps us influence and ‘mould’ people and social structures so that they fit our beliefs, which is an adaptive property of confirmation bias. It can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don’t become epistemically disconnected from social reality but can navigate it more easily.

I shall not argue that the adaptive function of confirmation bias that this reality-matching account highlights is the only evolved function of the bias. Rather, I propose that it is one important function that has so far been neglected in the theorizing on the bias.

In Sects. 1 and 2, I distinguish confirmation bias from related cognitions before briefly introducing some recent empirical evidence supporting the existence of the bias. In Sect. 3, I motivate the search for an evolutionary explanation of confirmation bias and critically discuss three recent proposals. In Sects. 4 and 5, I then develop and support the reality-matching account as an alternative.

1 Confirmation Bias and Friends

The term ‘confirmation bias’ has been used to refer to various distinct ways in which beliefs and expectations can influence the selection, retention, and evaluation of evidence (Klayman 1995; Nickerson 1998). Hahn and Harris (2014) offer a list of them including four types of cognitions: (1) hypothesis-determined information seeking and interpretation, (2) failures to pursue a falsificationist strategy in contexts of conditional reasoning, (3) a resistance to change a belief or opinion once formed, and (4) overconfidence or an illusion of validity of one’s own view.

Hahn and Harris note that while all of these cognitions have been labeled ‘confirmation bias’, (1)–(4) are also sometimes viewed as components of ‘motivated reasoning’ (or ‘wishful thinking’) (ibid: 45), i.e., information processing that leads people to arrive at the conclusions they favor (Kunda 1990). In fact, as Nickerson (1998: 176) notes, confirmation bias comes in two different flavors: “motivated” and “unmotivated” confirmation bias. And the operation of the former can be understood as motivated reasoning itself, because it too involves partial information processing to buttress a view that one wants to be true (ibid). Unmotivated confirmation bias, however, operates when people process data in one-sided, partial ways that support their predetermined views no matter whether they favor them. So confirmation bias is also importantly different from motivated reasoning, as it can take effect in the absence of a preferred view and might lead one to support even beliefs that one wants to be false (e.g., when one believes the catastrophic effects of climate change are unavoidable; Steel 2018).

Despite overlapping with motivated reasoning, confirmation bias can thus plausibly be (and typically is) construed as a distinctive cognition. It is thought to be a subject’s largely automatic and unconscious tendency to (i) seek support for her pre-existing, favored or not favored beliefs and (ii) ignore or distort information compromising them (Klayman 1995: 406; Nickerson 1998: 175; Myers and DeWall 2015: 357; Palminteri et al. 2017: 14). I here endorse this standard, functional concept of confirmation bias.

2 Is Confirmation Bias Real?

Many psychologists hold that the bias is a “pervasive” (Nickerson 1998: 175; Palminteri et al. 2017: 14), “ineradicable” feature of human reasoning (Haidt 2012: 105). Such strong claims are problematic, however. For there is evidence that, for instance, disrupting the fluency in information processing (Hernandez and Preston 2013) or priming subjects for distrust (Mayo et al. 2014) reduces the bias. Moreover, some researchers have recently re-examined the relevant studies and found that confirmation bias is in fact less common and the evidence of it less robust than often assumed (Mercier 2016; Whittlestone 2017). These researchers grant, however, the weaker claim that the bias is real and often, in some domains more than in others, operative in human cognition (Mercier 2016: 100, 108; Whittlestone 2017: 199, 207). I shall only rely on this modest view here. To motivate it a bit more, consider the following two studies.

Hall et al. (2012) gave their participants (N = 160) a questionnaire, asking them about their opinion on moral principles such as ‘Even if an action might harm the innocent, it can still be morally permissible to perform it’. After the subjects had indicated their view using a scale ranging from ‘completely disagree’ to ‘completely agree’, the experimenter performed a sleight of hand, inverting the meaning of some of the statements so that the question then read, for instance, ‘If an action might harm the innocent, then it is not morally permissible to perform it’. The answer scales, however, were not altered. So if a subject had agreed with the first claim, she then agreed with the opposite one. Surprisingly, 69% of the study participants failed to detect at least one of the changes. Moreover, they subsequently tended to justify positions they thought they held despite just having chosen the opposite. Presumably, subjects accepted that they favored a particular position, didn’t know the reasons, and so were now looking for support that would justify their position. They displayed a confirmation bias.

Using a similar experimental set-up, Trouche et al. (2016) found that subjects also tend to exhibit a selective ‘laziness’ in their critical thinking: they are more likely to avoid raising objections to their own positions than to other people’s. Trouche et al. first asked their test participants to produce arguments in response to a set of simple reasoning problems. Directly afterwards, they had them assess other subjects’ arguments concerning the same problems. About half of the participants didn’t notice that, by the experimenter’s intervention, in some trials they were in fact presented with their own arguments again; the arguments appeared to these participants as if they were someone else’s. Furthermore, more than half of the subjects who believed they were assessing someone else’s arguments now rejected those that were in fact their own, and were more likely to do so for invalid than for valid ones. This suggests that subjects are less critical of their own arguments than of other people’s, indicating that confirmation bias is real and perhaps often operative when we are considering our own claims and arguments.

3 Evolutionary Accounts of the Bias

Confirmation bias is typically taken to be epistemically problematic, as it leads to partial and therewith for the individual less reliable information processing and contributes to failures in, for instance, perspective-taking, with clear costs for social and other types of cognition (Mercier and Sperber 2017: 215; Steel 2018; Peters 2018; Dutilh Novaes 2018). Prima facie, the bias thus seems maladaptive.

But then why does it still exist? Granted, even if the bias isn’t an adaptation, we might still be able to explain why it is with us today. We might, for instance, argue that it is a “spandrel”, a by-product of the evolution of another trait that is an adaptation (Gould and Lewontin 1979). Or we may abandon the evolutionary approach to the bias altogether and hold that it emerged by chance.

However, evolutionary explanations of psychological traits are often fruitful. They can create new perspectives on these traits that may allow developing means to reduce the traits’ potential negative effects (Roberts et al. 2012; Johnson et al. 2013). Evolutionary explanations might also stimulate novel, testable predictions that researchers who aren’t evolutionarily minded would overlook (Ketelaar and Ellis 2000; De Bruine 2009). Moreover, they typically involve integrating diverse data from different disciplines (e.g., psychology, biology, anthropology, etc.), and thereby contribute to the development of a more complete understanding of the traits at play and human cognition in general (Tooby and Cosmides 2015). These points equally apply when it comes to considering the origin of confirmation bias. They provide good reasons for searching for an evolutionary account of the bias.

Different proposals can be discerned in the literature. I will discuss three recent ones, what I shall call (1) the argumentative-function account, (2) the group-cognition account, and (3) the intention-alignment account. I won’t offer conclusive arguments against them here. The aim is just to introduce some reservations about these proposals to motivate the exploration of an alternative.

3.1 The Argumentative-Function Account

Mercier and Sperber (2011, 2017) hold that human reasoning didn’t evolve for truth tracking but for making us better at convincing other people and evaluating their arguments so as to be convinced only when their points are compelling. In this context, when persuasion is paramount, the tendency to look for material supporting our preconceptions and to discount contradictory data allows us to accumulate argumentative ammunition, which strengthens our argumentative skill, Mercier and Sperber maintain. They suggest that confirmation bias thus evolved to “serve the goal of convincing others” (2011: 63).

Mercier and Sperber acknowledge that the bias also hinders us in anticipating objections, which should make it more difficult for us to develop strong, objection-resistant arguments (2017: 225f). But they add that it is much less cognitively demanding to react to objections than to anticipate them, because objections might depend on particular features of one’s opponents’ preferences or on information that only they have access to. It is thus more efficient to be ‘lazy’ in anticipating criticism and let the audience make the moves, Mercier and Sperber claim.

There is reason to be sceptical about their proposal, however. For instance, an anticipated objection is likely to be answered more convincingly than an immediate response from one’s audience. After all, “forewarned is forearmed”; it gives a tactical advantage (e.g., more time to develop a reply) (Sterelny 2018: 4). And even if it is granted that objections depend on private information, they also often derive from obvious interests and public knowledge, making an anticipation of them easy (ibid). Moreover, as Dutilh Novaes (2018: 519) notes, there is a risk of “looking daft” when producing poor arguments, say, due to laziness in scrutinizing one’s thoughts. Since individuals within their social groups depend on their reputation so as to find collaborators, anticipating one’s audience’s responses should be and have been more adaptive than having a confirmation bias (ibid). If human reasoning emerged for argumentative purposes, the existence of the bias remains puzzling.

3.2 The Group-Cognition Account

Even if confirmation bias is maladaptive for individuals, it might still be adaptive for groups. For instance, Smart (2018) and Peters (2018) hold that in groups with a sufficient degree of cognitive diversity at the outset of solving a particular problem, each individual’s confirmation bias might help the group as a whole conduct a more in-depth analysis of the problem space than otherwise. When each subject is biased towards a different particular proposal on how to solve the problem, the bias will push them to invest greater effort in defending their favored proposals and might, in the light of counterevidence, motivate them to consider rejecting auxiliary assumptions rather than the proposals themselves. This contributes to a thorough exploration of them that is less likely with less committed thinkers. Additionally, since individuals appear to have a particular strength in detecting flaws in others’ arguments (Trouche et al. 2016), open social criticism within the group should ensure that the group’s conclusions remain reliable even if some, or at times most, of its members are led astray by their confirmation bias (Smart 2018: 4190; Peters 2018: 20).

Mercier and Sperber (2011: 65) themselves already float the idea of such a social “division of cognitive labor”. They don’t yet take its group-level benefits to explain why confirmation bias evolved, however (Dutilh Novaes 2018: 518f). Smart (2018) and Peters (2018) also don’t introduce their views as accounts of the evolved function of the bias. But Dutilh Novaes (2018: 519) and Levy (2019: 317) gesture toward, and Smith and Wald (2019) make the case for, an evolutionary proposal along these lines, arguing that the bias was selected for making a group’s inquiry more thorough, effective, and reliable.

While I have sympathies with this proposal, several researchers have noted that the concept of ‘group selection’ is problematic (West et al. 2007; Pinker 2012). One of the issues is that since individuals reproduce faster than groups, a trait T that is an adaptation that is good for groups but bad for an individual’s fitness won’t spread, because the rate of proliferation of groups is undermined by the evolutionary disadvantage of T within groups (Pinker 2012). The point equally applies to the proposal that confirmation bias was selected for its group-level benefits.

Moreover, a group arguably only benefits from each individual’s confirmation bias if there is a diversity of viewpoints in the group and members express their views, as otherwise “group polarization” is likely to arise (Myers and Lamm 1976): arguments for shared positions will accumulate without being criticized, making the group’s average opinion more extreme and less reliable, which is maladaptive. Crucially, ancestral ‘hunter-gatherer’ groups are perhaps unlikely to have displayed a diversity of viewpoints. After all, their members traveled less, interacted less with strangers, and were less economically dependent on other groups (Simpson and Beckes 2010: 37). This should have homogenized them with respect to race, culture, and background (Schuck 2001: 1915). Even today groups often display such homogeneity, as calls for diversity in academia, companies, etc. indicate. These points provide reasons to doubt that ancestral groups provided the kind of conditions in which confirmation bias could have produced the benefits that the group-cognition account highlights rather than maladaptive effects tied to group polarization.

3.3 The Intention-Alignment Account

Turning to a third and here final extant proposal on the evolution of confirmation bias, Norman (2016) argues that human reasoning evolved for facilitating an “intention alignment” between individuals: in social interactions, reasons typically ‘overwrite’ nonaligned mental states (e.g., people’s divergent intentions or beliefs) with aligned ones by showing the need for changing them. Norman holds that human reasoning was selected for this purpose because it makes cooperation easier. He adds that, in this context, “confirmation bias would have facilitated intention alignment, for a tribe of hunter-gatherers prone to [the bias] would more easily form and maintain the kind of shared outlook needed for mutualistic collaboration. The mythologies and ideologies taught to the young would accrue confirming evidence and tend to stick, thereby cementing group solidarity” (2016: 700). Norman takes his view to be supported by the “fact that confirmation bias is especially pronounced when a group’s ideological preconceptions are at stake” (ibid).

However, the proposal seems at odds with the finding that the bias inclines subjects to ignore or misconstrue their opponents’ objections. In fueling one-sided information processing to support one’s own view, the bias makes people less able to anticipate and adequately respond to their interlocutor’s point of view (Dutilh Novaes 2018: 520). Due to that effect, the bias arguably makes an intention alignment with others (especially with one’s opponents) harder, not easier. Moreover, since our ancestral groups are (as noted above) likely to have been largely viewpoint-homogeneous, in supporting intention alignment in these social environments, confirmation bias would have again facilitated group polarization, which is prima facie evolutionarily disadvantageous.

All three proposals about the adaptive role of confirmation bias considered so far thus raise questions. While the points mentioned aren’t meant to be fatal to the proposals and might be answerable within their frameworks, they do provide a motivation to explore an alternative.

4 Towards an Alternative

The key idea that I want to develop is the following. Confirmation bias is typically taken to work against an individual’s truth tracking (Mercier and Sperber 2017: 215; Peters 2018: 15), and indeed searching for information supporting one’s beliefs and ignoring contradictory data is epistemically disadvantageous when what one takes to be reality is and stays different from what one believes it to be. However, reality doesn’t always remain unchanged when we form beliefs about it. Consider social beliefs, that is, beliefs about people (oneself, others, and groups) and social structures (i.e., relationships between individuals, groups, and socio-political institutions). I shall contend that a confirmation bias pertaining to social beliefs reinforces our confidence in these beliefs, therewith strengthening our tendency to behave in ways that cause changes in reality so that it corresponds to the beliefs, turning them (when they are initially inaccurate) into self-fulfilling prophecies (SFPs) (Merton 1948; Biggs 2009). Due to its role in helping us make social reality match our beliefs, confirmation bias is adaptive, or so I will argue. I first introduce examples of SFPs of social beliefs. Then I explore the relevance of these beliefs in our species, before making explicit the adaptive role of confirmation bias in facilitating SFPs.

4.1 Social Beliefs and SFPs

Social beliefs often lead to SFPs with beneficial outcomes. Here are four examples.

S (falsely) believes he is highly intelligent. His self-view motivates him to engage with intellectuals, read books, attend academic talks, etc. This makes him increasingly more intelligent, gradually confirming his initially inaccurate self-concept (for relevant empirical data, see Swann 2012).

Without a communicative intention, a baby boy looking at a kitten produces a certain noise: ‘ma-ma’. His mother is thrilled, believing (falsely) that he is beginning to talk and wants to call her. She responds accordingly, rushing to him, attending to him, and indicating excitement. This leads the boy to associate ‘ma-ma’ with the arrival and attention of his mother. And so he gradually begins using the sounds to call her, confirming her initially false belief about his communicative intention (for relevant empirical data, see Mameli 2001).

A father believes his adolescent daughter doesn’t regularly drink alcohol, but she does. He acts in line with his belief and expresses it in communication with other people. His daughter notices and likes his positive view of her, which motivates her to increasingly resist drinks, gradually fulfilling her father’s optimistic belief about her (for relevant empirical data, see Willard et al. 2008).

A teacher (falsely) believes that a student’s current academic performance is above average. She thus gives him challenging material, encourages him, and communicates high expectations. This leads the student to increase his efforts, which gradually results in above-average academic performance (for relevant evidence, see Madon et al. 1997).

SFPs of initially false positive trait ascriptions emerge in many other situations too. They also occurred, for instance, when adults ascribed to children traits such as being tidy (Miller et al. 1975), charitable (Jensen and Moore 1977), or cooperative (Grusec et al. 1978). Similarly, in adults, attributions of, for example, kindness (Murray et al. 1996), eco-friendliness (Cornelissen et al. 2007), military competence (Davidson and Eden 2000), athletic ability (Solomon 2016), and even physiological changes (Turnwald et al. 2018) have all had self-fulfilling effects. Moreover, these effects don’t necessarily take much time to unfold but can happen swiftly in a single interaction (e.g., in interview settings; Word et al. 1974) right after the ascription (Turnwald et al. 2018: 49).

SFPs are, however, neither pervasive nor all-powerful (Jussim 2012), and there are various conditions for them to occur (Snyder and Klein 2007). For instance, they tend to occur only when targets are able to change in accordance with the trait ascriptions, when the latter are believable rather than unrealistic (Alfano 2013: 91f), and when the ascriber holds more power than the ascribee (Copeland 1994: 264f). But comprehensive literature reviews confirm that SFPs are “real, reliable, and occasionally quite powerful” (Jussim 2017: 8; Willard and Madon 2016).

4.2 The Distribution of Social Beliefs and Role of Prosociality in Humans

Importantly, SFPs can be pernicious when the beliefs at the center of them capture negative social conceptions, for instance, stereotypes, anxious expectations, fear, or hostility (Darley and Gross 1983; Downey et al. 1998; Madon et al. 2018). In these cases, SFPs would be maladaptive. Given this, what do we know about the distribution of social beliefs, in general, and positive ones, in particular, in ancestral human groups?

Many researchers hold that our evolutionary success as a species relies on our being “ultra-social” and “ultra-cooperative” animals (e.g., Tomasello 2014: 187; Henrich 2016). Human sociality is “spectacularly elaborate, and of profound biological importance” because “our social groups are characterized by extensive cooperation and division of labour” (Sterelny 2007: 720). Since we live in an almost continuous flow of interactions with conspecifics, “solving problems of coordination with our fellows is [one of] our most pressing ecological tasks” (Zawidzki 2008: 198). A significant amount of our beliefs are thus likely to be social ones (Tomasello 2014: 190f).

Moreover, when it comes to oneself, to group or “tribe” members, and to collaborators, these beliefs often capture positive to overly optimistic ascriptions of traits (e.g., communicativeness, skills, etc.; Simpson and Beckes 2010). This is well established when it comes to one’s beliefs about oneself (about 70% of the general population has a positive self-conception; Talaifar and Swann 2017: 4) and one’s family members (Wenger and Fowers 2008). The assumption that the point also holds for ‘tribe’ members and collaborators, more generally, receives support from the “tribal-instincts hypothesis” (Richerson and Boyd 2001), which holds that humans tend to harbor “ethnocentric attitudes in favor of [their] own tribe along with its members, customs, values and norms”, as this facilitates social predictability and cooperation (Kelly 2013: 507). For instance, in the past as much as today, humans “talk differently about their in-groups than their out-groups, such that they describe the in-group and its members [but not out-groups] as having broadly positive traits” (Stangor 2011: 568). In subjects with such ‘tribal instincts’, judgments about out-group members might easily be negative. But within the groups of these subjects, among in-group members, overly optimistic, cooperation-enhancing conceptions of others should be and have been more dominant, particularly in “intergroup conflict, [which] is undeniably pervasive across human societies” (McDonald et al. 2012: 670). Indeed, such conflicts are known to fuel in-group “glorification” (Leidner et al. 2010; Golec De Zavala 2011).

Given these points, in ‘ultra-cooperative’ social environments in which ‘tribe’ members held predominantly positive social conceptions and expectations about in-group subjects, positive SFPs should have been overall more frequent and stronger than negative ones. Indeed, there is evidence that even today, positive SFPs in individual, dyadic interactions are more likely and pronounced than negative ones. For instance, focusing on mothers’ beliefs about their sons’ alcohol consumption, Willard et al. (2008) found that children “were more susceptible to their mothers’ positive than negative self-fulfilling effects” (499): “mothers’ false beliefs buffered their adolescents against increased alcohol use rather than putting them at greater risk” (Willard and Madon 2016: 133). Similarly, studies found that “teachers’ false beliefs raised students’ achievement more than they lowered it” (Willard and Madon 2016: 118): teacher overestimates “increase[d] achievement more than teacher underestimates tended to decrease achievement among students” (Madon et al. 1997: 806). Experiments with stigmatized subjects corroborate these results further (ibid), leading Jussim (2017) in his literature review to conclude that high teacher expectations help students “more than low expectations harm achievement” (8).

One common explanation of this asymmetry is that SFPs typically depend on whether the targets of the trait ascriptions involved accept the expectations imposed on them via the ascriptions (Snyder and Klein 2007). And since subjects tend to strive to think well of themselves (Talaifar and Swann 2017), they respond more to positive than negative expectations (Madon et al. 1997: 792). If we combine these considerations with the assumption that in ancestral groups of heavily interdependent subjects, positive social beliefs about in-group members (in-group favoritism) are likely to have been more prevalent than negative ones, then there is reason to hold that the SFPs of the social conceptions in the groups at issue were more often than not adaptive. With these points in mind, it is time to return to confirmation bias.

4.3 From SFPs to Confirmation Bias

Notice that SFPs depend on trait or mental-state ascriptions that are ‘ahead’ of their own truth: they are formed when an objective assessment of the available evidence doesn’t yet support their truth. Assuming direct doxastic voluntarism is false (Matheson and Vitz 2014), how can they nonetheless be formed and confidently maintained?

I suggest that confirmation bias plays an important role: it allows subjects to become and remain convinced about their social beliefs (e.g., trait ascriptions) when the available evidence doesn’t yet support their truth. This makes SFPs of these beliefs more likely than if the ascriber merely verbally attributed the traits without committing to the truth of the ascriptions, or believed in them but readily revised the beliefs. I shall argue that this is in fact adaptive not only when it comes to positive trait ascriptions, but also to negative ones. I will illustrate the point first with respect to positive trait ascriptions.

4.3.1 Motivated Confirmation Bias and Positive Trait Ascriptions

Suppose that you ascribe a positive property T to a subject A, who is your ward, but (unbeknownst to you) the available evidence doesn’t yet fully support that ascription. The more convinced you are about your view of A even in the light of counterevidence, the better you are at conveying your conviction to A because, generally, “people are more influenced [by others] when [these] others express judgments with high confidence than low confidence” (Kappes et al. 2020: 1; von Hippel and Trivers 2011). Additionally, the better you are at conveying to A your conviction that he has T, the more confident he himself will be that he has that trait (assuming he trusts you) (Sniezek and Van Swol 2001). Crucially, if A too is confident that he has T, he will be more likely to conform to the corresponding expectations than if he doesn’t believe the ascription, say, because he notices that you only say but don’t believe that he has T. Relatedly, the more convinced you are about your trait ascription to A, the clearer your signaling of the corresponding expectations to A in your behavior (Tormala 2016) and the higher the normative impetus on him, as a cooperative subject, to conform so as to avoid disrupting interactions with you.

Returning to confirmation bias, given what we know about the cognitive effect of the bias, the more affected you are by the bias, the stronger your belief in your trait ascriptions to A (Rabin and Schrag 1999), and so the lower the likelihood that you will reveal in your behavior a lack of conviction about them that could undermine SFPs. Thus, the more affected you are by the bias, the higher the likelihood of SFPs of the ascriptions, because conviction about the ascriptions plays a key facilitative role for SFPs. This is also experimentally supported. For several studies found that SFPs of trait ascriptions occurred only when ascribers were certain of the ascriptions, not when they were less confident (Swann and Ely 1984; Pelham and Swann 1994; Swann 2012: 30). If we add to these points that SFPs of trait ascriptions were, in developmental and educational contexts in ancestral tribal groups, more often beneficial for the targets than not, then there is a basis for holding that confirmation bias might in fact have been selected for sustaining SFPs.
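The link between the strength of the bias and the strength of conviction can be illustrated with a toy simulation loosely inspired by Rabin and Schrag’s (1999) formal model of confirmatory bias. This is only an illustrative sketch, not their formalism, and all parameter values are our own assumptions: an agent receives noisy binary signals about a claim, misreads signals that contradict its current leaning with probability q, and otherwise updates like a Bayesian.

```python
# Toy agent loosely inspired by Rabin and Schrag (1999): it updates on
# binary signals about a (true) claim, but with probability q misreads any
# signal contradicting its current leaning as supporting it, then updates
# as if the signal had been read correctly. Parameters are illustrative.
import math
import random

def final_confidence(q, accuracy=0.55, n_signals=30, rng=None):
    """Return one agent's final confidence that the claim is true."""
    rng = rng or random.Random()
    log_lr = math.log(accuracy / (1 - accuracy))  # evidential weight of one signal
    log_odds = 0.0                                # start at even odds
    for _ in range(n_signals):
        signal = rng.random() < accuracy          # True = "claim is true"
        leaning = log_odds >= 0
        if signal != leaning and rng.random() < q:
            signal = leaning                      # contrary evidence misread
        log_odds += log_lr if signal else -log_lr
    return 1 / (1 + math.exp(-log_odds))

rng = random.Random(7)
for q in (0.0, 0.3, 0.6):
    mean = sum(final_confidence(q, rng=rng) for _ in range(2000)) / 2000
    print(f"misreading probability q={q}: mean final confidence {mean:.2f}")
```

On this toy setup, agents with a stronger bias (higher q) end up markedly more confident on the same evidence stream than the unbiased agent, mirroring the claim that the bias lowers the likelihood of revealing a lack of conviction.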

Notice that the argument so far equally applies to motivated reasoning. This is to be expected because, as mentioned above, motivated confirmation bias is an instance of motivated reasoning (Nickerson 1998). To pertain specifically to confirmation bias, however, the evolutionary proposal that the bias was selected for facilitating SFPs of social conceptions also has to hold for unmotivated confirmation bias. Is this the case?

4.3.2 Unmotivated Confirmation Bias and Negative Trait Ascriptions

Notice that when we automatically reinforce any of our views, no matter whether we favor them, our preferences are neither required for nor able to undermine the reinforcement process and the SFPs promoted by it. This means that such a general tendency, i.e., a confirmation bias, can fulfil the function of facilitating SFPs more frequently than motivated cognitions, namely whenever the subject has acquired a social conception (e.g., as the result of upbringing, learning, or testimony). This is adaptive for at least three reasons.

First, suppose that as a parent, caretaker, or teacher you (unknowingly) wishfully believe that A, who is your ward, has a positive trait T. You tell another subject (B) that A has T, and, on your testimony, B subsequently believes this too. But suppose that unlike you, B has no preference as to whether A has T. Yet, as it happens, she still has a confirmation bias toward her beliefs. Just like you, B will now process information so that it strengthens her view about A. This increases her conviction in, and so the probability of an SFP of, the trait ascription to A, because now both you and B are more likely to act toward A in ways indicating ascription-related expectations. As a general tendency to support any of one’s beliefs rather than only favored ones, the bias thus enables a social ‘ripple’ effect in the process of making trait ascriptions match reality. Since this process is in ultra-social and ultra-cooperative groups more often than not adaptive (e.g., boosting the development of a positive trait in A), in facilitating a social extension of it, confirmation bias is adaptive too.

Secondly, in ancestral groups, many of the social conceptions (e.g., beliefs about social roles, gender norms, stereotypes, etc.) that subjects unreflectively acquired during their upbringing and socialization will have been geared toward preserving the group’s function and status quo and aligning individuals with them (Sterelny 2006: 148). Since it can operate independently of a subject’s preferences, a confirmation bias in each member of the group would have helped the group enlist each of its members for reproducing social identities, social structures, traits, and roles in the image of the group’s conceptions even when these individuals disfavored them. In sustaining SFPs of these conceptions, which might have included various stereotypes or ethnocentric, prejudicial attitudes that we today consider offensive negative trait ascriptions (e.g., gender or racist stereotypes) (Whitaker et al. 2018), confirmation bias would have been adaptive in the past. For, as Richerson and Boyd (2005: 121f) note too, in ancestral groups, selection pressure favored social conformity, predictability, and stability. That confirmation bias might have evolved for facilitating SFPs that serve the ‘tribal’ collective, possibly even against the preference, autonomy, and better judgment of the individual, is in line with recent research suggesting that many uniquely human features of cognition evolved through pressures selecting for the ability to conform to other people and to facilitate social projects (Henrich 2016). It is thought that these features may work against common ideals associated with self-reliance or “achieving basic personal autonomy, because the main purpose of [them] is to allow us to fluidly mesh with others, making us effective nodes in larger networks” (Kelly and Hoburg 2017: 10). I suggest that confirmation bias too was selected for making us effective ‘nodes’ in social networks by inclining us to create social reality that corresponds to these networks’ conceptions even when we dislike them or they are harmful to others (e.g., out-group members).

Thirdly, in helping us make social affairs match our beliefs about them even when we don’t favor them, confirmation bias also provides us with significant epistemic benefits in social cognition. Consider Jack and Jill. Both have just seen an agent A act ambiguously, and both have formed a first impression of A according to which A is acting the way he is because he has trait T. Suppose neither Jack nor Jill has any preference as to whether A has that trait, but they subsequently process information in the following two different ways. Jack does not have a confirmation bias but impartially assesses the evidence and swiftly revises his beliefs when encountering contradictory data. As it happens, A’s behavior soon does provide him with just such evidence, leading him to abandon his first impression of A and reopen the search for an explanation of A’s action. In contrast, Jill does have a confirmation bias with respect to her beliefs and interprets the available evidence so that it supports her beliefs. Jill too sees A act in a way that contradicts her first impression of him. But unlike Jack, she doesn’t abandon her view. Rather, she reinterprets A’s action so that it bolsters her view. Whose information processing might be more adaptive? For Jack, encountering data challenging his view removes certainty and initiates a new cycle of computations about A, which requires him to postpone a possible collaboration with A. For Jill, however, the new evidence strengthens her view, leading her to keep the issue of explaining A’s action settled and be ready to collaborate with him. Jack’s approach might still seem better for attaining an accurate view of A and predicting what he’ll do next. But suppose Jill confidently signals to A her view of him in her behavior. Since people have a general inclination to fulfil others’ expectations (especially positive ones) out of an interest in coordinating and getting along with them (Dardenne and Leyens 1995; Bacharach et al. 2007), when A notices Jill’s conviction that he displays T, he too is likely to conform, which provides Jill with a correct view of what he will do next. Jill’s biased processing is thus more adaptive than Jack’s approach: a confirmation bias provides her with certainty and simpler information processing that simultaneously facilitates accurate predictions (via contributing to SFPs). Generalizing from Jill, in everyday social interactions we all form swift first impressions of others without having any particular preference with respect to these impressions either way. Assuming that confirmation bias operates on them nonetheless, the bias will frequently be adaptive in the ways just mentioned.

4.3.3 Summing Up: The Reality-Matching Account

By helping subjects make social reality match their beliefs about it, no matter whether they favor these beliefs or the latter are sufficiently evidentially supported, confirmation bias is adaptive: when the bias targets positive social beliefs and trait ascriptions, it serves both the subject and the group by producing effects that (1) assist them in their development (to become, e.g., more communicative, cooperative, or knowledgeable) and (2) make social cognition more tractable (by increasing social conformity and predictability). To be sure, when it targets negative trait ascriptions (pernicious stereotypes, etc.), the bias can have ethically problematic SFP effects. But, as noted, especially in ancestral ‘tribal’ groups, it would perhaps still have contributed to social conformity, predictability, and sustaining the status quo, which would have been adaptive in these groups (Richerson and Boyd 2005), inter alia by facilitating social cognition. Taken together, these considerations provide a basis for holding that confirmation bias was selected for promoting SFPs. I shall call the proposal introduced in this section the reality-matching (RM) account of the function of confirmation bias.

5 Supporting the RM Account

Before offering empirical support for the RM account and highlighting its explanatory benefits, it is useful to disarm an objection: if confirmation bias was selected for its SFP-related effects, then people should not also display the bias with respect to beliefs that can’t produce SFPs (e.g., beliefs about physics, climate change, religion, etc.). But they do (Nickerson 1998).

5.1 From Social to Non-social Beliefs

In response to the objection just mentioned, two points should be noted. First, the RM account is compatible with the view that confirmation bias was also selected for adaptive effects related to non-social beliefs. It only claims that facilitating the alignment of social reality with social beliefs (i.e., reality matching) is one of the important adaptive features for which the bias was selected that has so far been neglected.

Second, it doesn’t follow from the fact that confirmation bias also affects beliefs that can’t initiate SFPs that it could not have been selected for affecting beliefs that can and do initiate SFPs. The literature offers many examples of biological features or cognitive traits that were selected for fulfilling a certain function despite rarely doing so or even having maladaptive effects (Millikan 1984; Haselton and Nettle 2006). Consider the “baby-face overgeneralization” bias (Zebrowitz and Montepare 2008). Studies suggest that people have a strong readiness to respond favorably to babies’ distinctive facial features. And this tendency is overgeneralized such that even adults are viewed more favorably and treated as likeable (but also as physically weak and naïve) when they display babyface features. While this overgeneralization tendency often leads to errors, it is thought to have evolved because failures to respond favorably to babies (i.e., false negatives) are evolutionarily more costly than overgeneralizing (i.e., false positives) (ibid).

Might our domain-general tendency to confirm our own beliefs be similarly less evolutionarily costly than not having such a general tendency? It is not implausible to assume so because, as noted, we are ultra-social and ultra-cooperative, and our beliefs about people’s social standing, knowledge, intentions, abilities, etc. are critical for our flourishing (Sterelny 2007: 720; Tomasello 2014: 190f; Henrich 2016). Importantly, these beliefs, unlike beliefs about the non-social world, are able to and frequently do initiate SFPs contributing to the outlined evolutionary benefits. This matters because if social beliefs are pervasive and SFPs of them significant for our flourishing, then a domain-general tendency to confirm any of our beliefs ensures that we don’t miss opportunities to align social reality with our conceptions and to reap the related developmental and epistemic benefits. Granted, this tendency overgeneralizes, which creates clear costs. But given the special role of social beliefs in our species and our dependence on social learning and social cognition, which are facilitated by SFPs, it is worth taking seriously the possibility that the benefits can often outweigh these costs.

While this thought doesn’t yet show that the RM account is correct, it does help disarm the above objection. For it explains why the fact that confirmation bias also affects beliefs that cannot initiate SFPs doesn’t disprove the view that the bias was selected for reality matching: the special role of social beliefs in our species (compared to other species) lends plausibility to the assumption that the costs of the bias’ overgeneralizing might be lower than the costs of its failing to generalize. I now turn to the positive support for the RM account.

5.2 Empirical Data

If, as the RM account proposes, confirmation bias was selected for facilitating the process of making reality match our beliefs, then the bias should be common and pronounced when (1) it comes to social beliefs, that is, beliefs (a) about oneself, (b) about other people, and (c) about social structures that the subject can determine, and when (2) social conditions are conducive to reality matching. While there are no systematic comparative studies on whether the bias is more frequent or stronger with respect to some beliefs but not others (e.g., social vs. non-social beliefs), there is related empirical research that does provide some support for these predictions.

Self-Related Beliefs

In a number of studies, Swann and colleagues (Swann 1983; Swann et al. 1992; for an overview, see Swann 2012) found that selective information processing characteristic of confirmation bias is “especially pronounced with regards to self-concepts” and so self-related beliefs (Müller-Pinzler et al. 2019: 9). Interestingly, and counterintuitively, the data show that “just as people with positive self-views preferentially seek positive evaluations, those with negative self-views preferentially seek negative evaluations” (Talaifar and Swann 2017: 3). For instance, those “who see themselves as likable seek out and embrace others who evaluate them positively, whereas those who see themselves as dislikeable seek out and embrace others who evaluate them negatively” (ibid). Much in line with the RM account, Swann (2012) notes that this confirmatory tendency “would have been advantageous” in “hunter-gatherer groups”: once “people used input from the social environment to form self-views, self-verification strivings would have stabilized their identities and behavior, which in turn would make each individual more predictable to other group members” (26).

Similarly, in a study in which subjects received feedback about aspects of their self that can be relatively easily changed (e.g., their ability to estimate the weights of animals), Müller-Pinzler et al. (2019) found that “prior beliefs about the self modulate self-related belief-formation” in that subjects updated their performance estimates “in line with a confirmation bias”: individuals with prior negative self-related beliefs (e.g., low self-esteem) showed increased biases towards factoring in negative (vs. positive) feedback, and, interestingly, this tendency was “modulated by the social context and only present when participants were exposed to a potentially judging audience” (ibid: 9–10). This coheres with the view that confirmation bias might serve the ‘collective’ to bring subjects into accordance with its social conceptions (positive or negative).

Other-related Beliefs

If confirmation bias was selected for sustaining social beliefs for the sake of reality matching, then the bias should also be particularly pronounced when it comes to beliefs about other people, especially in situations conducive to reality matching. For instance, powerful individuals have been found to be more likely than relatively powerless subjects to prompt subordinates to behaviorally confirm their social conceptions (Copeland 1994; Leyens et al. 1999). That is, interactions between powerful and powerless individuals are conducive to reality matching of the powerful individuals’ social beliefs. According to the RM account, powerful individuals should therefore display a stronger confirmation bias with respect to the relevant social beliefs. Goodwin et al. (2000) found just that: powerful people, in particular, tend to fail to take into account data that may contradict their social beliefs (capturing, e.g., stereotypes) about subordinates, and attend more closely to information that supports their expectations. Relative to the powerless, powerful people displayed a stronger confirmation bias in their thinking about subordinates (ibid: 239f).

Similarly, if confirmation bias serves to facilitate social interaction by contributing to a match between beliefs and social reality, then the bias should be increased with respect to trait attributions to other people in subjects who care about social interactions compared to other subjects. Dardenne and Leyens (1995) reasoned that when testing a hypothesis about the personality of another individual (e.g., their being introverted or extroverted), a preference for questions that match the hypothesis (e.g., that the subject is introverted) indicates social skill, conveying a feeling of being understood to the individual and contributing to a smooth conversation. Socially skilled people (‘high self-monitors’) should thus request ‘matching questions’. In an interview setting, for instance, when testing the introvert hypothesis, an interviewer could ask questions that a typical introvert would answer with ‘yes’ (e.g., ‘Do you like to stay alone?’), confirming the presence of the hypothesized trait (ibid). Dardenne and Leyens did find that matching questions pertaining to an introvert or an extrovert hypothesis were selected most often by high self-monitors: socially skilled subjects displayed a stronger confirmatory tendency than less socially skilled subjects (ibid).

Finally, there is also evidence that confirmation bias is more pronounced with respect to social beliefs than non-social beliefs. For instance, Marsh and Hanlon (2007) gave one group of behavioral ecologists a specific set of expectations with respect to sex differences in salamander behavior, while a second group was given the opposite set of expectations. In one experiment, subjects collected data on variable sets of live salamanders; in the other, observers collected data from identical videotaped trials. Across experiments and observed behaviors, the expectations of the observers biased their observations “only to a small or moderate degree”, Marsh and Hanlon note, concluding that these “results are largely optimistic with respect to confirmation bias in behavioral ecology” (2007: 1089). This weak confirmation bias with respect to beliefs about non-social matters contrasts with findings of a significant confirmation bias with respect to beliefs about people (Talaifar and Swann 2017; Goodwin et al. 2000; Marks and Fraley 2006; Darley and Gross 1983) and, as I shall argue now, social affairs whose reality the subject can determine.

Non-personal, Social Beliefs

One important kind of social belief is the political kind: beliefs that concern social states of affairs pertaining to politics. Political beliefs are especially interesting in the context of the RM account because they are very closely related to reality matching. This is not only because subjects can often directly influence political affairs via voting, running as a candidate, campaigning, etc. It is also because subjects who are highly confident about their political beliefs are more likely to be able to convince other people of them too (Kappes et al. 2020). And the more widespread a political conviction in a population, the higher the probability that the population will adopt political structures that shape reality in line with it (Jost et al. 2003; Ordabayeva and Fernandes 2018).

If, as the RM account proposes, confirmation bias was selected for sustaining social beliefs for the sake of reality matching, then the bias should be particularly strong when it comes to beliefs about political states of affairs. And indeed, Taber and Lodge (2006) found that “motivated [confirmation] biases come to the fore in the processing of political arguments” and, crucially, that subjects “with weak […] [political] attitudes show less [confirmation] bias in processing political arguments” (767). In fact, in psychology, attitude strength, especially in politically relevant domains of thinking, has long been and still is widely accepted to increase the kind of selective exposure constitutive of confirmation bias (Knobloch-Westerwick et al. 2015: 173). For instance, Brannon et al. (2007) found that stronger, more extreme political attitudes are correlated with higher ratings of interest in attitude-consistent versus attitude-discrepant political articles. Similarly, Knobloch-Westerwick et al. (2015) found that people online who attach high importance to particular political topics spent more time on attitude-consistent messages than users who attached low importance to the topics, and “[a]ttitude-consistent messages […] were preferred”, reinforcing the attitudes further (171). While this can contribute to political group polarization, such polarization also boosts the group-wide reality-matching endeavour and can thus be adaptive itself (Johnson and Fowler 2011: 317).

In short, then, while there are currently no systematic comparative studies on whether confirmation bias is more frequent or stronger with respect to social beliefs, related empirical studies do suggest that when it comes to (positive or negative) social beliefs about oneself, other people, and social states of affairs that the subject can determine (e.g., political beliefs), confirmation bias is both particularly common and pronounced. Empirical data thus corroborate some of the predictions of the RM account.

5.3 Explanatory Benefits

The theoretical and empirical considerations from the preceding sections offer support for the RM account. Before concluding, it is worth mentioning three further reasons for taking the account seriously. First, it has greater explanatory power than the three alternative views outlined above. Second, it is consistent with, and provides new contributions to, different areas of evolutionary theorizing on human cognition. Third, it casts new light on the epistemic character of confirmation bias. I’ll now support these three points.

For instance, the argumentative-function account holds that confirmation bias is adaptive in making us better arguers. This was problematic because the bias hinders us in anticipating people’s objections, which weakens our argumentative skill and increases the risk of us appearing incompetent in argumentative exchanges. The RM account avoids these problems: if confirmation bias was selected for reinforcing our preconceptions about people to promote SFPs, then, since in one’s own reasoning one only needs to justify one’s beliefs to oneself, the first point one finds acceptable will suffice. To convince others, one would perhaps need to anticipate objections. But if the bias functions primarily to boost one’s own conviction about particular beliefs so as to facilitate SFPs, then ‘laziness’ in critical thinking about one’s own positions (Trouche et al. 2016) shouldn’t be surprising.

Turning to the group-cognition account, the proposal was that confirmation bias is adaptive in, and was selected for, making group-level inquiries more thorough, reliable, and efficient. In response, I noted that the concept of ‘group selection’ is problematic when it comes to traits threatening an individual’s fitness (West et al. 2007; Pinker 2012), and that confirmation bias would arguably only lead to the group-level benefits at issue in groups with viewpoint diversity. Yet it is doubtful that ancestral groups met this condition. The RM account is preferable to the group-cognition view because it doesn’t rely on a notion of group selection but concerns primarily individual-level benefits, and it doesn’t tie the adaptive effects of the bias to conditions of viewpoint diversity. It proposes instead that the adaptive SFP-related effects of the bias increase individuals’ fitness (e.g., by facilitating their navigation of the social world, aligning them and others with their group’s conceptions, etc.) and can emerge whenever people hold beliefs about each other, interact, and fulfill social expectations. This condition is satisfied even in groups with viewpoint homogeneity.

The RM account also differs from the intention-alignment view, which holds that confirmation bias evolved for allowing us to synchronize intentions with others. One problem with this view was that the bias seems to hinder an intention alignment of individuals by weakening their perspective-taking capacity and inclining them to ignore or distort people’s objections. The RM account avoids this problem because it suggests that by disregarding objections or counterevidence to one’s beliefs, one can remain convinced about them, which helps align social reality (not only, e.g., people’s intentions) with them, producing the adaptive outcomes outlined above. The account can also explain why confirmation bias is particularly strong in groups in which shared ideologies are at stake (Taber and Lodge 2006; Gerken 2019). For subjects have a keen interest in reality corresponding to their ideological conceptions. Since the latter are shaping social reality via their impact on behavior and are more effective in doing so the more convinced people are about them (Kappes et al. 2020), it is to be expected that when it comes to ideological propositions in like-minded groups, confirmation bias is more pronounced. And, as noted, the resulting group polarization itself can then be adaptive in strengthening the reality-matching process.

Moving beyond extant work on the evolution of confirmation bias, the RM account also contributes to and raises new questions for other areas of research in different disciplines. It, for instance, yields predictions that psychologists can experimentally explore in comparative studies, such as the prediction that confirmation bias is more common and stronger when targeting social versus non-social beliefs, or when conditions are conducive to reality matching as opposed to when they are not. The account also adds a new perspective to research on SFPs and on how social conceptions interact with their targets (Hacking 1995; Snyder and Klein 2007; Jussim 2017). Relatedly, the RM account also contributes to recent philosophical work on folk psychology, i.e., our ability to ascribe mental states to agents to make sense of their behavior. In that work, some philosophers argue that folk psychology serves “mindshaping”, that is, the moulding of people’s behavior and minds so that they fit our conceptions, making people more predictable and cooperation with them easier (Mameli 2001; Zawidzki 2013; Peters 2019b). There are clear connections between the mindshaping view of folk psychology and the RM account, but also important differences. For instance, the RM account pertains to the function of confirmation bias, not folk psychology. Moreover, advocates of the mindshaping view have so far left unexplored the conditions for effective mindshaping via folk-psychological ascriptions and the possible role of confirmation bias in it. The RM account begins to fill this gap in the research and in doing so adds to work on the question of how epistemic (or ‘mindreading’) and non-epistemic (or ‘mindshaping’, e.g., motivational) processes are related in folk psychology (Peters 2019b: 545f; Westra 2020; Fernández-Castro and Martínez-Manrique 2020).

In addition to offering contributions to a range of different areas of research, the RM account also casts new light on the epistemic character of confirmation bias. Capturing the currently common view on the matter, Mercier (2016) writes that “piling up reasons that support our preconceived views is not the best way to correct them. […] [It] stop[s] people from fixing mistaken beliefs” (110). The RM account offers a different perspective, suggesting that when it is directed at beliefs about social affairs, confirmation bias does often help subjects correct their mistaken conceptions to the extent that it contributes to SFPs of them. Similarly, Dutilh Novaes (2018) holds that the bias involves or contributes to a failure of perspective taking and so, “given the importance of being able to appreciate one’s interlocutor’s perspective for social interaction”, is “best not seen as an adaptation” (520). The RM account, on the other hand, proposes that the bias often facilitates social understanding: in making us less sensitive to our interlocutor’s opposing perspective, it helps us remain confident about our social beliefs, which increases the probability of SFPs that in turn make people more predictable and mindreadable.

6 Conclusion

After outlining limitations of three recent proposals on the evolution of confirmation bias, I developed and supported a novel alternative, the reality-matching (RM) account, which holds that one of the adaptive features for which the bias evolved is that it helps us bring social reality into alignment with our beliefs. When the bias targets positive social beliefs, this serves both the subject and the group, assisting them in their development (to become, e.g., more communicative or knowledgeable) while also making their social cognition more effective and tractable. When it targets negative social beliefs, the bias, in promoting reality matching, might contribute to ethically problematic outcomes, but it can then still support social conformity and predictability, which were perhaps adaptive especially in ancestral tribal groups. While the socially constructive aspect of confirmation bias highlighted here may not be the main or only feature of the bias that led to its evolution, it is one that has so far been overlooked in the evolutionary theorizing on confirmation bias. If we attend to it, an account of the function of confirmation bias becomes available that coheres with data from across the psychological sciences, avoids many of the shortcomings of competitor views, and has explanatory benefits that help advance research on the function, nature, and epistemic character of the bias.

Notes

1. Mercier and Sperber (2017) and others prefer the term ‘myside bias’ to ‘confirmation bias’ because people don’t have a general tendency to confirm any hypothesis that comes to their mind but only ones that are on ‘their side’ of a debate. I shall here use the term ‘confirmation bias’ because it is more common and in any case typically understood in the way just mentioned.

2. Researchers working on folk psychology might be reminded of the ‘mindshaping’ view of folk psychology (Mameli 2001; Zawidzki 2013). I come back to this view and demarcate it from my account of confirmation bias in Sect. 5.

3. It might be proposed that when participants in the experiment seek reasons for their judgments, perhaps they take themselves already to have formed the judgments for good reasons and then wonder what these reasons might have been. Why would they seek reasons against a view that they have formed (by their own lights) for good reasons? However, we might equally well ask why they would take themselves to have formed a judgment for good reasons in the first place even though they don’t know any of them. If it is a general default tendency to assume that any view one holds rests on good reasons, then that would again suggest the presence of a confirmation bias. For a general tendency to think that one’s views rest on good reasons even when one doesn’t know them is a tendency to favor and confirm these views while resisting balanced scrutiny of their basis.

4. SFPs can also accumulate when they occur across different interactions, and in contemporary societies, the overall accumulative SFP effects of negative social beliefs capturing, e.g., stereotypes might be stronger than those of positive social beliefs in individual dyadic interactions (Madon et al. 2018). However, in ancestral, ‘tribal’ groups of highly interdependent subjects, even accumulative SFPs of, e.g., stereotypes would perhaps still have contributed to conformity and social stability. I shall return below to the possible SFP-related benefits of social conceptions that are nowadays highly negative, i.e., stereotypes, ethnocentrism, etc.

5. Relatedly, neuroscientific data show that a positive view of one’s own traits tends to correlate with a reduced activation of the right inferior prefrontal gyrus, an area of the brain processing self-related content, when the subject receives negative self-related information (Sharot et al. 2011). That is, optimists about themselves display a diminished sensitivity to negative information that is in tension with self-related trait optimism (ibid).

Alfano, M. (2013). Character as moral fiction. Cambridge: CUP.


Bacharach, M., Guerra, G., & Zizzo, D. J. (2007). The self-fulfilling property of trust: An experimental study. Theory and Decision, 63, 349–388.


Ball, P. (2017). The trouble with scientists. How one psychologist is tackling human biases in science. Nautilus. Retrieved May 2, 2019 from http://nautil.us/issue/54/the-unspoken/the-trouble-with-scientists-rp.

Biggs, M. (2009). Self-fulfilling prophecies. In P. Bearman & P. Hedstrom (Eds.), The Oxford handbook of analytical sociology (pp. 294–314). Oxford: OUP.


Brannon, L. A., Tagler, M. J., & Eagly, A. H. (2007). The moderating role of attitude strength in selective exposure to information. Journal of Experimental Social Psychology, 43, 611–617.

Copeland, J. (1994). Prophecies of power: Motivational implications of social power for behavioral confirmation. Journal of Personality and Social Psychology, 67, 264–277.

Cornelissen, G., Dewitte, S., & Warlop, L. (2007). Whatever people say I am that’s what I am: Social labeling as a social marketing tool. International Journal of Research in Marketing, 24 (4), 278–288.

Dardenne, B., & Leyens, J. (1995). Confirmation bias as a social skill. Personality and Social Psychology Bulletin, 21 (11), 1229–1239.

Darley, J. M., & Gross, P. H. (1983). A hypothesis-confirming bias in labeling effects. Journal of Personality and Social Psychology, 44, 20–33.

Davidson, O. B., & Eden, D. (2000). Remedial self-fulfilling prophecy: Two field experiments to prevent Golem effects among disadvantaged women. Journal of Applied Psychology, 85 (3), 386–398.

De Bruine, L. M. (2009). Beyond ‘just-so stories’: How evolutionary theories led to predictions that non-evolution-minded researchers would never dream of. Psychologist, 22 (11), 930–933.

De Cruz, H., & De Smedt, J. (2016). How do philosophers evaluate natural theological arguments? An experimental philosophical investigation. In H. De Cruz & R. Nichols (Eds.), Advances in religion, cognitive science, and experimental philosophy (pp. 119–142). New York: Bloomsbury.

Downey, G., Freitas, A. L., Michaelis, B., & Khouri, H. (1998). The self-fulfilling prophecy in close relationships: Rejection sensitivity and rejection by romantic partners. Journal of Personality and Social Psychology, 75, 545–560.

Draper, P., & Nichols, R. (2013). Diagnosing bias in philosophy of religion. The Monist, 96, 420–446.

Dutilh Novaes, C. (2018). The enduring enigma of reason. Mind and Language, 33, 513–524.

Evans, J. (1996). Deciding before you think: Relevance and reasoning in the selection task. British Journal of Psychology, 87, 223–240.

Fernández-Castro, V., & Martínez-Manrique, F. (2020). Shaping your own mind: The self-mindshaping view on metacognition. Phenomenology and the Cognitive Sciences. https://doi.org/10.1007/s11097-020-09658-2.

Gerken, M. (2019). Public scientific testimony in the scientific image. Studies in History and Philosophy of Science Part A. https://doi.org/10.1016/j.shpsa.2019.05.006.

Golec de Zavala, A. (2011). Collective narcissism and intergroup hostility: The dark side of ‘in-group love’. Social and Personality Psychology Compass, 5, 309–320.

Goodwin, S., Gubin, A., Fiske, S., & Yzerbyt, V. (2000). Power can bias impression formation: Stereotyping subordinates by default and by design. Group Processes and Intergroup Relations, 3, 227–256.

Gould, S. J., & Lewontin, R. C. (1979). The spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme. Proceedings of the Royal Society of London. Series B, 205 (1161), 581–598.

Grusec, J., Kuczynski, L., Rushton, J., & Simutis, Z. (1978). Modeling, direct instruction, and attributions: Effects on altruism. Developmental Psychology, 14, 51–57.

Hacking, I. (1995). The looping effects of human kinds. In D. Sperber, et al. (Eds.), Causal cognition (pp. 351–383). New York: Clarendon Press.

Hahn, U., & Harris, A. J. L. (2014). What does it mean to be biased: Motivated reasoning and rationality. In H. R. Brian (Ed.), Psychology of learning and motivation (pp. 41–102). New York: Academic Press.

Haidt, J. (2012). The righteous mind. New York: Pantheon.

Hall, L., Johansson, P., & Strandberg, T. (2012). Lifting the veil of morality: Choice blindness and attitude reversals on a self-transforming survey. PLoS ONE, 7 (9), e45457.

Haselton, M. G., & Nettle, D. (2006). The paranoid optimist: An integrative evolutionary model of cognitive biases. Personality and Social Psychology Review, 10, 47–66.

Henrich, J. (2016). The secret of our success. Princeton, NJ: Princeton University Press.

Hernandez, I., & Preston, J. L. (2013). Disfluency disrupts the confirmation bias. Journal of Experimental Social Psychology, 49 (1), 178–182.

Jensen, R. E., & Moore, S. G. (1977). The effect of attribute statements on cooperativeness and competitiveness in school-age boys. Child Development, 48 (1), 305–307.

Johnson, D. D. P., Blumstein, D. T., Fowler, J. H., & Haselton, M. G. (2013). The evolution of error: Error management, cognitive constraints, and adaptive decision-making biases. Trends in Ecology & Evolution, 28, 474–481.

Johnson, D. D. P., & Fowler, J. H. (2011). The evolution of overconfidence. Nature, 477, 317–320.

Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. (2003). Political conservatism as motivated social cognition. Psychological Bulletin, 129 (3), 339–375.

Jussim, L. (2012). Social perception and social reality. Oxford: OUP.

Jussim, L. (2017). Précis of social perception and social reality: Why accuracy dominates bias and self-fulfilling prophecy. Behavioral and Brain Sciences, 40, 1–20.

Kappes, A., Harvey, A. H., Lohrenz, T., et al. (2020). Confirmation bias in the utilization of others’ opinion strength. Nature Neuroscience, 23, 130–137.

Kelly, D. (2013). Moral disgust and the tribal instincts hypothesis. In K. Sterelny, R. Joyce, B. Calcott, & B. Fraser (Eds.), Cooperation and its evolution (pp. 503–524). Cambridge, MA: The MIT Press.

Kelly, D., & Hoburg, P. (2017). A tale of two processes: On Joseph Henrich’s the secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter. Philosophical Psychology, 30 (6), 832–848.

Ketelaar, T., & Ellis, B. J. (2000). Are evolutionary explanations unfalsifiable? Evolutionary psychology and the Lakatosian philosophy of science. Psychological Inquiry, 11 (1), 1–21.

Klayman, J. (1995). Varieties of confirmation bias. Psychology of Learning and Motivation, 32, 385–418.

Knobloch-Westerwick, S., Johnson, B. K., & Westerwick, A. (2015). Confirmation bias in online searches: Impacts of selective exposure before an election on political attitude strength and shifts. Journal of Computer-Mediated Communication, 20, 171–187.

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108 (3), 480–498.

Leidner, B., Castano, E., Zaiser, E., & Giner-Sorolla, R. (2010). Ingroup glorification, moral disengagement, and justice in the context of collective violence. Personality and Social Psychology Bulletin, 36 (8), 1115–1129.

Levy, N. (2019). Due deference to denialism: Explaining ordinary people’s rejection of established scientific findings. Synthese, 196 (1), 313–327.

Leyens, J., Dardenne, B., Yzerbyt, V., Scaillet, N., & Snyder, M. (1999). Confirmation and disconfirmation: Their social advantages. European Review of Social Psychology, 10 (1), 199–230.

Lilienfeld, S. O. (2017). Psychology’s replication crisis and the grant culture: Righting the ship. Perspectives on Psychological Science, 12 (4), 660–664.

Lord, C., Lepper, M., & Preston, E. (1984). Considering the opposite: A corrective strategy for social judgment. Journal of Personality and Social Psychology, 47, 1231–1243.

Madon, S., Jussim, L., & Eccles, J. (1997). In search of the powerful self-fulfilling prophecy. Journal of Personality and Social Psychology, 72, 791–809.

Madon, S., Jussim, L., Guyll, M., Nofziger, H., Salib, E. R., Willard, J., et al. (2018). The accumulation of stereotype-based self-fulfilling prophecies. Journal of Personality and Social Psychology, 115 (5), 825–844.

Mameli, M. (2001). Mindreading, mindshaping, and evolution. Biology and Philosophy, 16, 597–628.

Marks, M. J., & Fraley, R. C. (2006). Confirmation bias and the sexual double standard. Sex Roles: A Journal of Research, 54 (1–2), 19–26.

Marsh, D. M., & Hanlon, T. J. (2007). Seeing what we want to see: Confirmation bias in animal behavior research. Ethology, 113, 1089–1098.

Matheson, J., & Vitz, R. (Eds.). (2014). The ethics of belief: Individual and social. Oxford: OUP.

Mayo, R., Alfasi, D., & Schwarz, N. (2014). Distrust and the positive test heuristic: Dispositional and situated social distrust improves performance on the Wason Rule Discovery Task. Journal of Experimental Psychology: General, 143 (3), 985–990.

McDonald, M. M., Navarrete, C. D., & van Vugt, M. (2012). Evolution and the psychology of intergroup conflict: The male warrior hypothesis. Philosophical Transactions of the Royal Society, B, 367, 670–679.

Mercier, H. (2016). Confirmation (or myside) bias. In R. Pohl (Ed.), Cognitive illusions (pp. 99–114). London: Psychology Press.

Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34 (2), 57–111.

Mercier, H., & Sperber, D. (2017). The enigma of reason. Cambridge, MA: Harvard University Press.

Merton, R. (1948). The self-fulfilling prophecy. The Antioch Review, 8 (2), 193–210.

Miller, R., Brickman, P., & Bolen, D. (1975). Attribution versus persuasion as a means for modifying behavior. Journal of Personality and Social Psychology, 31 (3), 430–441.

Millikan, R. G. (1984). Language, thought, and other biological categories. Cambridge, MA: MIT Press.

Müller-Pinzler, L., Czekalla, N., Mayer, A. V., et al. (2019). Negativity-bias in forming beliefs about own abilities. Scientific Reports, 9, 14416. https://doi.org/10.1038/s41598-019-50821-w.

Murray, S. L., Holmes, J. G., & Griffin, D. W. (1996). The self-fulfilling nature of positive illusions in romantic relationships: Love is not blind, but prescient. Journal of Personality and Social Psychology, 71, 1155–1180.

Myers, D., & DeWall, N. (2015). Psychology. New York: Worth Publishers.

Myers, D. G., & Lamm, H. (1976). The group polarization phenomenon. Psychological Bulletin, 83, 602–627.

Nickerson, R. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175–220.

Norman, A. (2016). Why we reason: Intention–alignment and the genesis of human rationality. Biology and Philosophy, 31, 685–704.

Ordabayeva, N., & Fernandes, D. (2018). Better or different? How political ideology shapes preferences for differentiation in the social hierarchy. Journal of Consumer Research, 45 (2), 227–250.

Palminteri, S., Lefebvre, G., Kilford, E. J., & Blakemore, S. J. (2017). Confirmation bias in human reinforcement learning: Evidence from counterfactual feedback processing. PLoS Computational Biology, 13 (8), e1005684.

Pelham, B. W., & Swann, W. B. (1994). The juncture of intrapersonal and interpersonal knowledge: Self-certainty and interpersonal congruence. Personality and Social Psychology Bulletin, 20 (4), 349–357.

Peters, U. (2018). Illegitimate values, confirmation bias, and mandevillian cognition in science. British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy079.

Peters, U. (2019a). Implicit bias, ideological bias, and epistemic risks in philosophy. Mind & Language, 34, 393–419. https://doi.org/10.1111/mila.12194.

Peters, U. (2019b). The complementarity of mindshaping and mindreading. Phenomenology and the Cognitive Sciences , 18 (3), 533–549.

Peters, U., Honeycutt, N., De Block, A., & Jussim, L. (forthcoming). Ideological diversity, hostility, and discrimination in philosophy. Philosophical Psychology. Available online: https://philpapers.org/archive/PETIDH-2.pdf.

Pinker, S. (2012). The false allure of group selection. Retrieved July 20, 2012 from http://edge.org/conversation/the-false-allure-of-group-selection.

Rabin, M., & Schrag, J. L. (1999). First impressions matter: A model of confirmatory bias. Quarterly Journal of Economics, 114 (1), 37–82.

Richerson, P., & Boyd, R. (2001). The evolution of subjective commitment to groups: A tribal instincts hypothesis. In R. M. Nesse (Ed.), Evolution and the capacity for commitment (pp. 186–202). New York: Russell Sage Found.

Richerson, P., & Boyd, R. (2005). Not by genes alone: How culture transformed human evolution. Chicago: University of Chicago Press.

Roberts, S. C., van Vugt, M., & Dunbar, R. I. M. (2012). Evolutionary psychology in the modern world: Applications, perspectives, and strategies. Evolutionary Psychology, 10, 762–769.

Schuck, P. H. (2001). The perceived values of diversity, then and now. Cardozo Law Review, 22, 1915–1960.

Sharot, T., Korn, C. W., & Dolan, R. J. (2011). How unrealistic optimism is maintained in the face of reality. Nature Neuroscience, 14, 1475–1479.

Simpson, J. A., & Beckes, L. (2010). Evolutionary perspectives on prosocial behavior. In M. Mikulincer & P. Shaver (Eds.), Prosocial motives, emotions, and behavior: The better angels of our nature (pp. 35–53). Washington, DC: American Psychological Association.


Smart, P. (2018). Mandevillian intelligence. Synthese, 195, 4169–4200.

Smith, J. J., & Wald, B. (2019). Collectivized intellectualism. Res Philosophica, 96 (2), 199–227.

Sniezek, J. A., & Van Swol, L. M. (2001). Trust, confidence, and expertise in a judge–advisor system. Organizational Behavior and Human Decision Processes, 84, 288–307.

Snyder, M., & Klein, O. (2007). Construing and constructing others: On the reality and the generality of the behavioral confirmation scenario. In P. Hauf & F. Forsterling (Eds.), Making minds (pp. 47–60). Amsterdam/Philadelphia: John Benjamins.

Solomon, G. B. (2016). Improving performance by means of action–cognition coupling in athletes and coaches. In M. Raab, B. Lobinger, S. Hoffman, A. Pizzera, & S. Laborde (Eds.), Performance psychology: Perception, action, cognition, and emotion (pp. 88–101). London, England: Elsevier Academic Press.

Stangor, C. (2011). Principles of social psychology. Victoria, BC: BCcampus.

Stanovich, K., West, R., & Toplak, M. (2013). Myside bias, rational thinking, and intelligence. Current Directions in Psychological Science, 22, 259–264.

Steel, D. (2018). Wishful thinking and values in science: Bias and beliefs about injustice. Philosophy of Science. https://doi.org/10.1086/699714.

Sterelny, K. (2006). Memes revisited. British Journal for the Philosophy of Science, 57, 145–165.

Sterelny, K. (2007). Social intelligence, human intelligence and niche construction. Philosophical Transactions of the Royal Society B, 362, 719–730.

Sterelny, K. (2018). Why reason? Hugo Mercier’s and Dan Sperber’s the enigma of reason: A new theory of human understanding. Mind and Language, 33 (5), 502–512.

Stibel, J. (2018). Fake news: How our brains lead us into echo chambers that promote racism and sexism. USA Today. Retrieved October 8, 2018 from https://eu.usatoday.com/story/money/columnist/2018/05/15/fake-news-social-media-confirmation-bias-echo-chambers/533857002/.

Swann, W. B. (1983). Self-verification: Bringing social reality into harmony with the self. In J. Suls & A. G. Greenwald (Eds.), Social psychological perspectives on the self (Vol. 2, pp. 33–66). London: Erlbaum.

Swann, W. B., Jr. (2012). Self-verification theory. In P. A. M. Van Lange, A. W. Kruglanski, & E. T. Higgins (Eds.), Handbook of theories of social psychology (pp. 23–42). Beverly Hills, CA: Sage Publications Ltd.

Swann, W., & Ely, R. (1984). A battle of wills: Self-verification versus behavioral confirmation. Journal of Personality and Social Psychology, 46, 1287–1302.

Swann, W. B., Jr., Stein-Seroussi, A., & Giesler, B. (1992). Why people self-verify. Journal of Personality and Social Psychology, 62, 392–406.

Taber, C., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50, 755–769.

Talaifar, S., & Swann, W. B. (2017). Self-verification theory. In L. Goossens, M. Maes, S. Danneel, J. Vanhalst, & S. Nelemans (Eds.), Encyclopedia of personality and individual differences (pp. 1–9). Berlin: Springer.

Tomasello, M. (2014). The ultra-social animal. European Journal of Social Psychology, 44, 187–194.

Tooby, J., & Cosmides, L. (2015). The theoretical foundations of evolutionary psychology. In D. M. Buss (Ed.), The handbook of evolutionary psychology (pp. 3–87). Hoboken, NJ: Wiley.

Tormala, Z. L. (2016). The role of certainty (and uncertainty) in attitudes and persuasion. Current Opinion in Psychology, 10, 6–11.

Trouche, E., et al. (2016). The selective laziness of reasoning. Cognitive Science, 40, 2122–2136.

Turnwald, B., et al. (2018). Learning one’s genetic risk changes physiology independent of actual genetic risk. Nature Human Behaviour. https://doi.org/10.1038/s41562-018-0483-4.

von Hippel, W., & Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34 (1), 1–16.

Wenger, A., & Fowers, B. J. (2008). Positive illusions in parenting: Every child is above average. Journal of Applied Social Psychology, 38 (3), 611–634.

West, S. A., Griffin, A. S., & Gardner, A. (2007). Social semantics: How useful has group selection been? Journal of Evolutionary Biology, 21, 374–385.

Westra, E. (2020). Folk personality psychology: Mindreading and mindshaping in trait attribution. Synthese. https://doi.org/10.1007/s11229-020-02566-7.

Whitaker, R. M., Colombo, G. B., & Rand, D. G. (2018). Indirect reciprocity and the evolution of prejudicial groups. Scientific Reports, 8(1), 13247. https://doi.org/10.1038/s41598-018-31363-z.

Whittlestone, J. (2017). The importance of making assumptions: Why confirmation is not necessarily a bias. Ph.D. Thesis. Coventry: University of Warwick.

Willard, J., & Madon, S. (2016). Understanding the connections between self-fulfilling prophecies and social problems. In S. Trusz & P. Przemysław Bąbel (Eds.), Interpersonal and intrapersonal expectancies (pp. 117–125). London: Routledge.

Willard, J., Madon, S., Guyll, M., Spoth, R., & Jussim, L. (2008). Self-efficacy as a moderator of negative and positive self-fulfilling prophecy effects: Mothers’ beliefs and children’s alcohol use. European Journal of Social Psychology, 38, 499–520.

Word, C. O., Zanna, M. P., & Cooper, J. (1974). The nonverbal mediation of self-fulfilling prophecies in interracial interaction. Journal of Experimental Social Psychology, 10, 109–120.

Zawidzki, T. (2008). The function of folk psychology: Mind reading or mind shaping? Philosophical Explorations, 11 (3), 193–210.

Zawidzki, T. (2013). Mindshaping: A new framework for understanding human social cognition. Cambridge: MIT Press.

Zebrowitz, L. A., & Montepare, J. M. (2008). Social psychological face perception: Why appearance matters. Social and Personality Psychology Compass, 2, 1497–1517.


Acknowledgements

Many thanks to Andreas De Block, Mikkel Gerken, and Alex Krauss for comments on earlier drafts. The research for this paper was partly funded by the Danmarks Frie Forskningsfond Grant no: 8018-00053B allocated to Mikkel Gerken.

Author information

Authors and Affiliations

Department of Philosophy, University of Southern Denmark, Odense, Denmark

Department of Psychology, King’s College London, De Crespigny Park, Camberwell, London, SE5 8AB, UK


Corresponding author

Correspondence to Uwe Peters.



About this article

Peters, U. (2022). What is the function of confirmation bias? Erkenntnis, 87, 1351–1376. https://doi.org/10.1007/s10670-020-00252-1


Received: 07 May 2019

Accepted: 27 March 2020

Published: 20 April 2020

Issue Date: June 2022




The Confirmation Bias: Why People See What They Want to See

The Confirmation Bias

The confirmation bias is a cognitive bias that causes people to search for, favor, interpret, and recall information in a way that confirms their preexisting beliefs. For example, if someone is presented with a lot of information on a certain topic, the confirmation bias can cause them to only remember the bits of information that confirm what they already thought.

The confirmation bias influences people’s judgment and decision-making in many areas of life, so it’s important to understand it. As such, in the following article you will first learn more about the confirmation bias, and then see how you can reduce its influence, both in other people’s thought process as well as in your own.

How the confirmation bias affects people

The confirmation bias promotes various problematic patterns of thinking, such as people’s tendency to ignore information that contradicts their beliefs. It does so through several types of biased cognitive processes:

  • Biased search for information. This means that the confirmation bias causes people to search for information that confirms their preexisting beliefs, and to avoid information that contradicts them.
  • Biased favoring of information. This means that the confirmation bias causes people to give more weight to information that supports their beliefs, and less weight to information that contradicts them (a pattern illustrated in the sketch after this list).
  • Biased interpretation of information. This means that the confirmation bias causes people to interpret information in a way that confirms their beliefs, even if the information could be interpreted in a way that contradicts them.
  • Biased recall of information. This means that the confirmation bias causes people to remember information that supports their beliefs and to forget information that contradicts them, or to remember supporting information as having been more supporting than it really was, or to incorrectly remember contradictory information as having supported their beliefs.

Note: one closely related phenomenon is cherry picking. It involves focusing only on evidence that supports one’s stance, while ignoring evidence that contradicts it. People often engage in cherry picking due to the confirmation bias, though a person can also cherry-pick deliberately, fully aware of what they’re doing and unaffected by the bias.
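To see how the “biased favoring” pattern alone can entrench a belief, consider the following toy sketch, loosely in the spirit of formal models of confirmatory bias (e.g., Rabin and Schrag 1999, listed in the reference section above). The weights and numbers are my own illustrative assumptions, not figures from any study; the point is that an agent who discounts belief-inconsistent evidence drifts toward certainty even when the evidence stream is perfectly balanced.

```python
# Illustrative sketch: asymmetric weighting of evidence. All parameters are
# arbitrary assumptions chosen only to make the drift visible.

def biased_update(belief, supports, w_confirm=1.0, w_disconfirm=0.4):
    """Nudge belief (0..1) by one piece of evidence, discounting disconfirmation."""
    step = 0.05 * (w_confirm if supports else w_disconfirm)
    belief += step if supports else -step
    return min(1.0, max(0.0, belief))

belief = 0.6  # mild initial lean toward the hypothesis
# A perfectly balanced stream: supporting and contradicting items alternate.
for i in range(40):
    belief = biased_update(belief, supports=(i % 2 == 0))

print(round(belief, 2))  # 0.98: near-certainty despite 50/50 evidence
```

Because each supporting item moves the belief more than each contradicting item moves it back, the net drift is always toward the existing belief, regardless of what an even-handed reading of the evidence would support.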

Examples of the confirmation bias

One example of the confirmation bias is someone who searches online to supposedly check whether a belief that they have is correct, but ignores or dismisses all the sources that state that it’s wrong. Similarly, another example of the confirmation bias is someone who forms an initial impression of a person, and then interprets everything that this person does in a way that confirms this initial impression.

Furthermore, other examples of the confirmation bias appear in various domains. For instance, the confirmation bias can affect:

  • How people view political information. For example, people generally prefer to spend more time looking at information that supports their political stance and less time looking at information that contradicts it.
  • How people assess pseudoscientific beliefs. For example, people who believe in pseudoscientific theories tend to ignore information that disproves those theories.
  • How people invest money. For example, investors give more weight to information that confirms their preexisting beliefs regarding the value of certain stocks.
  • How scientists conduct research. For example, scientists often display the confirmation bias when they selectively analyze and interpret data in a way that confirms their preferred hypothesis.
  • How medical professionals diagnose patients. For example, doctors often search for new information in a selective manner that will allow them to confirm their initial diagnosis of a patient, while ignoring signs that this diagnosis could be wrong.

In addition, an example of how the confirmation bias can influence people appears in the following quote, which references the prevalent misinterpretation of evidence during witch trials in the 17th century:

“When men wish to construct or support a theory, how they torture facts into their service!” — From “Extraordinary Popular Delusions and the Madness of Crowds”

Similarly, another example of how people display the confirmation bias is the following:

“… If the new information is consonant with our beliefs, we think it is well founded and useful: ‘Just what I always said!’ But if the new information is dissonant, then we consider it biased or foolish: ‘What a dumb argument!’ So powerful is the need for consonance that when people are forced to look at disconfirming evidence, they will find a way to criticize, distort, or dismiss it so that they can maintain or even strengthen their existing belief.” — From “Mistakes Were Made (but Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts”

Overall, examples of the confirmation bias appear in various domains. These examples illustrate the various different ways in which it can affect people, and show that this bias is highly prevalent, including among trained professionals who are often assumed to assess information in a purely rational manner.

Psychology and causes of the confirmation bias

The confirmation bias can be attributed to two main cognitive mechanisms:

  • Challenge avoidance, which is the desire to avoid finding out that you’re wrong.
  • Reinforcement seeking, which is the desire to find out that you’re right.

These forms of motivated reasoning can be attributed to people’s underlying desire to minimize their cognitive dissonance, which is the psychological distress that occurs when people hold two or more contradictory beliefs simultaneously. Challenge avoidance can reduce dissonance by reducing engagement with information that contradicts preexisting beliefs. Conversely, reinforcement seeking can reduce dissonance by increasing engagement with information that affirms people’s sense of correctness, which can also buffer against contradictory information that they encounter later.

Furthermore, the confirmation bias also occurs due to flaws in the way we test hypotheses. For example, when people try to find an explanation for a certain phenomenon, they tend to focus on only one hypothesis at a time and disregard alternative hypotheses, even in cases where they’re not emotionally incentivized to confirm their initial hypothesis. This can cause people to simply try to prove that their initial hypothesis is true, instead of trying to actually check whether it’s true, which leads them to ignore the possibility that the information they encounter could disprove this initial hypothesis or support alternative hypotheses.

An example of this is a doctor who forms an initial diagnosis of a patient, and who then focuses solely on trying to prove that this diagnosis is right, instead of trying to actively determine whether alternative diagnoses could make more sense.
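To make this flawed testing strategy concrete, here is a minimal sketch of a Wason-style rule-discovery task in Python. The true rule, the subject’s hypothesis, and the numbers are illustrative assumptions of mine, not details from the studies discussed here; the point is only that tests designed to confirm a hypothesis can never falsify it.

```python
# Wason-style "2-4-6" setup (illustrative): the experimenter's true rule is
# broader than the subject's hypothesis, so only a disconfirming test reveals it.
import random

def true_rule(triple):
    """Experimenter's actual rule: any strictly ascending triple."""
    a, b, c = triple
    return a < b < c

def subject_hypothesis(triple):
    """Subject's narrower guess: numbers ascending in steps of two."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

random.seed(0)

# Positive-test strategy: only try triples the hypothesis already predicts.
for n in random.sample(range(1, 50), 5):
    t = (n, n + 2, n + 4)
    # Every such test gets a "yes", so the wrong hypothesis is never falsified.
    assert subject_hypothesis(t) and true_rule(t)

# One test the hypothesis predicts should fail exposes the mismatch.
probe = (1, 2, 10)
print(subject_hypothesis(probe))  # False: subject expects a "no"
print(true_rule(probe))           # True: experimenter says "yes"
```

Every confirming test returns “yes” and so never distinguishes the narrow hypothesis from the broader rule; a single test chosen to fail under the hypothesis does the diagnostic work.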

This explains why people can experience an unmotivated confirmation bias in situations where they have no emotional reason to favor a specific hypothesis over others. This contrasts with a motivated confirmation bias, which occurs when the person displaying the bias is motivated by some emotional consideration.

Finally, the confirmation bias can also be attributed to a number of additional causes. For example, in the case of the motivated confirmation bias, an additional reason why people experience the bias is that the brain sometimes suppresses neural activity in areas associated with emotional regulation and emotionally neutral reasoning. This causes people to process information based on how their emotions guide them, rather than on how their logic would.

Overall, people experience the confirmation bias primarily because they want to minimize psychological distress, and specifically due to challenge avoidance, which is the desire to avoid finding out that they’re wrong, and reinforcement seeking, which is the desire to find out that they’re right. Furthermore, people can also experience the confirmation bias due to other causes, such as the flawed way they test hypotheses, as in the case where people fixate on confirming a single hypothesis while ignoring alternatives.

Note: Some of the behaviors that people engage in due to the confirmation bias can be viewed as a form of selective exposure. This involves people choosing to engage only with information that supports their preexisting beliefs and decisions, while ignoring information that contradicts them.

How to reduce the confirmation bias

Reducing other people’s confirmation bias

There are various things that you can do to reduce the influence that the confirmation bias has on people. These methods generally revolve around trying to counteract the cognitive mechanisms that promote the confirmation bias in the first place.

As such, these methods generally involve trying to get people to overcome their tendency to focus on and prefer confirmatory information, or their tendency to avoid and reject challenging information, while also encouraging them to conduct a valid reasoning process.

Specifically, the following are some of the most notable techniques that you can use to reduce the confirmation bias in people:

  • Explain what the confirmation bias is, why we experience it, how it affects us, and why it can be a problem, potentially using relevant examples. Understanding this phenomenon better can motivate people to avoid it, and can help them deal with it more effectively, by helping them recognize when and how it affects them. Note that in some cases, it may be beneficial to point out the exact way in which a person is displaying the confirmation bias.
  • Make it so that the goal is to find the right answer, rather than defend an existing belief. For example, consider a situation where you’re discussing a controversial topic with someone, and you know for certain that they’re wrong. If you argue hard against them, that might cause them to get defensive and feel that they must stick by their initial stance regardless of whatever evidence you show them. Conversely, if you state that you’re just trying to figure out what the right answer is, and discuss the topic with them in a friendly manner, that can make them more open to considering the challenging evidence that you present. In this case, your goal is to frame your debate as a journey that you go on together in search of the truth, rather than a battle where you fight each other to prove the other wrong. The key here is that, when it comes to a joint journey, both of you can be “winners”, while in the case of a battle, only one of you can, and the other person will often experience the confirmation bias to avoid feeling that they were the “loser”.
  • Minimize the unpleasantness and issues associated with finding out that they’re wrong. In general, the more unpleasant and problematic being wrong is, the more a person will use the confirmation bias to stick by their initial stance. There are various ways in which you can make the experience of being wrong less unpleasant or problematic, such as by emphasizing the value of learning new things, and by avoiding mocking people for having held incorrect beliefs.
  • Encourage people to avoid letting their emotional response dictate their actions. Specifically, explain that while it’s natural to want to avoid challenges and seek reinforcement, letting these feelings dictate how you process information and make decisions is problematic. This means, for example, that if you feel that you want to avoid a certain piece of information, because it might show that you’re wrong, then you should realize this, but choose to see that information anyway.
  • Encourage people to give information sufficient consideration. When it comes to avoiding the confirmation bias, it often helps to engage with information in a deep and meaningful way, since shallow engagement can lead people to rely on biased intuitions, rather than on proper analytical reasoning. There are various things that people can do to ensure that they give information sufficient consideration, such as spending a substantial amount of time considering it, or interacting with it in an environment that has no distractions.
  • Encourage people to avoid forming a hypothesis too early. Once people have a specific hypothesis in mind, they often try to confirm it, instead of trying to formulate and test other possible hypotheses. As such, it can often help to encourage people to process as much information as possible before forming their initial hypothesis.
  • Ask people to explain their reasoning. For example, you can ask them to clearly state what their stance is, and what evidence has caused them to support that stance. This can help people identify potential issues in their reasoning, such as that their stance is unsupported.
  • Ask people to think about various reasons why their preferred hypothesis might be wrong. This can help them test their preferred hypothesis in ways that they might not otherwise, and can make them more likely to accept and internalize challenging information.
  • Ask people to think about alternative hypotheses, and why those hypotheses might be right. Similarly to asking people to think about reasons why their preferred hypothesis might be wrong, this can encourage people to engage in a proper reasoning process, which they might not do otherwise. Note that, when doing this, it is generally better to focus on a small number of alternative hypotheses, rather than a large number of them.

Different techniques will be more effective for reducing the confirmation bias in different situations, and it is generally most effective to use a combination of techniques, while taking into account relevant situational and personal factors.

Furthermore, in addition to the above techniques, which are aimed at reducing the confirmation bias in particular, there are additional debiasing techniques that you can use to help people overcome their confirmation bias. This includes, for example, getting people to slow down their reasoning process, creating favorable conditions for optimal decision making, and standardizing the decision-making process.

Overall, to reduce the confirmation bias in others, you can use various techniques that revolve around trying to counteract the cognitive mechanisms that promote the confirmation bias in the first place. This includes, for example, making people aware of this bias, making discussions be about finding the right answer instead of defending an existing belief, minimizing the unpleasantness associated with being wrong, encouraging people to give information sufficient consideration, and asking people to think about why their preferred hypothesis might be wrong or why competing hypotheses could be right.

Reducing your own confirmation bias

To mitigate the confirmation bias in yourself, you can use similar techniques to those that you would use to mitigate it in others. Specifically, you can do the following:

  • Identify when and how you’re likely to experience the bias.
  • Maintain awareness of the bias in relevant situations, and even actively ask yourself whether you’re experiencing it.
  • Figure out what kind of negative outcomes the bias can cause for you.
  • Focus on trying to find the right answer, rather than on proving that your initial belief was right.
  • Avoid feeling bad if you find out that you’re wrong; for example, try to focus on having learned something new that you can use in the future.
  • Don’t let your emotions dictate how you process information, particularly when it comes to seeking confirmation or avoiding challenges to your beliefs.
  • Dedicate sufficient time and mental effort when processing relevant information.
  • Avoid forming a hypothesis too early, before you’ve had a chance to analyze sufficient information.
  • Clearly outline your reasoning, for example by identifying your stance and the evidence that you’re basing it on.
  • Think of reasons why your preferred hypothesis might be wrong.
  • Come up with alternative hypotheses, as well as reasons why those hypotheses might be right.

An added benefit of many of these techniques is that they can help you understand opposing views better, which is important when it comes to explaining your own stance and communicating with others on the topic.

In addition, you can also use general debiasing techniques, such as standardizing your decision-making process and creating favorable conditions for assessing information.

Furthermore, keep in mind that, as is the case with reducing the confirmation bias in others, some techniques will be more effective than others, both in general and in particular circumstances. You should take this into account, and try to find the approach that works best for you in any given situation.

Finally, note that in some ways, debiasing yourself can be easier than debiasing others, since other people are often not as open to your debiasing attempts as you yourself are. At the same time, however, debiasing yourself is also more difficult in some ways, since we often struggle to notice our own blind spots, and to identify areas where we are affected by cognitive biases in general, and the confirmation bias in particular.

Overall, to reduce the confirmation bias in yourself, you can use similar techniques to those that you would use to reduce it in others. This includes, for example, maintaining awareness of this bias, focusing on trying to find the right answer rather than proving that you were right, dedicating sufficient time and effort to analyzing information, clearly outlining your reasoning, thinking of reasons why your preferred hypothesis might be wrong, and coming up with alternative hypotheses.

Additional information

Related cognitive biases

There are many cognitive biases that are closely associated with the confirmation bias, either because they involve a similar pattern of reasoning, or because they occur, at least partly, due to an underlying confirmation bias.

For example, there is the backfire effect, which is a cognitive bias that causes people who encounter evidence that challenges their beliefs to reject that evidence, and to strengthen their support of their original stance. This bias can, for instance, cause people to increase their support for a political candidate after they encounter negative information about that candidate, or to strengthen their belief in a scientific misconception after they encounter evidence that highlights the issues with that misconception. The backfire effect is closely associated with the confirmation bias, since it involves the rejection of challenging evidence, with the goal of confirming one’s original beliefs.

Another example of a cognitive bias that is closely related to the confirmation bias is the halo effect, which is a cognitive bias that causes people’s impression of someone or something in one domain to influence their impression of them in other domains. This bias can, for instance, cause people to assume that if someone is physically attractive, then they must also have an interesting personality, or it can cause people to give higher ratings to an essay if they believe that it was written by an attractive author. The halo effect is closely associated with the confirmation bias, since it can be attributed in some cases to people’s tendency to confirm their initial impression of someone, by forming later impressions of them in a biased manner.

The origin and history of the confirmation bias

The term ‘confirmation bias’ was first used in a 1977 paper titled “Confirmation bias in a simulated research environment: An experimental study of scientific inference”, published by Clifford R. Mynatt, Michael E. Doherty, and Ryan D. Tweney in the Quarterly Journal of Experimental Psychology (Volume 29, Issue 1, pp. 85-95). However, as the authors themselves note, evidence of the confirmation bias can be found earlier in the psychological literature.

Specifically, the following passage is the abstract of the paper that coined the term. It outlines the work presented in the paper, and also notes the existence of prior work on the topic:

“Numerous authors (e.g., Popper, 1959) argue that scientists should try to falsify rather than confirm theories. However, recent empirical work (Wason and Johnson-Laird, 1972) suggests the existence of a confirmation bias, at least on abstract problems. Using a more realistic, computer controlled environment modeled after a real research setting, subjects in this study first formulated hypotheses about the laws governing events occurring in the environment. They then chose between pairs of environments in which they could: (1) make observations which would probably confirm these hypotheses, or (2) test alternative hypotheses. Strong evidence for a confirmation bias involving failure to choose environments allowing tests of alternative hypotheses was found. However, when subjects did obtain explicit falsifying information, they used this information to reject incorrect hypotheses.”

In addition, a number of other past studies are discussed in the paper:

“Examples abound of scientists clinging to pet theories and refusing to seek alternatives in the face of large amounts of contradictory data (see Kuhn, 1970). Objective evidence, however, is scant. Wason (1968a) has conducted several experiments on inferential reasoning in which subjects were given conditional rules of the form ‘If P then Q’, where P was a statement about one side of a stimulus card and Q a statement about the other side. Four stimulus cards, corresponding to P, not-P, Q, and not-Q were provided. The subjects’ task was to indicate those cards—and only those cards—which had to be turned over in order to determine if the rule was true or false. Most subjects chose only P, or P and Q. The only cards which can falsify the rule, however, are P and not-Q. Since the not-Q card is almost never selected, the results indicate a strong tendency to seek confirmatory rather than disconfirmatory evidence. This bias for selecting confirmatory evidence has proved remarkably difficult to eradicate (see Wason and Johnson-Laird, 1972, pp. 171-201). In another set of experiments, Wason (1960, 1968b, 1971) also found evidence of failure to consider alternative hypotheses. Subjects were given the task of recovering an experimenter defined rule for generating numerical sequences. The correct rule was a very general one and, consequently, many incorrect specific rules could generate sequences which were compatible with the correct rule. Most subjects produced a few sequences based upon a single, specific rule, received positive feedback, and announced mistakenly that they had discovered the correct rule. With some notable exceptions, what subjects did not do was to generate and eliminate alternative rules in a systematic fashion. Somewhat similar results have been reported by Miller (1967). Finally, Mitroff (1974), in a large-scale non-experimental study of NASA scientists, reports that a strong confirmation bias existed among many members of this group. He cites numerous examples of these scientists’ verbalizations of their own and other scientists’ obduracy in the face of data as evidence for this conclusion.”
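The card-selection logic described in this passage is compact enough to capture in a few lines of code. Here is a minimal Python sketch (an illustration, not code from the paper) that enumerates the four possible card types and checks which visible faces could conceal a rule-violating (P, not-Q) card:

```python
# Minimal sketch of the card selection task. The rule under test is
# "If P then Q": each card has a P-side and a Q-side, and the rule is
# violated only by a card reading P on one side and not-Q on the other.

# All four possible cards, as (p_side, q_side) pairs:
cards = [(p, q) for p in ("P", "not-P") for q in ("Q", "not-Q")]

def violates_rule(p_side, q_side):
    # "If P then Q" fails exactly when P holds but Q does not.
    return p_side == "P" and q_side == "not-Q"

# For each visible face, ask: could the hidden side complete a violating card?
for face in ("P", "not-P", "Q", "not-Q"):
    candidates = [card for card in cards if face in card]
    worth_turning = any(violates_rule(*card) for card in candidates)
    print(f"turn over {face!r}? {'yes' if worth_turning else 'no'}")

# Prints "yes" only for 'P' and 'not-Q' -- and not-Q is exactly the card
# that most subjects fail to select.
```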

Summary and conclusions

  • The confirmation bias is a cognitive bias that causes people to search for, favor, interpret, and recall information in a way that confirms their preexisting beliefs.
  • The confirmation bias affects people in every area of life; for example, it can cause people to disregard negative information about a political candidate that they support, or to only pay attention to news articles that support what they already think.
  • People experience the confirmation bias due to various reasons, including challenge avoidance (the desire to avoid finding out that they’re wrong), reinforcement seeking (the desire to find out that they’re right), and flawed testing of hypotheses (e.g., fixating on a single explanation from the start).
  • To reduce the confirmation bias in yourself and in others, you can use various techniques that revolve around trying to counteract the cognitive mechanisms that promote the confirmation bias in the first place.
  • Relevant debiasing techniques you can use include maintaining awareness of this bias, focusing on trying to find the right answer rather than being proven right, dedicating sufficient time and effort to analyzing relevant information, clearly outlining the reasoning process, thinking of reasons why a preferred hypothesis might be wrong, and coming up with alternative hypotheses and reasons why those hypotheses might be right.

Other articles you may find interesting:

  • The Backfire Effect: Why Facts Don't Always Change Minds
  • Cherry Picking: When People Ignore Evidence that They Dislike
  • Belief Bias: When People Rely on Beliefs Rather Than Logic

9.8: Confirmation Bias

Mehgan Andrade and Neil Walker, College of the Canyons

Confirmation bias is a person’s tendency to seek, interpret, and use evidence in a way that conforms to their existing beliefs. This can lead a person to make certain mistakes, such as poor judgments that limit their ability to learn, changes in belief made to justify past actions, and hostile behavior toward people who disagree with them. Confirmation bias can lead a person to perpetuate stereotypes, or cause a doctor to inaccurately diagnose a condition.

What is noteworthy about confirmation bias is that it supports the argumentative theory of reason. Although confirmation bias is almost universally deplored as a regrettable failing of reason in others, the argumentative theory explains this bias as adaptive behavior: it aids in forming persuasive arguments by preventing us from being distracted by useless evidence and unhelpful stories.

Interestingly, Charles Darwin made a practice of recording evidence against his theory in a special notebook, because he found that this contradictory evidence was particularly difficult to remember.


Thinking and Intelligence

Pitfalls to Problem Solving

Learning Objectives

  • Explain some common roadblocks to effective problem solving

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set is where you persist in approaching a problem in a way that has worked in the past but is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.

Link to Learning

Check out this Apollo 13 scene in which a group of NASA engineers is given the task of overcoming functional fixedness.

Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be challenging your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point. (Split four ways, the $2,000 house comes to $500 per person, only $100 more each than the $400 implied by the original $1,600 budget.)

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. This bias illustrates how strongly first impressions shape us: we tend to look for information that confirms our initial judgments of others.

You can view the transcript for “Confirmation Bias: Your Brain is So Judgmental” here (opens in new window).

Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. To use a common example, would you guess there are more murders or more suicides in America each year? When asked, most people would guess there are more murders. In truth, there are twice as many suicides as there are murders each year. However, murders seem more common because we hear a lot more about murders on an average day. Unless someone we know or someone famous takes their own life, it does not make the news. Murders, on the other hand, we see in the news every day. This leads to the erroneous assumption that the easier it is to think of instances of something, the more often that thing occurs.

Watch the following video for an example of the availability heuristic.

You can view the transcript for “Availability Heuristic: Are Planes More Dangerous Than Cars?” here (opens in new window).

Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in Table 2 below.

Learn more about heuristics and common biases through the article, “8 Common Thinking Mistakes Our Brains Make Every Day and How to Prevent Them” by Belle Beth Cooper.

You can also watch this clever music video explaining these and other cognitive biases.

Think It Over

Which type of bias do you recognize in your own decision-making processes? How has this bias affected how you’ve made decisions in the past, and how can you use your awareness of it to improve your decision-making skills in the future?

CC licensed content, Original

  • Modification, adaptation, and original content. Provided by: Lumen Learning. License: CC BY: Attribution

CC licensed content, Shared previously

  • Problem Solving. Authored by: OpenStax College. Located at: https://openstax.org/books/psychology-2e/pages/7-3-problem-solving. License: Public Domain: No Known Copyright. License Terms: Download for free at https://openstax.org/books/psychology-2e/pages/1-introduction
  • More information on heuristics. Authored by: Dr. Scott Roberts, Dr. Ryan Curtis, Samantha Levy, and Dr. Dylan Selterman. Provided by: University of Maryland. Located at: http://openpsyc.blogspot.com/2014/07/heuristics.html. Project: OpenPSYC. License: CC BY-NC-SA: Attribution-NonCommercial-ShareAlike

Table 2. Common roadblocks and biases in problem solving

Mental set: continually using an old solution to a problem without results

Functional fixedness: inability to see an object as useful for any other use other than the one for which it was intended

Anchoring bias: faulty heuristic in which you fixate on a single aspect of a problem to find a solution

Confirmation bias: seeking out information that supports our stereotypes while ignoring information that is inconsistent with our stereotypes

Hindsight bias: belief that the event just experienced was predictable, even though it really wasn’t

Representative bias: faulty heuristic in which you stereotype someone or something without a valid basis for your judgment

Availability heuristic: faulty heuristic in which you make a decision based on information readily available to you

General Psychology Copyright © by OpenStax and Lumen Learning is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.


5.8 Biases and Errors in Thinking

5 min read • December 22, 2022

Sadiyya Holsey, Dalia Savy, and Haseung Jun

Errors in Problem Solving

Because of our mental concepts and other processes, we may be biased or think of situations without an open mind. Let's discuss what those other processes are.

Fixation is thinking from only one point of view; it is the inability to approach a situation from different perspectives 👀 The term is often used interchangeably with your mental concepts.

Functional Fixedness 

Functional fixedness is the tendency to only think of the familiar functions of an object.

An example of functional fixedness would be the candle problem. Individuals were given a box with thumbtacks, matches 🔥, and a candle 🕯️. Then they were asked to put the candle on the wall in a way that the candle wax would not drip while it was lit.

Most of the subjects were unable to solve the problem. Some tried to pin the candle to the wall with a thumbtack. The successful method was to attach the box to the wall using the thumbtacks, then put the candle in the box and light it.

Because of functional fixedness, individuals were unsuccessful: they couldn’t see how a box 📦 could be more than just a container for something.

The following two heuristics can lead us to make poor decisions and snap judgments, which degrade our thinking.

Availability Heuristic

The availability heuristic is the tendency to judge things by the examples that come to mind most easily. When someone asks you "What is the first thing that comes to mind when you think of . . .," you are using the availability heuristic.

Rather than thinking a topic through, you judge it based on the first thing that comes to mind (the most readily available concept in your mind).

This makes us fear the wrong things. Many parents may not let their children walk to school 🏫 because the only thing they can think of is that one kid who went missing ⚠️. That is the very first thing that comes to their mind, and because of it, they fear their children suffering the same fate.

In other words, we fear what is most readily available in our memory.


Representativeness Heuristic

The representativeness heuristic is when you judge something based on how well it matches your prototype. This leads us to ignore other information and is, honestly, the root of stereotypes.

For example, if someone was asked to decide who most likely went to an Ivy League school (when looking at a truck driver 🚚 and a professor 👩‍🏫👨‍🏫), most people would say the professor. This doesn’t mean that the professor actually went to an Ivy League school; this is just us being stereotypical because of our prototype for a person who goes to an Ivy.

There are so many different types of biases, and we experience each and every one of them in our everyday lives.

Confirmation Bias 

Confirmation bias is the tendency of individuals to search for and support information that aligns with their opinions and to ignore information that doesn’t. This eventually leads us to become more polarized ⬅️➡️ as individuals, and it is another way of experiencing fixation.

A key example is how many Republicans 🔴 watch Fox News, a channel that confirms their political beliefs. People really dislike it when others have differing opinions, and they continue to find information that backs up their own beliefs.

Belief Perseverance and Belief Bias

Belief perseverance is the tendency to hold onto a belief even if it has lost its credibility. This is different from belief bias , which is the tendency for our preexisting beliefs to distort logical thinking, making logical conclusions look illogical.

Halo Effect 

The halo effect is when positive impressions of people lead to positive views about their character and personality traits. For example, if you see someone as attractive you may think of them as having better personality traits and character even though this isn't necessarily true. 

Self-Serving Bias 

Self-serving bias is when a person attributes positive outcomes to their own doing and negative outcomes to external factors.

For example, if you do well on a test 💯 you may think it makes sense, because you did a good job of studying to prepare for the exam. But if you fail the test, you may put the blame on the teacher for not teaching all the material or for making the test too hard.

Attentional Bias 

Attentional bias is when people’s perceptions are influenced by recurring thoughts.

For example, if marine biology has been on your mind a lot lately, your conversations may include references to marine biology. You would also be more likely to notice information that relates to your thoughts (marine biology).

Actor-observer Bias

Actor-observer bias is when a person might attribute their own actions to external factors and the actions of others to internal factors.

For example, if you see someone else litter, you might think about how people are careless. But if you litter, you might say it was because there was no trash can 🗑️ within sight.

Anchoring Bias 

Anchoring bias is when an individual relies heavily on the first piece of information given when making a decision. The first piece of information acts as an anchor against which all subsequent information is compared.

Hindsight Bias

Hindsight bias is when you think you knew something all along after the outcome has occurred. People overestimate their ability to have predicted a certain outcome even if it couldn't possibly have been predicted. People often say "I knew that."


Framing is the way we present an issue, and it impacts decisions and judgments; it can be a very powerful persuasion tool.

For example, a doctor could say one of two things about a surgery:

10% of people die 😲

90% of people survive 😌

Both phrasings describe the same outcome, but "10% of people die" sounds much scarier than "90% of people survive." Framing is a very important tool!




7. Thinking and Intelligence

Problem Solving

Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving

People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

PROBLEM-SOLVING STRATEGIES

When you are presented with a problem—whether it is a complex mathematical problem or a broken printer—how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem solving strategies can be applied, hopefully resulting in a solution.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them ( [link] ). For example, a well-known strategy is trial and error . The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?

A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment
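To make the distinction between an algorithm and a heuristic concrete, here is a minimal Python sketch (the task and prices are hypothetical, not from the text): the algorithm examines every option and is guaranteed to find the answer, while the heuristic applies a rule of thumb that is faster but only promises a "good enough" result.

```python
# Hypothetical example: find a cheap price in a list of prices.

prices = [4.99, 3.25, 7.10, 2.80, 5.45]

def cheapest_algorithm(prices):
    # Algorithm: examine every price, step by step -- always correct.
    best = prices[0]
    for price in prices[1:]:
        if price < best:
            best = price
    return best

def cheap_enough_heuristic(prices, cutoff=3.50):
    # Heuristic (rule of thumb): take the first price under the cutoff.
    # Fast, but not guaranteed to find the true minimum.
    for price in prices:
        if price < cutoff:
            return price
    return min(prices)  # fall back to checking everything

print(cheapest_algorithm(prices))      # 2.8  (guaranteed best)
print(cheap_enough_heuristic(prices))  # 3.25 (good enough, found sooner)
```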

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
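That departure-time calculation is easy to sketch in code (the one-hour traffic buffer below is an assumption for illustration; the text only says to account for traffic):

```python
# Working backwards: start from the required end state and subtract each step.
from datetime import datetime, timedelta

arrive_by = datetime(2024, 6, 1, 15, 30)      # 3:30 PM service (date is arbitrary)
drive_time = timedelta(hours=2, minutes=30)   # D.C. to Philadelphia, no traffic
traffic_buffer = timedelta(hours=1)           # assumed padding for I-95 backups

leave_by = arrive_by - drive_time - traffic_buffer
print(leave_by.strftime("%I:%M %p"))          # 12:00 PM -- leave by noon
```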

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below ( [link] ) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.

The starting grid (blanks shown as dots):

3 . . 2
. 4 1 .
. 3 2 .
4 . . 1

How long did it take you to solve this sudoku puzzle? (You can see the answer at the end of this section.)
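This puzzle also makes the earlier algorithm concept concrete: a short backtracking solver (a sketch added here for illustration, not part of the original text) can solve the grid mechanically. Because each digit 1 through 4 must appear exactly once in every row, column, and bolded box, the sum-to-10 rule is satisfied automatically.

```python
# Backtracking solver for the 4x4 grid above; 0 marks an empty cell.
grid = [[3, 0, 0, 2],
        [0, 4, 1, 0],
        [0, 3, 2, 0],
        [4, 0, 0, 1]]

def ok(g, r, c, d):
    # Digit d may go at (r, c) only if its row, column, and 2x2 box lack d.
    if d in g[r] or any(g[i][c] == d for i in range(4)):
        return False
    br, bc = 2 * (r // 2), 2 * (c // 2)  # top-left corner of the 2x2 box
    return all(g[br + i][bc + j] != d for i in range(2) for j in range(2))

def solve(g):
    for r in range(4):
        for c in range(4):
            if g[r][c] == 0:
                for d in (1, 2, 3, 4):
                    if ok(g, r, c, d):
                        g[r][c] = d
                        if solve(g):
                            return True
                        g[r][c] = 0  # undo and try the next digit
                return False         # dead end: backtrack
    return True                      # no empty cells remain

solve(grid)
for row in grid:
    print(*row)  # matches the answer given at the end of this section
```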

Here is another popular type of puzzle ( [link] ) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

(Figure: nine dots arranged in three rows of three, evenly spaced, inside a square outline.)

Did you figure it out? (The answer is at the end of this section.) Once you understand how to crack this puzzle, you won’t forget.

Take a look at the “Puzzling Scales” logic puzzle below ( [link] ). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

Sam Loyd’s “Puzzling Scales”: the first scale balances three blocks and a top against twelve marbles; the second balances the top alone against one block and eight marbles. Since the scales balance, how many marbles will it require to balance the top?


Were you able to determine how many marbles are needed to balance the scales in [link] ? You need nine. Were you able to solve the problems in [link] and [link] ? Here are the answers ( [link] ).
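For the scales puzzle, the algebra runs as follows: write b, t, and m for the weights of a block, the top, and a marble. The first scale gives 3b + t = 12m, and the second gives t = b + 8m. Substituting the second equation into the first yields 4b + 8m = 12m, so b = m, and therefore t = b + 8m = 9m: nine marbles.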

Sudoku solution (the starting digits are shown in blue in the original, the filled-in digits in red):

3 1 4 2
2 4 1 3
1 3 2 4
4 2 3 1

Nine-dot solution: the four straight lines must extend beyond the borders of the square. From the top-left dot, draw right through the top row and past the border; then diagonally down-left through the middle-right and bottom-center dots, ending below the bottom-left dot; then straight up through the left column; then diagonally down-right through the center dot, ending at the bottom-right dot.

Many different strategies exist for solving problems. Typical strategies include trial and error, applying algorithms, and using heuristics. To solve a large, complicated problem, it often helps to break the problem into smaller steps that can be accomplished individually, leading to an overall solution. Roadblocks to problem solving include a mental set, functional fixedness, and various biases that can cloud decision making skills.

Self Check Questions

Critical Thinking Questions

1. What is functional fixedness and how can overcoming it help you solve problems?

2. How does an algorithm save you time and energy when solving a problem?


1. Functional fixedness occurs when you cannot see a use for an object other than the use for which it was intended. For example, if you need something to hold up a tarp in the rain, but only have a pitchfork, you must overcome your expectation that a pitchfork can only be used for garden chores before you realize that you could stick it in the ground and drape the tarp on top of it to hold it up.

2. An algorithm is a proven formula for achieving a desired outcome. It saves time because if you follow it exactly, you will solve the problem without having to figure out how to solve the problem. It is a bit like not reinventing the wheel.

  • Psychology. Authored by: OpenStax College. Located at: http://cnx.org/contents/[email protected]:1/Psychology. License: CC BY: Attribution. License Terms: Download for free at http://cnx.org/content/col11629/latest/.

Functional Fixedness as a Cognitive Bias

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Sean is a fact-checker and researcher with experience in sociology, field research, and data analytics.


Functional fixedness is a type of cognitive bias that involves a tendency to see objects as only working in a particular way. For example, you might view a thumbtack as something that can only be used to hold paper to a corkboard. But what other uses might the item have?

In many cases, functional fixedness can prevent people from seeing the full range of uses for an object. It can also impair our ability to think of novel solutions to problems.

How Functional Fixedness Influences Problem-Solving

Imagine that you need to drive a nail into a wall so you can hang a framed photo. Unable to find a hammer, you spend a significant amount of time searching your house to find the missing tool. A friend comes over and suggests using a metal wrench instead to pound the nail into the wall.

Why didn't you think of using the metal wrench? Psychologists suggest that something known as functional fixedness often prevents us from thinking of alternative solutions to problems and different uses for objects.

A Classic Example

Here's one well-known example of functional fixedness at work:

You have two candles, numerous thumbtacks, and a box of matches. Using only these items, try to figure out how to mount the candles to a wall.

How would you accomplish this? Many people might immediately start trying to use the thumbtacks to affix the candles to the wall. Due to functional fixedness, you might think of only one way to directly use the thumbtacks. There is another solution, however. Using the matches, melt the bottom part of each candle and then use the hot wax to stick the candle to the matchbox. Once the candles are attached to the box, use the thumbtacks to stick the box to the wall.

Functional fixedness is just one type of mental obstacle that can make problem-solving more difficult.

Functional fixedness isn't always a bad thing. In many cases, it can act as a mental shortcut allowing you to quickly and efficiently determine a practical use for an object.

For example, imagine that someone has asked you to open a toolbox and find a tool that can be used to loosen a screw. It would take a tremendous amount of time if you had to analyze every item in the box to determine how effective it might be at performing the task. Instead, you are able to quickly grab a screwdriver, the most obvious item for loosening a screw.


25th May 2018

How confirmation bias stops us solving problems

This is the third blog in our Behavioural Government series, which explores how behavioural insights can be used to improve how government itself works.

Confirmation bias is the tendency to seek out, interpret, judge and remember information so that it supports one’s pre-existing views and ideas.

Confirmation bias can make people less likely to engage with information which challenges their views. An example of this is a recent study of 376 million Facebook users, which found that many preferred to get their news from a small number of sources they already agreed with.

Even when people do get exposed to challenging information, confirmation bias can cause them to reject it and, perversely, become even more certain that their own beliefs are correct.

One famous experiment gave students evidence from two scientific studies – one that supported capital punishment, and one that opposed it. The students denigrated whichever study went against their pre-existing opinion, and left the lab embracing their original position even more passionately.

The mental process which helps explain this behaviour is called motivated reasoning. What is worrying is that motivated reasoning may actually reduce our ability to understand and interpret evidence, and so make us less likely to be swayed by reasoned argument.

This is illustrated by a recent Danish study which showed elected politicians hypothetical satisfaction statistics for two different schools, then asked them to identify the best-performing one. Around 75% answered correctly when the options were labelled innocuously (e.g. “School A” and “School B”). However, these results changed dramatically when the options were framed in terms of public vs private services (e.g. “Private School” and “Public School”), a contentious issue in Danish politics.

Figure 1 shows that when the correct answer was in line with their pre-existing beliefs about public services (i.e. the politician strongly believed in the value of public services and the correct answer was that the public school was better), 92% of politicians chose correctly. But only 56% got it right when the answer was at odds with their beliefs (i.e. the politician strongly believed in the value of public services and the correct answer was that the private school was better).


Figure 1. Relationship between prior attitudes and correct interpretations of statistical data among 127 Danish politicians.


In our view, confirmation bias is one of the most pervasive and problematic cognitive biases that affects policy making. For that reason, it is also one of the hardest to tackle. However, we think that there are realistic improvements to be made.

Sign up to our mailing list to be among the first to hear about these when we release our full Behavioural Government report.


Dr Michael Hallsworth

Managing Director, BIT Americas


Dr Mark Egan

Principal Research Advisor

Related content

  • 15th Aug 2018

Central banking: when communication is the policy

  • 2nd May 2018

Behavioural Government: A major new initiative from BIT


  • 3rd Aug 2016

How can government make better use of data science? Insights from the first Data Science & Government Conference

What is decision making?


Decisions, decisions. When was the last time you struggled with a choice? Maybe it was this morning, when you decided to hit the snooze button—again. Perhaps it was at a restaurant, with a miles-long menu and the server standing over you. Or maybe it was when you left your closet in a shambles after trying on seven different outfits before a big presentation. Often, making a decision—even a seemingly simple one—can be difficult. And people will go to great lengths—and pay serious sums of money—to avoid having to make a choice. The expensive tasting menu at the restaurant, for example. Or limiting your closet choices to black turtlenecks, à la Steve Jobs.


If you’ve ever wrestled with a decision at work, you’re definitely not alone. According to McKinsey research, executives spend a significant portion of their time—nearly 40 percent, on average—making decisions. Worse, they believe most of that time is poorly used. People struggle with decisions so much that we actually get exhausted from having to decide too much, a phenomenon called decision fatigue.

But decision fatigue isn’t the only cost of ineffective decision making. According to a McKinsey survey of more than 1,200 global business leaders, inefficient decision making costs a typical Fortune 500 company 530,000 days of managers’ time each year, equivalent to about $250 million in annual wages. That’s a lot of turtlenecks.

How can business leaders ease the burden of decision making and put this time and money to better use? Read on to learn the ins and outs of smart decision making—and how to put it to work.


How can organizations untangle ineffective decision-making processes?

McKinsey research has shown that agile is the ultimate solution for many organizations looking to streamline their decision making. Agile organizations are more likely to put decision making in the right hands, are faster at reacting to (or anticipating) shifts in the business environment, and often attract top talent who prefer working at companies with greater empowerment and fewer layers of management.

For organizations looking to become more agile, it’s possible to quickly boost decision-making efficiency by categorizing the type of decision to be made and adjusting the approach accordingly. In the next section, we review three types of decision making and how to optimize the process for each.

What are three keys to faster, better decisions?

Business leaders today have access to more sophisticated data than ever before. But it hasn’t necessarily made decision making any easier. For one thing, organizational dynamics—such as unclear roles, overreliance on consensus, and death by committee—can get in the way of straightforward decision making. And more data often means more decisions to be taken, which can become too much for one person, team, or department. This can make it more difficult for leaders to cleanly delegate, which in turn can lead to a decline in productivity.

Leaders are growing increasingly frustrated with broken decision-making processes, slow deliberations, and uneven decision-making outcomes. Fewer than half of the 1,200 respondents to a McKinsey survey report that decisions are timely, and 61 percent say that at least half the time they spend making decisions is ineffective.

What’s the solution? According to McKinsey research, effective solutions center around categorizing decision types and organizing different processes to support each type. Further, each decision category should be assigned its own practice—stimulating debate, for example, or empowering employees—to yield improvements in effectiveness.

Here are the three decision categories that matter most to senior leaders, and the standout practice that makes the biggest difference for each type of decision.

  • Big-bet decisions are infrequent but high risk, such as acquisitions. These decisions carry the potential to shape the future of the company, and as a result are generally made by top leaders and the board. Spurring productive debate by assigning someone to argue the case for and against a potential decision can improve big-bet decision making.
  • Cross-cutting decisions, such as pricing, can be frequent and high risk. These are usually made by business unit heads, in cross-functional forums as part of a collaborative process. These types of decisions can be improved by doubling down on process refinement. The ideal process should be one that helps clarify objectives, measures, and targets.
  • Delegated decisions are frequent but low risk and are handled by an individual or working team with some input from others. Delegated decision making can be improved by ensuring that the responsibility for the decision is firmly in the hands of those closest to the work. This approach also enhances engagement and accountability.
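As a rough sketch of this triage (the category names and recommended practices come from the research summarized above; the boolean encoding and the fallback case are illustrative assumptions):

```python
# Hypothetical sketch of the decision triage described above.

def classify_decision(frequent, high_risk):
    if high_risk and not frequent:
        return "big bet: assign someone to argue the case for and against"
    if high_risk and frequent:
        return "cross-cutting: double down on refining the collaborative process"
    if frequent:
        return "delegated: put responsibility in the hands of those closest to the work"
    return "infrequent, low-risk: not covered by the three categories above"

print(classify_decision(frequent=False, high_risk=True))  # e.g., an acquisition
print(classify_decision(frequent=True, high_risk=False))  # e.g., a routine approval
```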

In addition, business leaders can take the following four actions to help sustain rapid decision making:

  • Focus on the game-changing decisions, ones that will help an organization create value and serve its purpose.
  • Convene only necessary meetings, and eliminate lengthy reports. Turn unnecessary meetings into emails, and watch productivity bloom. For necessary meetings, provide short, well-prepared prereads to aid in decision making.
  • Clarify the roles of decision makers and other voices. Who has a vote, and who has a voice?
  • Push decision-making authority to the front line—and tolerate mistakes.


How can business leaders effectively delegate decision making?

Business is more complex and dynamic than ever, meaning business leaders are faced with needing to make more decisions in less time. Decision making takes up an inordinate amount of management’s time—up to 70 percent for some executives—which leads to inefficiencies and opportunity costs.

As discussed above, organizations should treat different types of decisions differently. Decisions should be classified according to their frequency, risk, and importance. Delegated decisions are the most mysterious for many organizations: they are the most frequent, and yet the least understood. Only about a quarter of survey respondents report that their organizations make high-quality and speedy delegated decisions. And yet delegated decisions, because they happen so often, can have a big impact on organizational culture.

The key to better delegated decisions is to empower employees by giving them the authority and confidence to act. That means not simply telling employees which decisions they can or can’t make; it means giving employees the tools they need to make high-quality decisions and the right level of guidance as they do so.

Here’s how to support delegation and employee empowerment:

  • Ensure that your organization has a well-defined, universally understood strategy. When the strategic intent of an organization is clear, empowerment is much easier because it allows teams to pull in the same direction.
  • Clearly define roles and responsibilities. At the foundation of all empowerment efforts is a clear understanding of who is responsible for what, including who has input and who doesn’t.
  • Invest in capability building (and coaching) up front. To help managers spend meaningful coaching time, organizations should also invest in managers’ leadership skills.
  • Build an empowerment-oriented culture. Leaders should role-model mindsets that promote empowerment, and managers should build the coaching skills they want to see. Managers and employees, in particular, will need to get comfortable with failure as a necessary step to success.
  • Decide when to get involved. Managers should spend effort up front to decide what is worth their focused attention. They should know when it’s appropriate to provide close guidance and when not to.

How can you guard against bias in decision making?

Cognitive bias is real. We all fall prey, no matter how we try to guard ourselves against it. And cognitive and organizational bias undermines good decision making, whether you’re choosing what to have for lunch or whether to put in a bid to acquire another company.

Here are some of the most common cognitive biases and strategies for how to avoid them:

  • Confirmation bias. Often, when we already believe something, our minds seek out information to support that belief—whether or not it is actually true. Confirmation bias involves overweighting evidence that supports our belief, underweighting evidence against our belief, or even failing to search impartially for evidence in the first place. Confirmation bias is one of the most common traps organizational decision makers fall into. One famous—and painful—example of confirmation bias is when Blockbuster passed up the opportunity to buy a fledgling Netflix for $50 million in 2000. (Actually, that’s putting it politely. Netflix executives remember being “laughed out” of Blockbuster’s offices.) Fresh off the dot-com bubble burst of 2000, Blockbuster executives likely concluded that Netflix had approached them out of desperation—not that Netflix actually had a baby unicorn on its hands.
  • Herd mentality. First observed by Charles Mackay in his 1841 study of crowd psychology, herd mentality happens when information that’s available to the group is determined to be more useful than privately held knowledge. Individuals buy into this bias because there’s safety in the herd. But ignoring competing viewpoints might ultimately be costly. To counter this, try a teardown exercise, wherein two teams use scenarios, advanced analytics, and role-playing to identify how a herd might react to a decision, and to ensure they can refute public perceptions.
  • Sunk-cost fallacy. Executives frequently hold on to underperforming business units or projects because of emotional or legacy attachment. Equally, business leaders hate shutting projects down. This, researchers say, is due to the ingrained belief that if everyone works hard enough, anything can be turned into gold. McKinsey research indicates two techniques for understanding when to hold on and when to let go. First, change the burden of proof from why an asset should be cut to why it should be retained. Next, categorize business investments according to whether they should be grown, maintained, or disposed of—and follow clearly differentiated investment rules for each group.
  • Ignoring unpleasant information. Researchers call this the “ostrich effect”—when people figuratively bury their heads in the sand, ignoring information that will make their lives more difficult. One study, for example, found that investors were more likely to check the value of their portfolios when the markets overall were rising, and less likely to do so when the markets were flat or falling. One way to help get around this is to engage in a readout process, where individuals or teams summarize discussions as they happen. This increases the likelihood that everyone leaves a meeting with the same understanding of what was said.
  • Halo effect. Important personal and professional choices are frequently affected by people’s tendency to make specific judgments based on general impressions. Humans are tempted to use simple mental frames to understand complicated ideas, which means we frequently draw conclusions faster than we should. The halo effect is particularly common in hiring decisions. To avoid this bias, structured interviews can help mitigate the essentializing tendency. When candidates are measured against indicators, intuition is less likely to play a role.

For more common biases and how to beat them, check out McKinsey’s Bias Busters Collection.


Articles referenced include:

  • “Bias busters: When the crowd isn’t necessarily wise,” McKinsey Quarterly, May 23, 2022, Eileen Kelly Rinaudo, Tim Koller, and Derek Schatz
  • “Boards and decision making,” April 8, 2021, Aaron De Smet, Frithjof Lund, Suzanne Nimocks, and Leigh Weiss
  • “To unlock better decision making, plan better meetings,” November 9, 2020, Aaron De Smet, Simon London, and Leigh Weiss
  • “Reimagine decision making to improve speed and quality,” September 14, 2020, Julie Hughes, J. R. Maxwell, and Leigh Weiss
  • “For smarter decisions, empower your employees,” September 9, 2020, Aaron De Smet, Caitlin Hewes, and Leigh Weiss
  • “Bias busters: Lifting your head from the sand,” McKinsey Quarterly, August 18, 2020, Eileen Kelly Rinaudo
  • “Decision making in uncertain times,” March 24, 2020, Andrea Alexander, Aaron De Smet, and Leigh Weiss
  • “Bias busters: Avoiding snap judgments,” McKinsey Quarterly, November 6, 2019, Tim Koller, Dan Lovallo, and Phil Rosenzweig
  • “Three keys to faster, better decisions,” McKinsey Quarterly, May 1, 2019, Aaron De Smet, Gregor Jost, and Leigh Weiss
  • “Decision making in the age of urgency,” April 30, 2019, Iskandar Aminov, Aaron De Smet, Gregor Jost, and David Mendelsohn
  • “Bias busters: Pruning projects proactively,” McKinsey Quarterly, February 6, 2019, Tim Koller, Dan Lovallo, and Zane Williams
  • “Decision making in your organization: Cutting through the clutter,” McKinsey Quarterly, January 16, 2018, Aaron De Smet, Simon London, and Leigh Weiss
  • “Untangling your organization’s decision making,” McKinsey Quarterly, June 21, 2017, Aaron De Smet, Gerald Lackey, and Leigh Weiss
  • “Are you ready to decide?,” McKinsey Quarterly, April 1, 2015, Philip Meissner, Olivier Sibony, and Torsten Wulf


7.3 Problem Solving

Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving and decision making

People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

Problem-Solving Strategies

When you are presented with a problem, whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them (Table 7.2). For example, a well-known strategy is trial and error. The old adage “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
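As a concrete illustration, the recipe-doubling task from the start of this section can be written as a tiny algorithm in Python: a fixed sequence of steps that yields the same output for the same input, every time. The ingredient names and amounts below are illustrative.

```python
def scale_recipe(ingredients, factor):
    """An algorithm: the same steps, applied the same way,
    always produce the same scaled recipe."""
    return {name: amount * factor for name, amount in ingredients.items()}

pizza_dough = {"flour (cups)": 2.0, "water (cups)": 0.75, "yeast (tsp)": 1.0}
print(scale_recipe(pizza_dough, 2))
# {'flour (cups)': 4.0, 'water (cups)': 1.5, 'yeast (tsp)': 2.0}
```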

A heuristic is another type of problem-solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. (A short sketch contrasting a heuristic with an exact algorithm appears after the list below.) Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment
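To make the contrast with an algorithm concrete, here is a small, hypothetical sketch: an exact cent-by-cent total next to a “round to the nearest dollar” rule of thumb, which trades a little accuracy for speed, as heuristics typically do.

```python
prices = [4.29, 12.99, 6.49, 3.75]

# Algorithm: exact and exhaustive; the answer is always the same.
exact_total = sum(prices)

# Heuristic ("rule of thumb"): round each price to the nearest dollar.
# Faster to do in your head, but only approximately right.
rough_total = sum(round(p) for p in prices)

print(f"{exact_total:.2f}")  # 27.52
print(rough_total)           # 27
```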

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
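Because working backwards starts from the goal state, the departure time falls out of a short calculation; without any traffic buffer, the answer to the question above is 1:00 PM. A minimal sketch in Python (the date and the 30-minute cushion for I-95 traffic are illustrative assumptions):

```python
from datetime import datetime, timedelta

arrival = datetime(2024, 6, 1, 15, 30)       # goal state: arrive 3:30 PM
drive_time = timedelta(hours=2, minutes=30)  # D.C. to Philadelphia, no traffic
traffic_buffer = timedelta(minutes=30)       # illustrative cushion for I-95

# Work backwards from the goal to the latest safe departure time.
departure = arrival - drive_time - traffic_buffer
print(departure.strftime("%I:%M %p"))  # 12:30 PM
```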

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Everyday Connection

Solving Puzzles

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below (Figure 7.7) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.
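Since the printed grid (Figure 7.7) is not reproduced here, the starting puzzle below is hypothetical, and the “bolded boxes” are assumed to be the four 2×2 quadrants. Note that the sum-to-10 rule takes care of itself once each digit 1 through 4 appears exactly once, because 1 + 2 + 3 + 4 = 10. A minimal backtracking solver in Python makes the try-a-digit, undo-on-failure strategy explicit:

```python
def solve(grid):
    """Fill a 4x4 sudoku by backtracking: place a digit, recurse,
    and undo the placement if it leads to a dead end (0 = blank)."""
    for r in range(4):
        for c in range(4):
            if grid[r][c] == 0:
                for digit in (1, 2, 3, 4):
                    if valid(grid, r, c, digit):
                        grid[r][c] = digit
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next digit
                return False  # no digit fits here: backtrack
    return True  # no blanks left: solved

def valid(grid, r, c, digit):
    """A digit is valid if it is not already in the row, column,
    or 2x2 box containing cell (r, c)."""
    if digit in grid[r]:
        return False
    if digit in (grid[i][c] for i in range(4)):
        return False
    br, bc = 2 * (r // 2), 2 * (c // 2)  # top-left corner of the box
    return all(grid[br + i][bc + j] != digit for i in range(2) for j in range(2))

puzzle = [[1, 0, 0, 0],
          [0, 0, 3, 0],
          [0, 4, 0, 0],
          [0, 0, 0, 2]]
solve(puzzle)
for row in puzzle:
    print(row)  # [1, 3, 2, 4] / [4, 2, 3, 1] / [2, 4, 1, 3] / [3, 1, 4, 2]
```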

Here is another popular type of puzzle (Figure 7.8) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

Take a look at the “Puzzling Scales” logic puzzle below (Figure 7.9). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

Pitfalls to Problem Solving

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but they just need to go to another doorway, instead of trying to get out through the locked doorway. A mental set is the tendency to persist in approaching a problem in a way that has worked in the past but is clearly not working now.

Functional fixedness is a type of mental set in which you cannot perceive an object being used for something other than what it was designed for. Duncker (1945) conducted foundational research on functional fixedness. He created an experiment in which participants were given a candle, a book of matches, and a box of thumbtacks. They were instructed to use those items to attach the candle to the wall so that it did not drip wax onto the table below. Participants had to overcome functional fixedness to solve the problem (Figure 7.10). During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved their lives.

Link to Learning

Check out this Apollo 13 scene about NASA engineers overcoming functional fixedness to learn more.

Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know whether exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? After all, splitting the extra $400 four ways comes to only $100 more per person. Why would the realtor show you the run-down houses and the nice house? The realtor may be challenging your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in Table 7.3.

Watch this teacher-made music video about cognitive biases to learn more.

Were you able to determine how many marbles are needed to balance the scales in Figure 7.9? You need nine. Were you able to solve the problems in Figure 7.7 and Figure 7.8? Here are the answers (Figure 7.11).


Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/psychology-2e/pages/1-introduction
  • Authors: Rose M. Spielman, William J. Jenkins, Marilyn D. Lovett
  • Publisher/website: OpenStax
  • Book title: Psychology 2e
  • Publication date: Apr 22, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/psychology-2e/pages/1-introduction
  • Section URL: https://openstax.org/books/psychology-2e/pages/7-3-problem-solving

© Jan 6, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License. The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

John Nosta

The Word is Mightier than the Algorithm

Is language itself the operating system for human-AI collaboration?

Posted June 2, 2024 | Reviewed by Kaja Perina

  • AI and LLMs are revolutionizing language, shifting the skills we value from math to effective communication.
  • Language fosters human connection and community, with its magic lying in the unspoken spaces between words.
  • Embracing verbal skills is crucial for harnessing AI's potential while preserving our humanity in the future.


The Timeless Significance of Language

"In the beginning was the Word, and the Word was with God, and the Word was God." This opening verse from the Gospel of John alludes to the unique power of language and communication. As we enter the Cognitive Age , one often dominated by artificial intelligence (AI) and large language models (LLMs), these ancient words take on a newfound significance.

The Rise of AI and LLMs: Harnessing the Power of Language

The rapid advancements and adoption of AI, particularly with LLMs, are nothing short of astonishing. LLMs have demonstrated a remarkable ability to understand, generate, and manipulate human language with unprecedented fluency and coherence. These models can engage in complex conversations, answer questions, and even write creative fiction or poetry. In essence, they are harnessing the power of the word in ways that were once thought to be exclusively human.

The Shifting Landscape of Valued Skills

As AI continues to evolve, it raises important questions about the future of communication and the skills we value as a society. Traditionally, we have placed a high premium on mathematical ability, often seeing it as a marker of intelligence and a key to success in many fields. However, as AI becomes increasingly adept at solving complex mathematical problems, the landscape of valued skills may be shifting.

In a world where machines can crunch numbers and solve equations with ease, the ability to communicate effectively, think critically, and engage in creative problem-solving may become the distinguishing factors for human success. The power of the word, in all its forms—from persuasive arguments to poetic expression—could rise to the forefront.

When I Grow Up, I Wanna be...

This shift has significant implications for many aspects of life, from education to work. Schools may need to place a greater emphasis on developing strong verbal skills, fostering the ability to articulate ideas, collaborate with others, and adapt to new challenges. Fields that have traditionally relied heavily on math, such as finance and engineering, may begin to prioritize communication skills alongside technical expertise.

However, this is not to suggest that mathematical thinking will become irrelevant. Rather, the power of the word will lie in the ability to interpret and communicate the results of mathematical analyses, to explain complex concepts to diverse audiences, and to use language to bridge the gap between technical expertise and practical application.

The Social and Ethical Power of Words

Moreover, the rise of AI and the power of the word could have critical societal implications. As machines become more adept at handling computational tasks, we may see a rebalancing of the skills and abilities we value. This could lead to a more equitable and inclusive future, one that recognizes and rewards a diverse range of talents.

At the same time, the power of the word in the age of AI raises ethical considerations. As machines become more sophisticated in their ability to generate and manipulate language, we must grapple with questions of authenticity, trust, and accountability. How do we ensure that the words we encounter, whether generated by humans or machines, are truthful and reliable? How do we maintain the integrity of human communication in a world where AI can mimic our language with uncanny precision?

Language as a Tool for Connection and Community

However, the power of language extends far beyond its utilitarian function as a means of communication. From the ancient wisdom of the Upanishads, where the term "upanishad" itself means "sitting down near" in Sanskrit, to the rounds in a hospital, the dining room table, and even the campfire, the act of sitting close and leveraging language has been a powerful tool for fostering connection and community.


In these intimate settings, words become more than mere vessels of information; they transform into the threads that weave the tapestry of human relationships. Language, in this sense, serves as a functional glue, bonding individuals together through shared stories, experiences, and understanding. It is through the power of the spoken word that we create and maintain the social fabric that defines our communities and cultures.

The Magic Between the Words

As the legendary jazz musician Miles Davis once said, "I don't play what's there, I play what's not there." This curious insight extends beyond the realm of music and into the very heart of human communication. The true magic of language lies not just in the words themselves, but in the spaces between them—the unspoken emotions, the subtle nuances, and the profound connections that emerge when we read between the lines. It's in these "cognitive interstitial zones" that empathy, imagination, and creativity flourish, allowing us to transcend the literal and tap into the essence of what makes us human. As AI continues to master the mechanics of language, it is this ineffable "magic between the words" that may prove to be the enduring frontier of human expression and connection.

Shaping Our Future with the Power of the Word

The power of the word is not just about communication, but about connection. It is through language that we express our thoughts, share our experiences, and build relationships with one another. Perhaps we can even call language a functional operating system for humans and their technological partners.

As we navigate the challenges and opportunities of the Cognitive Age, embracing the power of the word and fostering strong verbal skills will be essential to harnessing the potential of AI while preserving the qualities that make us human. For in the end, it is not just the word, but the spirit behind it, that will shape our future.


John Nosta is an innovation theorist and founder of NostaLab.
