Implicit Bias (Unconscious Bias): Definition & Examples

Charlotte Ruhl

Research Assistant & Psychology Graduate

BA (Hons) Psychology, Harvard University

Charlotte Ruhl, a psychology graduate of Harvard College, has over six years of research experience in clinical and social psychology. During her time at Harvard, she worked in the Decision Science Lab, administering studies in behavioral economics and social psychology.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Implicit bias refers to the beliefs and attitudes that affect our understanding, actions, and decisions in an unconscious way.

Take-home Messages

  • Implicit biases are unconscious attitudes and stereotypes that can manifest in the criminal justice system, workplace, school setting, and in the healthcare system.
  • Implicit bias is also known as unconscious bias or implicit social cognition.
  • There are many different examples of implicit biases, spanning categories such as race, gender, and sexuality.
  • These biases often arise from trying to find patterns and navigate the overwhelming stimuli in this complicated world. Culture, media, and upbringing can also contribute to the development of such biases.
  • Removing these biases is a challenge, especially because we often don’t even know they exist, but research reveals potential interventions and provides hope that levels of implicit biases in the United States are decreasing.


The term implicit bias was first coined in 1995 by psychologists Mahzarin Banaji and Anthony Greenwald, who argued that social behavior is largely influenced by unconscious associations and judgments (Greenwald & Banaji, 1995).

So, what is implicit bias?

Specifically, implicit bias refers to attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious way, making them difficult to control.

Since the mid-90s, psychologists have extensively researched implicit biases, revealing that, without even knowing it, we all possess our own implicit biases.

System 1 and System 2 Thinking

Kahneman (2011) distinguishes between two types of thinking: system 1 and system 2.
  • System 1 is the brain’s fast, emotional, unconscious thinking mode. This type of thinking requires little effort, but it is often error-prone. Most everyday activities (like driving, talking, and cleaning) rely heavily on System 1.
  • System 2 is slow, logical, effortful, conscious thought, where reason dominates.

Daniel Kahneman’s two systems of thinking

Implicit Bias vs. Explicit Bias

What is meant by implicit bias?

Implicit bias (unconscious bias) refers to attitudes and beliefs that operate outside our conscious awareness and control. Implicit biases are an example of System 1 thinking, so we are unaware they exist (Greenwald & Krieger, 2006).

An implicit bias may run counter to a person’s conscious beliefs without their realizing it. For example, it is possible to express explicit liking of a certain social group or approval of a certain action while simultaneously being biased against that group or action on an unconscious level.

Therefore, implicit and explicit biases might differ for the same person.

It is important to understand that implicit biases can become explicit biases. This occurs when you become consciously aware of your prejudices and beliefs. They surface in your mind, leading you to choose whether to act on or against them.

What is meant by explicit bias?

Explicit biases are biases we are aware of on a conscious level (for example, feeling threatened by another group and delivering hate speech as a result). They are an example of System 2 thinking.

It is also possible that your implicit and explicit biases differ from those of your neighbor, friend, or family member. Many factors influence how such biases develop.

What Are the Implications of Unconscious Bias?

Implicit biases become evident in many different domains of society. On an interpersonal level, they can manifest in simple daily interactions.

This occurs when certain actions (or microaggressions) make others feel uncomfortable or aware of the specific prejudices you may hold against them.

Implicit Prejudice

Implicit prejudice is the automatic, unconscious attitudes or stereotypes that influence our understanding, actions, and decisions. Unlike explicit prejudice, which is consciously controlled, implicit prejudice can occur even in individuals who consciously reject prejudice and strive for impartiality.

Unconscious racial stereotypes are a major example of implicit prejudice. In other words, having an automatic preference for one race over another without being aware of this bias.

This bias can manifest in small interpersonal interactions and has broader implications in society’s legal system and many other important sectors.

Examples may include holding an implicit stereotype that associates Black individuals as violent. As a result, you may cross the street at night when you see a Black man walking in your direction without even realizing why you are crossing the street.

The action taken here is an example of a microaggression. A microaggression is a subtle, automatic, and often nonverbal exchange that communicates hostile, derogatory, or negative prejudicial slights and insults toward any group (Pierce, 1970). Crossing the street communicates an implicit prejudice, even if you are not aware of it.

Another example of an implicit racial bias is if a Latino student is complimented by a teacher for speaking perfect English, but he is a native English speaker. Here, the teacher assumed that English would not be his first language simply because he is Latino.

Gender Stereotypes

Gender biases are another common form of implicit bias. Gender biases are the ways in which we judge men and women based on traditional feminine and masculine assigned traits.

For example, a greater assignment of fame to male than female names (Banaji & Greenwald, 1995) reveals a subconscious bias that holds men at a higher level than their female counterparts. Whether you voice the opinion that men are more famous than women is independent of this implicit gender bias.

Another common implicit gender bias regards women in STEM (science, technology, engineering, and mathematics).

In school, girls are more likely to be associated with language over math. In contrast, males are more likely to be associated with math over language (Steffens & Jelenec, 2011), revealing clear gender-related implicit biases that can ultimately go so far as to dictate future career paths.

Even if you outwardly say men and women are equally good at math, it is possible you subconsciously associate math more strongly with men without even being aware of this association.

Health Care

Healthcare is another setting where implicit biases are very present. Racial and ethnic minorities and women are subject to less accurate diagnoses, curtailed treatment options, less pain management, and worse clinical outcomes (Chapman, Kaatz, & Carnes, 2013).

Additionally, Black children are often not treated as children or given the same compassion or level of care provided for White children (Johnson et al., 2017).

It becomes evident that implicit biases infiltrate the most common sectors of society, making it all the more important to question how we can remove these biases.

LGBTQ+ Community Bias

Similar to implicit racial and gender biases, individuals may hold implicit biases against members of the LGBTQ+ community. Again, that does not necessarily mean that these opinions are voiced outwardly or even consciously recognized by the beholder, for that matter.

Rather, these biases are unconscious. A simple example could be asking a female friend if she has a boyfriend, which assumes that heterosexuality is the norm or default.

Instead, you could ask your friend if she is seeing someone in this specific situation. Several other forms of implicit biases fall into categories ranging from weight to ethnicity to ability that come into play in our everyday lives.

Legal System

Both law enforcement and the legal system shed light on implicit biases. An example of implicit bias functioning in law enforcement is the shooter bias – the tendency among the police to shoot Black civilians more often than White civilians, even when they are unarmed (Mekawi & Bresin, 2015).

This bias has been repeatedly tested in the laboratory setting, revealing an implicit bias against Black individuals. Black people are also disproportionately arrested and given harsher sentences, and Black juveniles are tried as adults more often than their White peers.

Black boys are also seen as less childlike, less innocent, more culpable, more responsible for their actions, and as being more appropriate targets for police violence (Goff et al., 2014).

Together, these unconscious stereotypes, which are not rooted in truth, form an array of implicit biases that are extremely dangerous and utterly unjust.

Implicit biases are also visible in the workplace. One experiment that tracked the success of White and Black job applicants found that applicants with stereotypically White names received 50% more callbacks than those with stereotypically Black names, regardless of the industry or occupation (Bertrand & Mullainathan, 2004).

This reveals another form of implicit bias: the hiring bias – Anglicized‐named applicants receiving more favorable pre‐interview impressions than other ethnic‐named applicants (Watson, Appiah, & Thornton, 2011).

We’re susceptible to bias because of these tendencies:

We tend to seek out patterns

A key reason we develop such biases is that our brains have a natural tendency to look for patterns and associations to make sense of a very complicated world.

Research shows that even before kindergarten, children already use their group membership (e.g., racial group, gender group, age group) to guide inferences about psychological and behavioral traits.

At such a young age, they have already begun seeking patterns and recognizing what distinguishes them from other groups (Baron, Dunham, Banaji, & Carey, 2014).

And not only do children recognize what sets them apart from other groups, they believe “what is similar to me is good, and what is different from me is bad” (Cameron, Alvarez, Ruble, & Fuligni, 2001).

Children aren’t just noticing how similar or dissimilar they are to others; dissimilar people are actively disliked (Aboud, 1988).

Recognizing what sets you apart from others and then forming negative opinions about those outgroups (a social group with which an individual does not identify) contributes to the development of implicit biases.

We like to take shortcuts

Another explanation is that the development of these biases is a result of the brain’s tendency to try to simplify the world.

Mental shortcuts make it faster and easier for the brain to sort through all of the overwhelming data and stimuli we are met with every second of the day. And we take mental shortcuts all the time. Rules of thumb, educated guesses, and using “common sense” are all forms of mental shortcuts.

Implicit bias is a result of taking one of these cognitive shortcuts inaccurately (Rynders, 2019). As a result, we incorrectly rely on these unconscious stereotypes to provide guidance in a very complex world.

And especially when we are under high levels of stress, we are more likely to rely on these biases than to examine all of the relevant, surrounding information (Wigboldus, Sherman, Franzese, & Knippenberg, 2004).

Social and Cultural influences

Influences from media, culture, and your individual upbringing can also contribute to the rise of implicit associations that people form about the members of social outgroups. Media has become increasingly accessible, and while that has many benefits, it can also lead to implicit biases.

The way TV portrays individuals or the language journal articles use can ingrain specific biases in our minds.

For example, they can lead us to associate Black people with criminals or females as nurses or teachers. The way you are raised can also play a huge role. One research study found that parental racial attitudes can influence children’s implicit prejudice (Sinclair, Dunn, & Lowery, 2005).

And parents are not the only figures who can influence such attitudes. Siblings, the school setting, and the culture in which you grow up can also shape your explicit beliefs and implicit biases.

Implicit Association Test (IAT)

What sets implicit biases apart from other forms is that they are subconscious – we don’t know if we have them.

However, researchers have developed the Implicit Association Test (IAT) tool to help reveal such biases.

The Implicit Association Test (IAT) is a psychological assessment designed to measure an individual’s unconscious biases and associations. The test measures how quickly a person associates concepts or groups (such as race or gender) with positive or negative attributes, revealing biases that may not be consciously acknowledged.

The IAT requires participants to categorize negative and positive words together with either images or words (Greenwald, McGhee, & Schwartz, 1998).

Tests are taken online and must be performed as quickly as possible; the faster you pair the words or faces of a category, the stronger the association you hold about that category.

For example, the Race IAT requires participants to categorize White and Black faces together with negative and positive words. The relative speed with which Black faces are associated with negative words is used as an indication of the level of anti-Black bias.
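The latency comparison at the heart of the IAT can be illustrated with a simplified scoring sketch. This is not the official scoring algorithm (Greenwald and colleagues use a more elaborate D-score procedure with error penalties and trial filtering); the function name and the latencies below are hypothetical, purely for illustration.

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT effect: the difference in mean response latency
    between the two pairing conditions, scaled by the pooled standard
    deviation of all latencies (a Cohen's-d-like measure)."""
    pooled_sd = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Hypothetical latencies (in milliseconds) for one participant:
# responses are faster when the pairing matches the stereotyped association.
compatible = [650, 700, 675, 640, 690]      # e.g., White faces + positive words
incompatible = [820, 870, 790, 850, 845]    # e.g., Black faces + positive words

d = iat_d_score(compatible, incompatible)
print(round(d, 2))  # a positive score indicates faster "compatible" pairings
```

A score near zero would indicate no measurable preference; larger positive scores indicate a stronger implicit association in the stereotyped direction.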


Professor Brian Nosek and colleagues tested more than 700,000 subjects. They found that more than 70% of White subjects more easily associated White faces with positive words and Black faces with negative words, concluding that this was evidence of implicit racial bias (Nosek, Greenwald, & Banaji, 2007).

Outside of lab testing, it is very difficult to know if we do, in fact, possess these biases. The fact that they are so hard to detect is in the very nature of this form of bias, making them very dangerous in various real-world settings.

How to Reduce Implicit Bias

Because of the harmful nature of implicit biases, it is critical to examine how we can begin to remove them.

Practicing mindfulness is one potential way, as it reduces the stress and cognitive load that otherwise leads to relying on such biases.

A 2016 study found that brief mindfulness meditation decreased unconscious bias against Black people and elderly people (Lueke & Gibson, 2016), providing initial insight into the usefulness of this approach and paving the way for future research on this intervention.

Adjust your perspective

Another method is perspective-taking – looking beyond your own point of view so that you can consider how someone else may think or feel about something.

Researcher Belinda Gutierrez implemented a videogame called “Fair Play,” in which players assume the role of a Black graduate student named Jamal Davis.

As Jamal, players experience subtle race bias while completing “quests” to obtain a science degree.

Gutierrez hypothesized that participants randomly assigned to play the game would have greater empathy for Jamal and lower implicit race bias than participants randomized to read a narrative text (not perspective-taking) describing Jamal’s experience (Gutierrez et al., 2014). Her hypothesis was supported, illustrating the benefits of perspective-taking in increasing empathy toward outgroup members.

Specific implicit bias training has been incorporated into different educational and law enforcement settings. For example, one study found that diversity training improved men’s attitudes toward women in STEM (Jackson, Hillard, & Schneider, 2014).

Training programs designed to target and help overcome implicit biases may also be beneficial for police officers (Plant & Peruche, 2005), but there is not enough conclusive evidence to completely support this claim. One pitfall of such training is a potential rebound effect.

Actively trying to inhibit stereotyping can cause the bias to rebound, eventually becoming stronger than if it had never been suppressed (Macrae, Bodenhausen, Milne, & Jetten, 1994). This is very similar to the white bear problem discussed in many psychology curricula.

This concept refers to the psychological process whereby deliberate attempts to suppress certain thoughts make them more likely to surface (Wegner & Schneider, 2003).

Education is crucial. Understanding what implicit biases are, how they arise, and how to recognize them in yourself and others are all incredibly important steps toward overcoming such biases.

Learning about other cultures or outgroups and what language and behaviors may come off as offensive is critical as well. Education is a powerful tool that can extend beyond the classroom through books, media, and conversations.

On the bright side, implicit biases in the United States have been improving.

From 2007 to 2016, implicit biases have changed towards neutrality for sexual orientation, race, and skin-tone attitudes (Charlesworth & Banaji, 2019), demonstrating that it is possible to overcome these biases.

Books for further reading

As mentioned, education is extremely important. Here are a few places to get started in learning more about implicit biases:

  • Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do by Jennifer Eberhardt
  • Blindspot by Anthony Greenwald and Mahzarin Banaji
  • Implicit Racial Bias Across the Law by Justin Levinson and Robert Smith


Is unconscious bias the same as implicit bias?

Yes, unconscious bias is the same as implicit bias. Both terms refer to the biases we carry without awareness or conscious control, which can affect our attitudes and actions toward others.

In what ways can implicit bias impact our interactions with others?

Implicit bias can impact our interactions with others by unconsciously influencing our attitudes, behaviors, and decisions. This can lead to stereotyping, prejudice, and discrimination, even when we consciously believe in equality and fairness.

It can affect various domains of life, including workplace dynamics, healthcare provision, law enforcement, and everyday social interactions.

What are some implicit bias examples?

Some examples of implicit biases include assuming a woman is less competent than a man in a leadership role, associating certain ethnicities with criminal behavior, or believing that older people are not technologically savvy.

Other examples include perceiving individuals with disabilities as less capable or assuming that someone who is overweight is lazy or unmotivated.

Aboud, F. E. (1988). Children and prejudice . B. Blackwell.

Banaji, M. R., & Greenwald, A. G. (1995). Implicit gender stereotyping in judgments of fame. Journal of Personality and Social Psychology , 68 (2), 181.

Baron, A. S., Dunham, Y., Banaji, M., & Carey, S. (2014). Constraints on the acquisition of social category concepts. Journal of Cognition and Development , 15 (2), 238-268.

Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American economic review , 94 (4), 991-1013.

Cameron, J. A., Alvarez, J. M., Ruble, D. N., & Fuligni, A. J. (2001). Children’s lay theories about ingroups and outgroups: Reconceptualizing research on prejudice. Personality and Social Psychology Review , 5 (2), 118-128.

Chapman, E. N., Kaatz, A., & Carnes, M. (2013). Physicians and implicit bias: how doctors may unwittingly perpetuate health care disparities. Journal of general internal medicine , 28 (11), 1504-1510.

Charlesworth, T. E., & Banaji, M. R. (2019). Patterns of implicit and explicit attitudes: I. Long-term change and stability from 2007 to 2016. Psychological science , 30(2), 174-192.

Goff, P. A., Jackson, M. C., Di Leone, B. A. L., Culotta, C. M., & DiTomasso, N. A. (2014). The essence of innocence: consequences of dehumanizing Black children. Journal of Personality and Social Psychology, 106(4), 526.

Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychological review, 102(1), 4.

Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. (1998). Measuring individual differences in implicit cognition: the implicit association test. Journal of personality and social psychology , 74(6), 1464.

Greenwald, A. G., & Krieger, L. H. (2006). Implicit bias: Scientific foundations. California Law Review , 94 (4), 945-967.

Gutierrez, B., Kaatz, A., Chu, S., Ramirez, D., Samson-Samuel, C., & Carnes, M. (2014). “Fair Play”: a videogame designed to address implicit race bias through active perspective taking. Games for health journal , 3 (6), 371-378.

Jackson, S. M., Hillard, A. L., & Schneider, T. R. (2014). Using implicit bias training to improve attitudes toward women in STEM. Social Psychology of Education , 17 (3), 419-438.

Johnson, T. J., Winger, D. G., Hickey, R. W., Switzer, G. E., Miller, E., Nguyen, M. B., … & Hausmann, L. R. (2017). Comparison of physician implicit racial bias toward adults versus children. Academic pediatrics , 17 (2), 120-126.

Kahneman, D. (2011). Thinking, fast and slow . Macmillan.

Lueke, A., & Gibson, B. (2016). Brief mindfulness meditation reduces discrimination. Psychology of Consciousness: Theory, Research, and Practice , 3 (1), 34.

Macrae, C. N., Bodenhausen, G. V., Milne, A. B., & Jetten, J. (1994). Out of mind but back in sight: Stereotypes on the rebound. Journal of personality and social psychology , 67 (5), 808.

Mekawi, Y., & Bresin, K. (2015). Is the evidence from racial bias shooting task studies a smoking gun? Results from a meta-analysis. Journal of Experimental Social Psychology , 61 , 120-130.

Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association Test at age 7: A methodological and conceptual review. Automatic processes in social thinking and behavior , 4 , 265-292.

Pierce, C. (1970). Offensive mechanisms. The black seventies , 265-282.

Plant, E. A., & Peruche, B. M. (2005). The consequences of race for police officers’ responses to criminal suspects. Psychological Science , 16 (3), 180-183.

Rynders, D. (2019). Battling Implicit Bias in the IDEA to Advocate for African American Students with Disabilities. Touro L. Rev. , 35 , 461.

Sinclair, S., Dunn, E., & Lowery, B. (2005). The relationship between parental racial attitudes and children’s implicit prejudice. Journal of Experimental Social Psychology , 41 (3), 283-289.

Steffens, M. C., & Jelenec, P. (2011). Separating implicit gender stereotypes regarding math and language: Implicit ability stereotypes are self-serving for boys and men, but not for girls and women. Sex Roles , 64(5-6), 324-335.

Watson, S., Appiah, O., & Thornton, C. G. (2011). The effect of name on pre‐interview impressions and occupational stereotypes: the case of black sales job applicants. Journal of Applied Social Psychology , 41 (10), 2405-2420.

Wegner, D. M., & Schneider, D. J. (2003). The white bear story. Psychological Inquiry , 14 (3-4), 326-329.

Wigboldus, D. H., Sherman, J. W., Franzese, H. L., & Knippenberg, A. V. (2004). Capacity and comprehension: Spontaneous stereotyping under cognitive load. Social Cognition , 22 (3), 292-309.

Further Information

Test yourself for bias.

  • Project Implicit (IAT Test) From Harvard University
  • Implicit Association Test From the Social Psychology Network
  • Test Yourself for Hidden Bias From Teaching Tolerance
  • How The Concept Of Implicit Bias Came Into Being With Dr. Mahzarin Banaji, Harvard University, author of Blindspot: Hidden Biases of Good People (5:28 minutes; includes a transcript)
  • Understanding Your Racial Biases With John Dovidio, PhD, Yale University, from the American Psychological Association (11:09 minutes; includes a transcript)
  • Talking Implicit Bias in Policing With Jack Glaser, Goldman School of Public Policy, University of California Berkeley (21:59 minutes)
  • Implicit Bias: A Factor in Health Communication With Dr. Winston Wong, Kaiser Permanente (19:58 minutes)
  • Bias, Black Lives and Academic Medicine Dr. David Ansell on Your Health Radio, August 1, 2015 (21:42 minutes)
  • Uncovering Hidden Biases Google talk with Dr. Mahzarin Banaji, Harvard University
  • Impact of Implicit Bias on the Justice System (9:14 minutes)
  • Students Speak Up: What Bias Means to Them (2:17 minutes)
  • Weight Bias in Health Care From Yale University (16:56 minutes)
  • Gender and Racial Bias In Facial Recognition Technology (4:43 minutes)

Journal Articles

  • An implicit bias primer Mitchell, G. (2018). An implicit bias primer. Virginia Journal of Social Policy & the Law , 25, 27–59.
  • Implicit Association Test at age 7: A methodological and conceptual review Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association Test at age 7: A methodological and conceptual review. Automatic processes in social thinking and behavior, 4 , 265-292.
  • Implicit Racial/Ethnic Bias Among Health Care Professionals and Its Influence on Health Care Outcomes: A Systematic Review Hall, W. J., Chapman, M. V., Lee, K. M., Merino, Y. M., Thomas, T. W., Payne, B. K., … & Coyne-Beasley, T. (2015). Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: a systematic review. American Journal of public health, 105 (12), e60-e76.
  • Reducing Racial Bias Among Health Care Providers: Lessons from Social-Cognitive Psychology Burgess, D., Van Ryn, M., Dovidio, J., & Saha, S. (2007). Reducing racial bias among health care providers: lessons from social-cognitive psychology. Journal of general internal medicine, 22 (6), 882-887.
  • Integrating implicit bias into counselor education Boysen, G. A. (2010). Integrating Implicit Bias Into Counselor Education. Counselor Education & Supervision, 49 (4), 210–227.
  • Cognitive Biases and Errors as Cause—and Journalistic Best Practices as Effect Christian, S. (2013). Cognitive Biases and Errors as Cause—and Journalistic Best Practices as Effect. Journal of Mass Media Ethics, 28 (3), 160–174.
  • Empathy intervention to reduce implicit bias in pre-service teachers Whitford, D. K., & Emerson, A. M. (2019). Empathy Intervention to Reduce Implicit Bias in Pre-Service Teachers. Psychological Reports, 122 (2), 670–688.




An implicit bias is an unconscious association, belief, or attitude toward any social group. Implicit biases are one reason why people often attribute certain qualities or characteristics to all members of a particular group, a phenomenon known as stereotyping.

It is important to remember that implicit biases operate almost entirely on an unconscious level. While explicit biases and prejudices are intentional and controllable, implicit biases are less so.

A person may even express explicit disapproval of a certain attitude or belief while still harboring similar biases on a more unconscious level. Such biases do not necessarily align with our own sense of self and personal identity. People can also hold positive or negative associations about their own race, gender, religion, sexuality, or other personal characteristics.

Causes of Implicit Bias

While people might like to believe that they are not susceptible to these implicit biases and stereotypes, the reality is that everyone engages in them whether they like it or not. This reality, however, does not mean that you are necessarily prejudiced or inclined to discriminate against other people. It simply means that your brain is working in a way that makes associations and generalizations.

Because we are influenced by our environment and by the stereotypes that already exist in the society into which we were born, it is generally impossible to separate ourselves entirely from their influence.

You can, however, become more aware of your unconscious thinking and the ways in which society influences you.

It is the natural tendency of the brain to sift, sort, and categorize information about the world that leads to the formation of these implicit biases. We're susceptible to bias because of these tendencies:

  • We tend to seek out patterns. Implicit bias occurs because of the brain's natural tendency to look for patterns and associations in the world. Social cognition, or our ability to store, process, and apply information about people in social situations, is dependent on this ability to form associations about the world.
  • We like to take shortcuts. Like other cognitive biases, implicit bias is a result of the brain's tendency to try to simplify the world. Because the brain is constantly inundated with more information than it could conceivably process, mental shortcuts make it faster and easier for the brain to sort through all of this data.
  • Our experiences and social conditioning play a role. Implicit biases are influenced by experiences, although these attitudes may not be the result of direct personal experience. Cultural conditioning, media portrayals, and upbringing can all contribute to the implicit associations that people form about the members of other social groups.

How Implicit Bias Is Measured

The term implicit bias was first coined by social psychologists Mahzarin Banaji and Tony Greenwald in 1995. In an influential paper introducing their theory of implicit social cognition, they proposed that social behavior was largely influenced by unconscious associations and judgments.

In 1998, Banaji and Greenwald published their now-famous Implicit Association Test (IAT) to support their hypothesis. The test utilizes a computer program to show respondents a series of images and words to determine how long it takes someone to choose between two things.

Subjects might be shown images of faces of different racial backgrounds, for example, in conjunction with either a positive word or a negative word. Subjects would then be asked to click on a positive word when they saw an image of someone from one race and to click on a negative word when they saw someone of another race.

Interpreting the Results

The researchers suggest that faster responses indicate stronger unconscious associations. If a person quickly clicks on a negative word every time they see a person of a particular race, this is taken to indicate that they hold an implicit negative bias toward individuals of that race.

In addition to a test of implicit racial attitudes, the IAT has also been utilized to measure unconscious biases related to gender, weight, sexuality, disability, and other areas. The IAT has grown in popularity and use over the last decade, yet has recently come under fire.

Among the main criticisms are findings that the test results may lack reliability: respondents may score high for racial bias on one administration and low the next time they are tested.

Also of concern is that scores on the test may not necessarily correlate with individual behavior. People may score high for a type of bias on the IAT, but those results may not accurately predict how they would relate to members of a specific social group.

Link Between Implicit Bias and Discrimination

It is important to understand that implicit bias is not the same thing as racism, although the two concepts are related. Overt racism involves conscious prejudice against members of a particular racial group and can be influenced by both explicit and implicit biases.

Other forms of discrimination that can be influenced by unconscious biases include ageism, sexism, homophobia, and ableism.

One of the benefits of being aware of the potential impact of implicit social biases is that you can take a more active role in overcoming social stereotypes, discrimination, and prejudice.

Effects of Implicit Bias

Implicit biases can influence how people behave toward the members of different social groups. Researchers have found that such bias can have effects in a number of settings, including in school, work, and legal proceedings.

Implicit Bias in School

Implicit bias can lead to a phenomenon known as stereotype threat, in which people internalize negative stereotypes about themselves based upon group associations. Research has shown, for example, that young girls often internalize implicit attitudes related to gender and math performance.

By the age of 9, girls have been shown to exhibit the unconscious belief that females prefer language over math. The stronger these implicit beliefs are, the less likely girls and women are to pursue math in school. Such unconscious beliefs are also believed to play a role in inhibiting women from pursuing careers in science, technology, engineering, and mathematics (STEM) fields.

Studies have also demonstrated that implicit attitudes can influence how teachers respond to student behavior, suggesting that implicit bias can have a powerful impact on educational access and academic achievement.

One study, for example, found that Black children—and Black boys in particular—were more likely to be expelled from school for behavioral issues. When teachers were told to watch for challenging behaviors, they were more likely to focus on Black children than on White children.

Implicit Bias In the Workplace

While the Implicit Association Test itself may have pitfalls, these problems do not negate the existence of implicit bias, or the existence and effects of bias, prejudice, and discrimination in the real world. Such prejudices can have very real and potentially devastating consequences.

One study, for example, found that when Black and White job seekers sent out similar resumes to employers, Black applicants were half as likely to be called in for interviews as White job seekers with equal qualifications.

Such discrimination is likely the result of both explicit and implicit biases toward racial groups.

Even when employers strive to eliminate potential bias in hiring, subtle implicit biases may still have an impact on how people are selected for jobs or promoted to advanced positions. Avoiding such biases entirely can be difficult, but being aware of their existence and striving to minimize them can help.

Implicit Bias in Healthcare Settings

Certainly, age, race, or health condition should not play a role in how patients get treated. However, implicit bias can influence the quality of healthcare patients receive and have long-term impacts, including suboptimal care, adverse outcomes, and even death.

For example, one study published in the American Journal of Public Health found that physicians with high scores in implicit bias tended to dominate conversations with Black patients and, as a result, the Black patients had less confidence and trust in the provider and rated the quality of their care lower.  

Researchers continue to investigate implicit bias in relation to other ethnic groups as well as specific health conditions, including type 2 diabetes, obesity, mental health, and substance use disorders.

Implicit Bias in Legal Settings

Implicit biases can also have troubling implications in legal proceedings, influencing everything from initial police contact all the way through sentencing. Research has found that there is an overwhelming racial disparity in how Black defendants are treated in criminal sentencing.  

Not only are Black defendants less likely to be offered plea bargains than White defendants charged with similar crimes, but they are also more likely to receive longer and harsher sentences than White defendants.

Strategies to Reduce the Impact of Implicit Bias

Implicit biases impact behavior, but there are things that you can do to reduce your own bias. Some ways to reduce the influence of implicit bias include:

  • Focus on seeing people as individuals. Rather than focusing on stereotypes to define people, spend time considering them on a more personal, individual level.
  • Work on consciously changing your stereotypes. If you do recognize that your response to a person might be rooted in biases or stereotypes, make an effort to consciously adjust your response.
  • Take time to pause and reflect. In order to reduce reflexive reactions, take time to reflect on potential biases and replace them with positive examples of the stereotyped group.
  • Adjust your perspective. Try seeing things from another person's point of view. How would you respond if you were in the same position? What factors might contribute to how a person acts in a particular setting or situation?
  • Increase your exposure. Spend more time with people of different racial backgrounds. Learn about their culture by attending community events or exhibits.
  • Practice mindfulness. Try meditation, yoga, or focused breathing to increase mindfulness and become more aware of your thoughts and actions.

While implicit bias is difficult to eliminate altogether, there are strategies that you can utilize to reduce its impact. Taking steps such as actively working to overcome your biases, taking other people's perspectives, seeking greater diversity in your life, and building your awareness about your own thoughts are a few ways to reduce the impact of implicit bias.

A Word From Verywell

Implicit biases can be troubling, but they are also a pervasive part of life. Perhaps more troubling, your unconscious attitudes may not necessarily align with your declared beliefs. While people are more likely to hold implicit biases that favor their own in-group, it is not uncommon for people to hold biases against their own social group as well.

The good news is that these implicit biases are not set in stone. Even if you do hold unconscious biases against other groups of people, it is possible to adopt new attitudes, even on the unconscious level. This process is not necessarily quick or easy, but being aware of the existence of these biases is a good place to start making a change.

Jost JT. The existence of implicit bias is beyond reasonable doubt: A refutation of ideological and methodological objections and executive summary of ten studies that no manager should ignore . Research in Organizational Behavior . 2009;29:39-69. doi:10.1016/j.riob.2009.10.001

Greenwald AG, Mcghee DE, Schwartz JL. Measuring individual differences in implicit cognition: The implicit association test . J Pers Soc Psychol. 1998;74(6):1464-1480. doi:10.1037/0022-3514.74.6.1464

Sabin J, Nosek BA, Greenwald A, Rivara FP. Physicians' implicit and explicit attitudes about race by MD race, ethnicity, and gender . J Health Care Poor Underserved. 2009;20(3):896-913. doi:10.1353/hpu.0.0185

Capers Q, Clinchot D, McDougle L, Greenwald AG. Implicit racial bias in medical school admissions . Acad Med . 2017;92(3):365-369. doi:10.1097/ACM.0000000000001388

Kiefer AK, Sekaquaptewa D. Implicit stereotypes and women's math performance: How implicit gender-math stereotypes influence women's susceptibility to stereotype threat .  Journal of Experimental Social Psychology. 2007;43(5):825-832. doi:10.1016/j.jesp.2006.08.004

Steffens MC, Jelenec P, Noack P. On the leaky math pipeline: Comparing implicit math-gender stereotypes and math withdrawal in female and male children and adolescents .  Journal of Educational Psychology. 2010;102(4):947-963. doi:10.1037/a0019920

Edward Zigler Center in Child Development & Social Policy, Yale School of Medicine. Implicit Bias in Preschool: A Research Study Brief .

Pager D, Western B, Bonikowski B. Discrimination in a low-wage labor market: A field experiment . Am Sociol Rev. 2009;74(5):777-799. doi:10.1177/000312240907400505

Malinen S, Johnston L. Workplace ageism: Discovering hidden bias . Exp Aging Res. 2013;39(4):445-465. doi:10.1080/0361073X.2013.808111

Cooper LA, Roter DL, Carson KA, et al. The associations of clinicians' implicit attitudes about race with medical visit communication and patient ratings of interpersonal care . Am J Public Health . 2012;102(5):979-87. doi:10.2105/AJPH.2011.300558

Leiber MJ, Fox KC. Race and the impact of detention on juvenile justice decision making .  Crime & Delinquency. 2005;51(4):470-497. doi:10.1177/0011128705275976

Van Ryn M, Hardeman R, Phelan SM, et al. Medical school experiences associated with change in implicit racial bias among 3547 students: A medical student CHANGES study report . J Gen Intern Med. 2015;30(12):1748-1756. doi:10.1007/s11606-015-3447-7

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


[Photo: Mahzarin Banaji opening the symposium by recounting her “implicit association” experiments at Yale and Harvard. Kris Snibbe/Harvard Staff Photographer]

Turning a light on our implicit biases

Brett Milano

Harvard Correspondent

Social psychologist details research at University-wide faculty seminar

Few people would readily admit that they’re biased when it comes to race, gender, age, class, or nationality. But virtually all of us have such biases, even if we aren’t consciously aware of them, according to Mahzarin Banaji, Cabot Professor of Social Ethics in the Department of Psychology, who studies implicit biases. The trick is figuring out what they are so that we can interfere with their influence on our behavior.

Banaji was the featured speaker at an online seminar Tuesday, “Blindspot: Hidden Biases of Good People,” which was also the title of Banaji’s 2013 book, written with Anthony Greenwald. The presentation was part of Harvard’s first-ever University-wide faculty seminar.

“Precipitated in part by the national reckoning over race, in the wake of George Floyd, Breonna Taylor and others, the phrase ‘implicit bias’ has almost become a household word,” said moderator Judith Singer, Harvard’s senior vice provost for faculty development and diversity. Owing to the high interest on campus, Banaji was slated to present her talk on three different occasions, with the final one at 9 a.m. Thursday.

Banaji opened on Tuesday by recounting the “implicit association” experiments she had done at Yale and at Harvard. The assumptions underlying the research on implicit bias derive from well-established theories of learning and memory and the empirical results are derived from tasks that have their roots in experimental psychology and neuroscience. Banaji’s first experiments found, not surprisingly, that New Englanders associated good things with the Red Sox and bad things with the Yankees.

She then went further by replacing the sports teams with gay and straight, thin and fat, and Black and white. The responses were sometimes surprising: Shown a group of white and Asian faces, a test group at Yale associated the former more with American symbols though all the images were of U.S. citizens. In a further study, the faces of American-born celebrities of Asian descent were associated as less American than those of white celebrities who were in fact European. “This shows how discrepant our implicit bias is from even factual information,” she said.

“How can an institution that is almost 400 years old not reveal a history of biases?” Banaji asked, citing President Charles Eliot’s words on Dexter Gate, “Depart to serve better thy country and thy kind,” and asking the audience to think about what he may have meant by the last two words.

She cited Harvard’s current admissions strategy of seeking geographic and economic diversity as an example of clear progress — if, as she said, “we are truly interested in bringing the best to Harvard.” She added, “We take these actions consciously, not because they are easy but because they are in our interest and in the interest of society.”

Moving beyond racial issues, Banaji suggested that we sometimes see only what we believe we should see. To illustrate she showed a video clip of a basketball game and asked the audience to count the number of passes between players. Then the psychologist pointed out that something else had occurred in the video — a woman with an umbrella had walked through — but most watchers failed to register it. “You watch the video with a set of expectations, one of which is that a woman with an umbrella will not walk through a basketball game. When the data contradicts an expectation, the data doesn’t always win.”

Expectations, based on experience, may create associations, such as equating “Valley Girl uptalk” with being “not too bright.” But when a quirky way of speaking spreads to a large number of young people of certain generations, it stops being a useful guide. And yet, Banaji said, she has caught herself dismissing a great idea presented in uptalk. She stressed that the appropriate course of action is not to ask the speaker to change the way she talks, but rather for her and other decision makers to recognize that using language and accents to judge ideas is something people do at their own peril.

Banaji closed the talk with a personal story that showed how subtler biases work: She’d once turned down an interview because she had issues with the magazine for which the journalist worked.

The writer accepted this and mentioned she’d been at Yale when Banaji taught there. The professor then surprised herself by agreeing to the interview based on this fragment of shared history that ought not to have influenced her. She urged her colleagues to reflect on seemingly positive actions, such as whom they choose to help, that can perpetuate the status quo.

“You and I don’t discriminate the way our ancestors did,” she said. “We don’t go around hurting people who are not members of our own group. We do it in a very civilized way: We discriminate by who we help. The question we should be asking is, ‘Where is my help landing? Is it landing on the most deserved, or just on the one I shared a ZIP code with for four years?’”

To subscribe to short educational modules that help to combat implicit biases, visit outsmartinghumanminds.org .



Implicit Bias

Research on “implicit bias” suggests that people can act on the basis of prejudice and stereotypes without intending to do so. While psychologists in the field of “implicit social cognition” study consumer products, self-esteem, food, alcohol, political values, and more, the most striking and well-known research has focused on implicit biases toward members of socially stigmatized groups, such as African-Americans, women, and the LGBTQ community. [ 1 ] For example, imagine Frank, who explicitly believes that women and men are equally suited for careers outside the home. Despite his explicitly egalitarian belief, Frank might nevertheless behave in any number of biased ways, from distrusting feedback from female co-workers to hiring equally qualified men over women. Part of the reason for Frank’s discriminatory behavior might be an implicit gender bias. Psychological research on implicit bias has grown steadily (§1), raising metaphysical (§2), epistemological (§3), and ethical questions (§4). [ 2 ]

1. Introduction: History and Measures of Implicit Social Cognition

1.1 History of the Field

While Allport’s (1954) The Nature of Prejudice remains a touchstone for psychological research on prejudice, the study of implicit social cognition has two distinct and more recent sets of roots. [ 3 ] The first stems from the distinction between “controlled” and “automatic” information processing made by cognitive psychologists in the 1970s (e.g., Shiffrin & Schneider 1977). While controlled processing was thought to be voluntary, attention-demanding, and of limited capacity, automatic processing was thought to unfold without attention, to have nearly unlimited capacity, and to be hard to suppress voluntarily (Payne & Gawronski 2010; see also Bargh 1994). In important early work on implicit cognition, Fazio and colleagues showed that attitudes can be understood as activated by either controlled or automatic processes. In Fazio’s (1995) “sequential priming” task, for example, following exposure to social group labels (e.g., “black”, “women”, etc.), subjects’ reaction times (or “response latencies”) to stereotypic words (e.g., “lazy” or “nurturing”) are measured. People respond more quickly to concepts closely linked together in memory, and most subjects in the sequential priming task are quicker to respond to words like “lazy” following exposure to “black” than “white”. Researchers standardly take this pattern to indicate a prejudiced automatic association between semantic concepts. The broader notion embedded in this research was that subjects’ automatic responses were thought to be “uncontaminated” by controlled or strategic responses (Amodio & Devine 2009).

While this first stream of research focused on automaticity, a second stream focused on (un)consciousness. Many studies demonstrated that awareness of stereotypes can affect social judgment and behavior in relative independence from subjects’ reported attitudes (Devine 1989; Devine & Monteith 1999; Dovidio & Gaertner 2004; Greenwald & Banaji 1995; Banaji et al. 1993). These studies were influenced by theories of implicit memory (e.g., Jacoby & Dallas 1981; Schacter 1987), leading to Greenwald & Banaji’s original definition of “implicit attitudes” as

introspectively unidentified (or inaccurately identified) traces of past experience that mediate favorable or unfavorable feeling, thought, or action toward social objects. (1995: 8)

The guiding idea here, as Dovidio and Gaertner (1986) put it, is that in the modern world prejudice has been “driven underground,” that is, out of conscious awareness. This idea has led to the common view that what makes a bias implicit is that a person is unwilling or unable to report it. Recent findings have challenged this view, however (§3.1).

1.2 Implicit Measures

What a person says is not necessarily a good representation of the whole of what she feels and thinks, nor of how she will behave. Arguably, the central advance of research on implicit social cognition is the ability to assess people’s thoughts, feelings, and behavior without having to ask them directly, “what do you think/feel about X?” or “what would you do in X situation?”

Implicit measures, then, might be thought of as instruments that assess people’s thoughts, feelings, and behavior indirectly, that is, without relying on “self-report.” This is too quick, however. For example, a survey that asks “what do you think of black people” is explicit and direct, in the sense that the subject’s judgment is both explicitly reported and the subject is being directly asked about the topic of interest to the researchers. However, a survey that asks “what do you think about Darnell” (i.e., a person with a stereotypically black name) is explicit and indirect, because the subject’s judgment is explicitly reported but the content of what is being judged (i.e., the subject’s attitudes toward race) is inferred by the researcher. The distinction between direct and indirect measures is also relative rather than absolute. Even in some direct measures, such as personality inventories, subjects may not be completely aware of what is being studied.

In the literature, “implicit” is used to refer to at least four distinct things (Gawronski & Brannon 2017): (1) a distinctive psychological construct, such as an “implicit attitude,” which is assessed by a variety of instruments; (2) a family of instruments, called “implicit measures,” that assess people’s thoughts and feelings in a specific way (e.g., in a way that minimizes subjects’ reliance on introspection and their ability to respond strategically); (3) a set of cognitive and affective processes—“implicit processes”—that affect responses on a variety of measures; and (4) a kind of evaluative behavior—e.g., a categorization judgment—elicited by specific circumstances, such as cognitive load. In this entry, I will use “implicit” in the senses of (2) and (4), unless otherwise noted. One virtue of this approach is that it allows one to remain agnostic about the nature of the phenomena implicit measures assess. [ 4 ] Consider Frank again. His implicit gender bias may be assessed by several different instruments, such as sequential priming or the “Implicit Association Test” (IAT; Greenwald et al. 1998). The IAT—the most well-known implicit test—is a reaction time measure. In a standard IAT, the subject attempts to sort words or pictures into categories as fast as possible while making as few errors as possible. In the images below, the correct answers would be left, right, left, right.

[a black box in the center is the word 'Michelle' in white, on the top left are the words 'Female or [in white]  Family [in green]', on the top right are the words 'Male or [in white] Career [in green]']

All images are copyright of Project Implicit and reproduced here with permission.

An IAT score is computed by comparing speed and error rates on the “blocks” (or trials) in which the pairing of concepts is consistent with common stereotypes (images 1 and 3) to the blocks in which the pairing of the concepts is inconsistent with common stereotypes (images 2 and 4). If he is typical of most subjects, Frank will be faster and make fewer errors on stereotype-consistent trials than stereotype-inconsistent trials. While this “gender-career” IAT pairs concepts (e.g., “male” and “career”), other IATs, such as the “race-evaluation” IAT, pair a concept to an evaluation (e.g., “black” and “bad”). Other IATs assess body image, age, sexual orientation, and so on. As of 2019, approximately 26 million IATs have been taken (although it is unclear if this number represents 26 million unique participants or 26 million tests taken or started; Lai p.c.). One review (Nosek et al. 2007), which tested over 700,000 subjects on the race-evaluation IAT, found that over 70% of white participants more easily associated black faces with negative words (e.g., war, bad) and white faces with positive words (e.g., peace, good). The researchers consider this an implicit preference for white faces over black faces. [ 5 ]
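The block comparison can be made concrete with a simplified D-score-style computation: the latency difference between block types, scaled by the pooled standard deviation of all trials. This is only a sketch; the published IAT scoring algorithm adds error penalties, trial exclusions, and block-wise pooling omitted here, and the latencies below are invented.

```python
import statistics

def d_score(consistent_ms, inconsistent_ms):
    """Simplified IAT-style D score: the latency difference between
    stereotype-inconsistent and stereotype-consistent blocks, divided
    by the standard deviation of all trials pooled. Positive values
    mean slower responses on stereotype-inconsistent pairings."""
    pooled_sd = statistics.stdev(consistent_ms + inconsistent_ms)
    diff = statistics.mean(inconsistent_ms) - statistics.mean(consistent_ms)
    return diff / pooled_sd

# Hypothetical block latencies (ms) for a respondent like "Frank".
consistent = [600, 650, 700]    # e.g., male+career / female+family pairings
inconsistent = [800, 850, 900]  # e.g., male+family / female+career pairings
print(round(d_score(consistent, inconsistent), 2))  # prints 1.69
```

Scaling by the pooled variability is what makes scores comparable across respondents who differ in overall speed.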

Although the IAT remains the most popular implicit measure, it is far from the only one. Other prominent implicit measures, many of which are derivations of sequential priming, are semantic priming (Banaji & Hardin 1996) and the Affect Misattribution Procedure (AMP; Payne et al. 2005). Also, a “second generation” of categorization-based measures (like the IAT) has been developed. For example, the Go/No-go Association Task (GNAT; Nosek & Banaji 2001) presents subjects with one target object rather than two in order to determine whether preferences or aversions are primarily responsible for scores on the standard IAT (i.e., the ease of pairing good words with white faces and bad words with black faces, or the difficulty of pairing good words with black faces and bad words with white faces; Brewer 1999).

A notable advance in the psychometrics of implicit bias has been the advent of multinomial (or formal process) models, which identify distinct processes contributing to performance on implicit measures. For example, elderly people tend to show greater bias on the race-evaluation IAT compared with younger people, but this may be due to their having stronger preferences for whites or having weaker control over their biased responding (Nosek et al. 2011). Multinomial models, like the Quadruple Process Model (Conrey et al. 2005), are used to tease apart these possibilities. The Quad model identifies four distinct processes that contribute to responses: (1) the automatic activation of an association; (2) the subject’s ability to determine a correct response (i.e., a response that reflects one’s subjective assessment of truth); (3) the ability to override automatic associations; and (4) general response biases (e.g., favoring right-handed responses). Multinomial modeling has made clear that implicit measures are not “process pure,” i.e., they do not tap into a single unified psychological process.
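The tree logic of such models can be illustrated with a deliberately simplified multinomial-processing-tree sketch. The parameter names echo the Quad model's four processes, but the branch structure and numbers below are illustrative assumptions, not the published parameterization.

```python
def p_correct(ac, d, ob, g, compatible):
    """Toy multinomial processing tree, loosely patterned on the Quad
    model's four processes: ac = an automatic association activates,
    d = the correct response is determined, ob = an activated
    association is overcome, g = a guess lands on the correct key.
    Returns the predicted probability of a correct response."""
    controlled = d + (1 - d) * g  # correct via determination or lucky guess
    if compatible:
        # The activated association points at the correct answer anyway.
        return ac + (1 - ac) * controlled
    # On incompatible trials an activated association must be overcome
    # for the determined response to win; otherwise only the branches
    # where the association never fired can yield a correct response.
    return ac * d * ob + (1 - ac) * controlled

# With identical parameters, the tree predicts more errors on
# incompatible trials, mirroring the IAT pattern described above.
print(round(p_correct(0.6, 0.8, 0.5, 0.5, compatible=True), 3))   # prints 0.96
print(round(p_correct(0.6, 0.8, 0.5, 0.5, compatible=False), 3))  # prints 0.6
```

Fitting such equations to observed error rates is what lets researchers tease apart, for example, stronger associations from weaker control in elderly respondents.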

While there is not consensus about what implicit measures capture (§2), it is clear that they provide at least three kinds of information (Gawronski & Hahn 2019). The first is information about dissociation with more explicit, direct measures. Correlations between implicit and explicit measures tend to be relatively low ( r = .2–.25; Hofmann et al. 2005; Cameron et al. 2012), although these relations are significantly affected by methodological practices, such as comparing non-corresponding implicit and explicit measures (e.g., an implicit measure of gender stereotypes and an explicit “feelings thermometer” toward women). It is important to note the breadth of research in this vein; dissociations between implicit and explicit measures are found in the study of personality (e.g., Vianello et al. 2010), attitudes toward alcohol (e.g., de Houwer et al. 2004), phobias (Teachman & Woody 2003), and more. Second, implicit measures can be used as dependent variables in experiments. Theories about the formation and change of attitudes, for example, have focused on differential effects of manipulations, such as counter-attitudinal information, on implicit and explicit measures (e.g., Gawronski & Bodenhausen 2006; Petty 2006). Third, implicit measures are used to predict behavior. Philosophers have been especially interested in the relationship between implicit bias and discriminatory behavior, particularly when the discriminatory behavior conflicts with a person’s reported beliefs (as in the “Frank” case above). Studies report relationships between implicit bias and behavior in a huge variety of social contexts, from hiring to policing to medicine to teaching and more (for an incomplete list see Table 1 in Jost et al. 2009). There is also voluminous, varied, and on-going discussion about how well implicit measures predict behavior, along with several related critical assessments of the information implicit measures provide (§5).
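To make those effect sizes concrete, a correlation of r ≈ .2 means implicit and explicit scores share only about 4% of their variance (r²). A minimal Pearson correlation, computed on invented score lists, looks like this:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists
    (population formula). The score lists here are hypothetical."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))

# Perfectly aligned scores give r = 1; dissociated implicit/explicit
# pairs correlate far more weakly (around .2 in the cited reviews).
print(round(pearson_r([1, 2, 3], [2, 4, 6]), 2))  # prints 1.0
```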

2. Metaphysics

“Implicit bias” is a term of art, used in a variety of ways. In this entry, the term is used to refer to the family of evaluative judgments and behavior assessed by implicit measures (e.g., categorization judgments on an IAT). These measures mimic some relevant aspects of judgment and decision-making outside the lab (e.g., time pressure). But what do these measures measure? With some blurry boundaries, philosophical and psychological theories can be divided into five groups. Implicit measures might provide information about attitudes (§2.1), implicit processes (§2.2), beliefs (§2.3), traits (§2.4), or situations (§2.5).

2.1 Attitudes

The idea that people’s attitudes are the cause of implicit bias is pervasive. The term “attitudes” tends to be used differently in psychology and philosophy, however. In psychology, attitudes are akin to preferences (i.e., likings and dislikings); the term does not refer to propositional states per se (i.e., mental states that are thought to bear a relationship to a proposition), as it does in philosophy. Most attitudinal theories of implicit bias use the term in the psychologist’s sense, although variations will be noted below.

2.1.1 Dual Attitudes in Psychology

Early and influential theories posited that people hold two distinct attitudes in mind toward the same object, one implicit and the other explicit (Greenwald & Banaji 1995; Wilson et al. 2000). “Explicit attitudes” are commonly identified with verbally reported attitudes, in this vein, while “implicit attitudes” are those that a person is unwilling or unable to report. Evidence for theories of dual attitudes stems largely from two sources. The first are anecdotal reports of surprise and consternation that people sometimes express after being informed of their performance on an implicit measure (e.g., Banaji 2011; Krickel 2018). These experiences suggest that people discover their putative implicit attitudes by taking the relevant tests, just like one learns about one’s cholesterol by taking the relevant tests. The second source of evidence for dual-attitude views are dissociations between implicit and explicit measures (§1.2). These suggest that implicit and explicit measures may be tapping into distinct representations of the same attitude-object (e.g., “the elderly”).

A central challenge for theories of this sort is whether people truly are unaware of their implicit biases, and if so, in what way (e.g., if people are unaware of the source, content, or behavioral effects of their attitudes; §3.1). There may be reasons to posit unconscious representations in the human mind independent of whether people are or are not aware of their implicit biases, of course. But if people are aware of their implicit biases, then implicit measures are most likely not assessing unconscious “dual” attitudes.

2.1.2 Dual Attitudes in Philosophy

Some philosophers have proposed that implicit measures assess a distinct kind of “action-oriented” attitude, which is different from ordinary attitudes, but not necessarily in terms of being unconscious. The core idea here is that implicit attitudes link representations with behavioral impulses.[6] Gendler’s (2008a,b, 2011, 2012) account of “alief,” a sui generis mental state comprised of tightly woven co-activating representational (R), affective (A), and behavioral (B) components, is emblematic of this approach. Gendler argues that the R–A–B components of alief are “bundled” together or “cluster” in such a way that when an implicitly biased person sees a black face in a particular context, for example, the agent’s representation will automatically activate particular feelings and behaviors (i.e., an R–A–B cluster). This is in contrast to the “combinatoric” nature of ordinary beliefs and desires, that is, that any belief could, in principle, be combined with any desire. So while the belief that “that is a black man” is not fixed to any particular feelings or behavior, an alief will have content like, “Black man! Scary! Avoid!”

“To have an alief”, Gendler writes, is

to a reasonable approximation, to have an innate or habitual propensity to respond to an apparent stimulus in a particular way. It is to be in a mental state that is… associative, automatic and arational. As a class, aliefs are states that we share with non-human animals; they are developmentally and conceptually antecedent to other cognitive attitudes that the creature may go on to develop. Typically, they are also affect-laden and action-generating. (2008b: 557, original emphasis; see also 2008a: 641)

According to Gendler, aliefs explain a wide array of otherwise puzzling cases of belief-behavior discordance, including not only implicit bias, but also phobias, fictional emotions, and bad habits (2008b: 554). In fact, Gendler suggests (2008a: 663) that aliefs are causally responsible for much of the “moment-by-moment management” of human behavior, whether that behavior is belief-concordant or not.

Critics have raised a number of concerns about this approach, in particular whether putative aliefs form a unified kind (Egan 2011; Currie & Ichino 2012; Doggett 2012; Nagel 2012; Mandelbaum 2013). Others have proposed alternate conceptions of action-oriented dual attitudes. Brownstein and Madva (2012a,b; see also Madva & Brownstein 2018 and Brownstein 2018), for example, propose that implicit attitudes are comprised of F-T-B-A components: the perception of a salient Feature triggers automatic low-level feelings of affective Tension, which are associated in turn with specific Behavioral responses, which either do or do not Alleviate the agent’s felt tension. This approach shares with Gendler’s the idea that aliefs/implicit attitudes differ in kind from beliefs/explicit attitudes. Moreover, the difference between these putative kinds of states is not necessarily the agent’s introspective access to them. Gendler proposes that while paradigmatic beliefs update when the agent acquires new relevant information, paradigmatic aliefs don’t. In contrast, Brownstein and Madva argue that implicit attitudes do update in the face of new information—this is the feed-forward function of “alleviation”—and thus can automatically yet flexibly modify and improve over time. Thus, for Brownstein and Madva, implicit attitudes are implicated not only in bias and prejudice, but also in skillful, intelligent, and even ethical action.[7]

But while implicit attitudes aren’t ballistic, information-insensitive reflexes, on Brownstein and Madva’s view, they also don’t update in the same way as ordinary attitudes. Brownstein and Madva draw the distinction in terms of two key features. First, implicit attitudes are paradigmatically insensitive to the logical form in which information is presented. For example, subjects have been shown to form equivalent implicit attitudes on the basis of information and the negation of that information (e.g., Gawronski et al. 2008).
Second, implicit attitudes fail to respond to the semantic contents of other mental states in a systematic way; they appear to be “inferentially impoverished.” For example, implicit attitudes are implicated in behaviors for which it is difficult to give an inferential explanation (e.g., Dovidio et al. 1997) and implicit attitudes change in response to irrelevant information (e.g., Gregg et al. 2006; Han et al. 2006). Levy (2012, 2015)—who argues that implicit attitudes are “patchy endorsements”—makes similar claims about the ways in which implicit attitudes do and do not update, although he does not argue that these kinds of states are “action-oriented” in the way that Gendler and Brownstein and Madva do. Debate about these findings is ongoing (§2.3).

2.1.3 Single Attitudes

Some theories posit the existence of a singular representation of attitude-objects. According to MODE (“Motivation and Opportunity as Determinants”; Fazio 1990; Fazio & Towles-Schwen 1999; Olson & Fazio 2009) and the related MCM (“Meta-Cognitive Model”; Petty 2006; Petty et al. 2007), attitudes are associations between objects and “evaluative knowledge” of those objects. MODE posits one singular representation underlying the behavioral effects measured by implicit and explicit tests. Thus, MODE denies the distinction between implicit and explicit attitudes. The difference between implicit and explicit measures, then, reflects a difference in the control that subjects have over the measured behavior. Control is understood in terms of motivation and opportunity to deliberate. When an agent has low motivation or opportunity to engage in deliberative thought, her automatically activated attitudes—which might be thought of as her “true” attitudes—will guide her behavior and judgment. Implicit measures manufacture this situation (of low control due to low motivation and/or opportunity to deliberate). Explicit measures, by contrast, increase non-attitudinal contributions to test performance. MODE therefore provides empirically-testable predictions about the conditions under which a person’s performance on implicit and explicit measures will converge and diverge, as well as predictions about the conditions under which implicit and explicit measures will and will not predict behavior (see Gawronski & Brannon 2017 for review).

2.2 Implicit Processes

Influenced by dual process theories of mind, RIM (“Reflective-Impulsive Model”; Strack & Deutsch 2004) and APE (“Associative-Propositional Evaluation”; Gawronski & Bodenhausen 2006, 2011) suggest that implicit measures assess distinctive cognitive processes. The central distinction at the heart of both RIM and APE is between “associative” and “propositional” processes. Associative processes are said to underlie an impulsive system that functions according to classic associationist principles of similarity and contiguity. Implicit measures are thought of as assessing the momentary accessibility of elements or nodes of a network of associations. This network produces spontaneous evaluative responses to stimuli. Propositional processes, on the other hand, underlie a reflective system that validates the information provided by activated associations. Explicit measures are thought to capture this process of validation, which is said to operate according to agents’ syllogistic reasoning and judgments of logical consistency. In sum, the key distinction between associative and propositional processes according to RIM and APE is that propositional processing alone depends on an agent’s assessment of the truth of a given representation.[8] APE in particular aims to explain the interactions between and mutual influences of associative and propositional processes in judgment and behavior.

RIM and APE bear resemblance to the dual attitudes theories in philosophy discussed above. Indeed, Bodenhausen & Gawronski (2014: 957) write that the “distinction between associative and propositional evaluations is analogous to the distinction between ‘alief’ and belief in recent philosophy of epistemology.” It is important to keep in mind, however, that RIM and APE are not attitudinal theories. APE, for example, posits two distinct kinds of process—associative and propositional processes—that give rise to two kinds of evaluative responses to stimuli—implicit and explicit. It does not posit the existence of two distinct attitudes or two distinct co-existing representations of the same entity. It is also important to note that the distinction between associative and propositional processes can be understood in at least three distinct senses: as applying to the way in which information is learned, stored, or expressed (Gawronski et al. 2017). At present, evidence is mixed for dissociation between associative and propositional processing in the learning and storage of information, while it is stronger for dissociation in the behavioral expression of stored information (Brownstein et al. 2019).

2.3 Beliefs

Some have argued that familiar notions of belief, desire, and pretense can in fact explain what neologisms like “implicit attitudes” are meant to elucidate (Egan 2011; Kwong 2012; Mandelbaum 2013). Most defend some version of what Schwitzgebel (2010) calls Contradictory Belief (Egan 2008, 2011; Huebner 2009; Gertler 2011; Huddleston 2012; Muller & Bashour 2011; Mandelbaum 2013, 2014, forthcoming).[9] Drawing upon theories of the “fragmentation” of the mind (Lewis 1982; Stalnaker 1984), Contradictory Belief holds that implicit and explicit measures both reflect what a person believes, and that these different sets of beliefs may be causally responsible for different behavior in different contexts (Egan 2008). In short, if a person behaves in a manner consistent with the belief that black men are dangerous, it is because they believe that black men are dangerous (notwithstanding what they say they believe).

In the psychological literature, De Houwer and colleagues defend a view that can be thought of as supporting Contradictory Belief (Mitchell et al. 2009; Hughes et al. 2011; De Houwer 2014). On this model, propositions[10] have three defining features: (1) propositions are statements about the world that specify the nature of the relation between concepts (e.g., “I am good” and “I want to be good” are propositions that involve the same two concepts—“me” and “good”—but differ in the way that the concepts are related); (2) propositions can be formed rapidly on the basis of instructions or inferences; and (3) subjects are conscious of propositions (De Houwer 2014). On the basis of data consistent with these criteria—for example, responses on implicit measures are affected by one-shot instruction—De Houwer (2014) argues that implicit measures capture propositional states (i.e., beliefs).[11] This claim represents an application of Mitchell and colleagues’ (2009) broader argument that all learning is propositional (i.e., there is no case in which learning is the result of the automatic associative linking of mental representations). One reason philosophers have been interested in this view is due to its resonance with classic debates in the philosophy of mind between empiricists and rationalists, behaviorists and cognitivists, and so on.

Another belief-based approach argues that implicit biases should be understood as cognitive “schemas.” Schemas are clusters of culturally shared concepts and beliefs. More precisely, schemas are abstract knowledge structures that specify the defining features and attributes of a target (Fiske & Linville 1980). The term “mother”, for example, invokes a schema that ascribes a collection of attributes to the person so labelled (Haslanger 2015). On some accounts, schemas are “coldly” cognitive (Valian 2005), and so in the psychologist’s sense, they are not attitudes. Rather, schemas are tools for social categorization, and while schemas may help to organize and interpret feelings and motivations, they are themselves affectless. One advantage of focusing on schemas is that doing so emphasizes that implicit bias is not a matter of straightforward antipathy toward members of socially stigmatized groups.

A separate version of the generic belief approach stems from recent work in the philosophy of language. This approach focuses on stereotypes that involve generalizing extreme or horrific behavior from a few individuals to groups. Generalizations such as “pit bulls maul children” or “Muslims are terrorists” can be thought of as a particular kind of generic statement, which Leslie (2017) calls a “striking property generic”. This subclass of generics is defined by having predicates that express properties that people typically have a strong interest in avoiding. Building on earlier work on the cognitive structure and semantics of generics (Leslie 2007, 2008), Leslie notes a particularly insidious feature of social stereotyping: even if just a few members of what is perceived to be an essential kind (e.g., pit bulls, Muslims) exhibit a harmful or dangerous property, then a generic that attributes the property to the kind likely will be judged to be true. This is only the case with striking properties, however. As Leslie (2017) points out, it takes far fewer instances of murder for one to be considered a murderer than it does instances of anxiety to be considered a worrier. Striking property generics may thus illuminate some social stereotypes (e.g., “black men are rapists”) better than others (e.g., “black men are athletic”). Beeghly (2014), however, construes generics as expressions of cognitive schemas, which may broaden the scope of explanation by way of generic statements. In all of these cases, generics involve an array of doxastic properties. Generics involve inferences to dispositions, for example (Leslie 2017). That is, generic statements about striking properties will usually be judged true if and only if some members of the kind possess the property and other members of the kind are judged to be disposed to possess it.

The most explicit defense of Contradictory Belief has been via a theory of “Spinozan Belief Fixation” (SBF; Gilbert 1991; Egan 2008, 2011; Huebner 2009; Mandelbaum 2011, 2013, 2014, 2016). Proponents of SBF are inspired by Spinoza’s rejection of the concept of the will as a cause of free action (Huebner 2009: 68), an idea which is embodied in what they call the theory of “Cartesian Belief Fixation” (CBF). CBF holds that ordinary agents are capable of evaluating the truth of an idea (or representation, or proposition) delivered to the mind (via sensation or imagination) before believing or disbelieving it. In other words, according to CBF, agents can choose to believe or disbelieve P via deliberation or judgment. SBF, on the other hand, holds that as soon as an idea is presented to the mind, it is believed. Beliefs on this view are understood to be unconscious propositional attitudes that are formed automatically as soon as an agent registers or tokens their content. For example, one cannot entertain or consider or imagine the proposition that “dogs are made out of paper” without immediately and unavoidably believing that dogs are made out of paper, according to SBF (Mandelbaum 2014). More pointedly, one cannot entertain or imagine the stereotype that “women are bad at math” without believing that women are bad at math. As Mandelbaum (2014) puts it, the automaticity of believing according to SBF explains why people are likely to have many contradictory beliefs; in order to reject P, one must already believe P.[12]

SBF is strongly revisionist with respect to the ordinary concept of belief (but see Helton (forthcoming) for a similarly spirited but less revisionist view).[13] Notwithstanding this, the central line of debate about SBF’s account of implicit bias—as well as about belief-based accounts of implicit social cognition generally—focuses on the fact that people’s performance on implicit measures is sometimes unresponsive to the kinds of reinforcement-learning-based interventions that ought to affect associative processes and/or states; meanwhile, performance on implicit measures sometimes appears to be responsive to the kinds of logic- and persuasion-based interventions thought to affect doxastic states (e.g., De Houwer 2009, 2014; Hu et al. 2017; Mann & Ferguson 2017; Van Dessel et al. 2018; for additional discussion see Mandelbaum 2013, 2016; Gawronski et al. 2017; Brownstein et al. 2019). Caution is needed in drawing strong conclusions about cognitive structure from these behavioral data, however (Levy 2015; Madva 2016c; Byrd forthcoming; Brownstein et al. 2019). As noted above (§1.2), implicit measures are not process-pure. Modeling techniques for disentangling the multiple causal contributions to performance on implicit measures may help to move these debates forward (e.g., Conrey et al. 2005; Hütter & Sweldens 2018).

2.4 Traits

As is the case with terms like “attitude” and “propositional,” psychologists and philosophers tend to use the term “trait” in different ways. In psychology, trait-like constructs are stable over time and across situations. If you have always disliked eating pork, and never eat it no matter the context, then your feelings toward pork are trait-like. If you sometimes decline to eat pork but sometimes indulge, depending on the company or your mood, then your feelings are more “state”-like. In the psychologist’s sense, significant evidence suggests that implicit bias is more state-like than trait-like. Multiple longitudinal studies have found that individuals’ scores on implicit measures vary significantly over days, weeks, and months, much more so than individuals’ scores on corresponding explicit measures (Cooley & Payne 2017; Cunningham et al. 2001; Devine et al. 2012; Gawronski et al. 2017). Of course, the significance of this depends on one’s theory of implicit bias. If implicit measures are theorized to capture spontaneous affective reactions (as APE suggests; §2.2), then contextual and temporal variability in performance should be predicted (because, for example, one’s immediate reactions to images of women leaders will likely be different after watching a documentary about Ruth Bader Ginsburg than after watching Clueless). However, if implicit measures are meant to “diagnose” stable features of individuals like political party affiliation, then far less variation should be expected. Another possibility is that measurement error contributes significantly to the instability of scores on implicit measures. The fact that methodological improvements have in some cases improved the temporal stability of participants’ performance supports this idea (e.g., Cooley & Payne 2017).
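The statistical logic behind the trait/state contrast can be made concrete with a toy simulation (all numbers below are illustrative assumptions, not empirical estimates): if each score is a stable trait component plus occasion-specific state noise and measurement error, then test-retest correlations fall as the state and error components grow, even when the underlying trait is identical.

```python
# Toy model: score = stable trait + occasion-specific state + measurement error.
# Larger state/error components (as hypothesized for implicit measures)
# yield lower test-retest correlations than smaller ones (explicit measures).
import random
import statistics

random.seed(0)

N = 2000  # simulated participants


def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


def measure(traits, state_sd, error_sd):
    """One testing occasion: trait plus fresh state noise and error."""
    return [t + random.gauss(0, state_sd) + random.gauss(0, error_sd)
            for t in traits]


traits = [random.gauss(0, 1.0) for _ in range(N)]

# Explicit-style measure: mostly trait, small state/error components.
explicit_r = correlation(measure(traits, 0.2, 0.2),
                         measure(traits, 0.2, 0.2))

# Implicit-style measure: same traits, but large state/error components,
# so the same individuals look far less stable across occasions.
implicit_r = correlation(measure(traits, 1.0, 1.0),
                         measure(traits, 1.0, 1.0))

print(f"explicit-style test-retest r = {explicit_r:.2f}")
print(f"implicit-style test-retest r = {implicit_r:.2f}")
```

The sketch also illustrates the measurement-error point at the end of the paragraph: shrinking the error component alone (without changing the trait) raises the test-retest correlation, which is what methodological improvements would be expected to do.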

In philosophy, “trait” is used more often in the context of anti-representationalist, dispositional theories of mind. While representationalists define concepts like “belief” in terms of internal, representational structures of the mind, dispositionalists define concepts like “belief” in terms of tendencies to behave in certain ways (and perhaps also to feel and think in certain ways). Building upon Ryle (1949/2009), Schwitzgebel (2006/2010, 2010, 2013) advances a dispositional theory of attitudes (in the philosophical sense, that is, a theory that claims that beliefs, desires, hopes, etc. are dispositions). On his view, attitudes have a broad (or “multitrack”) profile, including dispositions to feel, think, and speak in specific ways. The dispositional profile of a given attitude is determined by the folk-psychological stereotype for having that attitude, not by what’s inside the agent’s metaphoric “belief box.” For example, to establish that Jordan believes that women make good philosophers, one would look to what Jordan says about women philosophers, to her judgments about which philosophers are good and which aren’t, to her hiring practices, her gut feelings around men and women philosophers, etc. Agents with implicit biases pose an interesting challenge to dispositionalists, since these agents often match only part of the relevant folk-psychological stereotypes. For example, Jordan might say that she believes that women make good philosophers but fail to read any women philosophers (or, recall Frank; §1). On Schwitzgebel’s “gradualist dispositionalism,” Jordan and Frank would be “in-between believers,” agents who partly match the relevant folk-psychological stereotypes for the attitudes in question.

A related trait-based approach treats the results of indirect measures as reflective of elements of attitudes, rather than as assessing attitudes or biases themselves (Machery 2016, 2017). On Machery’s view, attitudes (in the psychologist’s sense, that is, preferences) are dispositions and are comprised of various bases, including feelings, associations, behavioral impulses, and propositional states like beliefs. (In contrast to Schwitzgebel, Machery holds a representationalist view of belief, but a dispositionalist view of attitudes.) To have a racist attitude, on this picture, is to be disposed to display the relevant mix of these bases, that is, to display the feelings, associations, etc. that together comprise the attitude. Implicit measures, then, are said to capture one of the psychological bases (e.g., her associations between concepts) of the agent’s overall attitude. Explicit questionnaire measures capture another psychological basis of the agent’s attitude, behavioral measures yet another basis, and so on. Implicit measures, then, do not assess “implicit attitudes,” and indeed, Machery denies that attitudes divide into implicit and explicit kinds. Rather, implicit measures quantify elements of attitudes. In part, this proposal is meant to explain some of the key psychometric properties of implicit measures, such as their instability over time and the fact that some implicit measures correlate poorly with each other (§5). These findings are consistent with the notion that different implicit measures quantify different psychological bases of attitudes, Machery argues.

One advantage of thinking of implicit biases as traits is that it is consistent with the way in which personality attributions readily admit of vague cases. Just as we might say that Frank is partly agreeable if he extols the virtues of compassion yet sometimes treats strangers rudely, we might say that Frank is partly prejudiced. Dispositional theories capture this intuition. On the other hand, trait-based theories of implicit bias face long-standing challenges to dispositionalism in the philosophy of mind. One such challenge is that traits are explanatory as generalizations, not as token causes of judgment and behavior (Carruthers 2013). Another is the specter of circularity arising from the simultaneous use of an agent’s behavior to both define her disposition and to point to what her disposition predicts (Bandura 1971; Cervone et al. 2015; Mischel 1968; Payne et al. 2017). In both cases, the question for dispositionalism is whether it truly helps to explain the data, or merely repackages outwardly observed patterns in new terms.

2.5 Situations

The most common way people think and write about implicit biases is as attributes of persons. Another possibility, though, is that implicit biases are attributes of situations. Although psychologists have been debating person-based and situation-based explanations throughout the history of implicit social cognition research (Payne & Gawronski 2010; Murphy & Walton 2013; Murphy et al. 2018), the situationist approach has gained steam due to Payne and colleagues’ (2017) “bias of crowds” model. Borrowing from the concept of the “wisdom of crowds,” this approach suggests that differences between situations, rather than differences between individuals, explain the variance in scores on implicit measures. A helpful metaphor used by Payne and colleagues is doing “the wave” at a baseball game. Where a person is sitting in the bleachers, in combination with where the wave is at a given time, is likely to outperform most individual differences (e.g., implicit or explicit feelings about the wave) in predicting whether a person sits or stands. Likewise, what predicts implicit bias are features of people’s situations, not features of their personality. For example, living in a highly residentially segregated neighborhood might be expected to predict racial implicit bias better than individual-level factors, such as beliefs and personality, do.

The bias of crowds model is aimed at making sense of five features of implicit bias which are otherwise difficult to explain together, namely: (1) average group-level scores of implicit bias are very robust and stable; (2) children’s average scores of implicit bias are nearly identical to adults’ average scores; (3) aggregate levels of implicit bias at the population level (e.g., regions, states, and countries) are both highly stable and strongly associated with discriminatory outcomes and group-based disparities; yet, (4) individual differences in implicit bias have small-to-medium zero-order correlations with discriminatory behavior; and (5) individual test-retest reliability is low over weeks and months. (See Payne et al. 2017 for references.) Another advantage of the bias of crowds model is that it coalesces well with calls in philosophy for focusing more on “structural” or “systemic” bias, rather than on the biases in the heads of individuals (§5).
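The aggregation logic behind features (1), (3), and (5) can be illustrated with a simple simulation (the numbers are toy assumptions chosen for illustration): if individual scores consist of a stable situation-level signal plus large individual and occasion noise, then individual test-retest reliability is low while regional averages remain highly stable, because averaging within a region washes the noise out.

```python
# Toy model of the "bias of crowds" statistics: each region has a stable
# situational level of bias; each individual score on an occasion is that
# level plus large individual/occasion noise.
import random
import statistics

random.seed(1)

REGIONS = 50
PER_REGION = 200


def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Stable situational signal for each region.
region_levels = [random.gauss(0, 0.5) for _ in range(REGIONS)]


def occasion():
    """One testing occasion: individual scores and per-region means."""
    individual, regional = [], []
    for level in region_levels:
        scores = [level + random.gauss(0, 1.5) for _ in range(PER_REGION)]
        individual.extend(scores)
        regional.append(statistics.mean(scores))
    return individual, regional


ind1, reg1 = occasion()
ind2, reg2 = occasion()

ind_r = correlation(ind1, ind2)  # low: noisy individual scores
reg_r = correlation(reg1, reg2)  # high: stable regional aggregates

print(f"individual test-retest r = {ind_r:.2f}")
print(f"regional test-retest r = {reg_r:.2f}")
```

The design choice mirrors the model’s claim: nothing about individuals is stable here except the situation they are embedded in, yet the aggregate pattern is robust enough to track region-level outcomes.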

One challenge for the bias of crowds model, however, is explaining how systemic biases interact with and affect the minds of individuals. Payne and colleagues appeal to the idea of the “accessibility” of concepts in individuals’ minds, that is, the “likelihood that a thought, evaluation, stereotype, trait, or other piece of information” becomes activated and poised to influence behavior. The lion’s share of evidence, they argue, suggests that the concepts related to implicit bias are activated due to situational causes. This may be, but it does not explain (a) how situations activate concepts in individuals’ minds (Payne and colleagues are explicitly agnostic about the format of cognitive representations that underlie implicit bias); and (b) how situational factors interact with individual factors to give rise to biased actions (Gawronski & Bodenhausen 2017; Brownstein et al. 2019).

3. Epistemology

Philosophical work on the epistemology of implicit bias has focused on three related questions.[14] First, do we have knowledge of our own implicit biases, and if so, how? Second, do the emerging data on implicit bias demand that we become skeptics about our perceptual beliefs or our overall status as epistemic agents? And third, are we faced with a dilemma between our epistemic and ethical values due to the pervasive nature of implicit bias?

3.1 Self-Knowledge

Implicit bias is typically thought of as unconscious (§2.1.1), but what exactly does this mean? There are several possibilities: there might be no phenomenology associated with the relevant mental states or dispositions; agents might be unaware of the content of the representations underlying their performance on implicit measures, or they might be unaware of the source of their implicit biases or the effects those biases have on their behavior; agents might be unaware of the relations between their relevant states (e.g., that their implicit and explicit evaluations of a given target conflict); and agents might have different modes of awareness of their own minds (e.g., “access” vs. “phenomenal” awareness; Block 1995). Gawronski and colleagues (2006) argue that agents typically lack “source” and “impact” awareness of their implicit biases, but typically have “content” awareness.[15] Evidence for content awareness stems from “bogus pipeline” experiments (e.g., Nier 2005) in which participants are led to believe that inaccurate self-reports will be detected by the experimenter. In these experiments, participants’ scores on implicit and explicit measures come to be more closely correlated, suggesting that participants are aware of the content of those judgments detected by implicit measures and shift their reports when they believe that the experimenter will notice discrepancies. Additional evidence for content awareness is found in studies in which experimenters bring implicit measures and self-reports into conceptual alignment (e.g., Banse et al. 2001) and studies in which agents are asked to predict their own implicit biases (Hahn et al. 2014). Indeed, Hahn and colleagues (2014) and Hahn and Gawronski (2019) have found that people are good at predicting their own IAT scores regardless of how the test is described, how much experience they have taking the test, and how much explanation they are given about the test before taking it. Moreover, people have unique insight into how they will do on the test, insight which is not explained by their beliefs about how people in general will perform.

Hahn and colleagues’ data do not determine, however, whether agents come to be aware of the content of their implicit biases through introspection, by drawing inferences from their own behavior, or from some other source (see Berger forthcoming for discussion). This is important for determining whether the awareness agents have of their implicit biases constitutes self-knowledge. If our awareness of the content of our implicit biases derives from inferences we make based on (for example) our behavior, then the question is whether these inferences are justified, assuming knowledge entails justified true belief. Some have suggested that the facts about implicit bias warrant a “global” skepticism toward our capacities as epistemic agents (Saul 2012; see §3.2.2). If this is right, then we ought to worry that our inferences about the content of our implicit biases, from all the ways we behave on a day-to-day basis, are likely to be unjustified. Others, however, have argued that people are typically very good interpreters of their own minds (e.g., Carruthers 2009; Levy 2012), in which case it may be more likely that our inferences about the content of our implicit biases are well-justified. But whether the inferences we make about our own minds are well-justified would be moot if it were shown that we have direct introspective access to our biases.

3.2 Skepticism

One sort of skeptical worry stems from research on the effects of implicit bias on perception (§3.2.1). This leads to a worry about the status of our perceptual beliefs. A second kind of skeptical worry focuses on what implicit bias may tell us about our capacities as epistemic agents in general (§3.2.2).

3.2.1 Perception

Compared with participants who were first shown pictures of white faces, those who were primed with black faces in Payne (2001) were faster to identify pictures of guns as guns and were more likely to misidentify pictures of tools as guns. This finding has been directly and conceptually replicated (e.g., Payne et al. 2002; Conrey et al. 2005) and is an instance of a broader set of findings about the effects of attitudes and beliefs on perception (e.g., Barrick et al. 2002; Proffitt 2006). Payne’s findings are chilling particularly in light of police shootings of unarmed black men in recent years, such as Amadou Diallo and Oscar Grant. The findings suggest that agents’ implicit associations between “black men” and “guns” may affect their judgment and behavior by affecting what they see. In addition to the moral implications, this may be cause for a particular kind of epistemic concern. As Siegel (2012, 2017, forthcoming) puts it, the worry is that implicit bias introduces a circular structure into belief formation. If an agent believes that black men are more likely than white men to have or use guns, and this belief causes the agent to more readily see ambiguous objects in the hands of black men as guns, then when the agent relies upon visual perception as evidence to confirm her beliefs, she will have moved in a vicious circle.
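The circular structure Siegel describes can be given a minimal formal sketch (the parameters below are hypothetical illustrations, not estimates from Payne’s data): an agent who misclassifies ambiguous objects at a rate proportional to her current belief, and who then treats her own classifications as evidence, stabilizes on a belief above the true rate, while an unbiased agent converges on the true rate.

```python
# Minimal sketch of circular belief formation: perception is skewed by
# the current belief, and the belief is updated on the skewed percepts.
# All parameters are hypothetical, chosen only to illustrate the structure.
import random

random.seed(2)

TRUE_GUN_RATE = 0.2  # actual frequency of guns, identical for all groups


def final_belief(bias_weight, trials=20000, lr=0.005, belief=0.5):
    """Run the perceive-then-update loop and return the settled belief."""
    for _ in range(trials):
        is_gun = random.random() < TRUE_GUN_RATE
        # Perceptual step: tools are misclassified as guns at a rate
        # proportional to the agent's current belief.
        saw_gun = is_gun or (random.random() < bias_weight * belief)
        # Doxastic step: the classification is taken at face value as
        # evidence about how common guns are.
        belief += lr * ((1.0 if saw_gun else 0.0) - belief)
    return belief


unbiased = final_belief(bias_weight=0.0)  # settles near the true rate
biased = final_belief(bias_weight=0.5)    # settles above the true rate

print(f"unbiased agent: {unbiased:.2f} (true rate {TRUE_GUN_RATE})")
print(f"biased agent:   {biased:.2f}")
```

The point of the sketch is structural rather than empirical: because the biased agent’s percepts are partly produced by her belief, her “evidence” confirms the belief no matter what the world is like, which is exactly the vicious circle at issue.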

Whether implicit biases are cause for this sort of epistemic concern depends on what sort of causal influence social attitudes have on visual perception. Payne’s weapons bias findings would be a case of “cognitive penetration” if the black primes make the images of tools look like images of guns, via an effect on perceptual experience itself (Siegel 2012, 2017, forthcoming). This would certainly introduce a circular structure in belief formation. Other scenarios raise the possibility of illicit belief formation without genuine cognitive penetration. Consider what Siegel calls “perceptual bypass”: the black primes do not cause the tools to look like guns (i.e., the prime does not cause a change in perceptual experience), yet some state in the agent, such as a heightened state of anxiety, is affected by the black prime and causes the agent to make a classification error. This will count as a case of illicit belief formation inasmuch as the agent’s social attitudes cause her to be insensitive to her visual stimuli in a way that confirms her antecedent attitudes (Siegel 2012). Other scenarios might allay the worry about illicit belief formation. For example, what Siegel calls “disowned behavior” proposes the same route to the classification error as “perceptual bypass,” except that the agent antecedently regards her error as an error. Empirical evidence can help to sort through these possibilities, though perhaps not settle between them conclusively (e.g., Correll et al. 2015).

A broader worry is that research on implicit bias should cause agents to mistrust their knowledge-seeking faculties in general. “Bias-related doubt” (Saul 2012) is stronger than traditional forms of skepticism (e.g., external world skepticism) in the sense that it suggests that our epistemic judgments are not just possibly but often likely mistaken. Implicit biases are likely to degrade our judgments across many domains, e.g., professors’ judgments about student grades, journal submissions, and job candidates. [ 16 ] Moreover, as Fricker (2007) points out, the testimony of members of stigmatized groups is likely to be discounted due to implicit bias, which, Saul suggests, can magnify these epistemic failures as well as create others, such as failing to recognize certain questions as relevant for inquiry (Hookway 2010). The key point about these examples is that our judgments are likely to be affected by implicit biases even when “we think we’re making judgments of scientific or argumentative merit” (Saul 2012: 249; see also Welpinghus forthcoming). Moreover, unlike errors of probabilistic reasoning, these effects generalize across many areas of day-to-day life. We should be worried, Saul argues,

whenever we consider a claim, an argument, a suggestion, a question, etc from a person whose apparent social group we’re in a position to recognize. (Saul 2012: 250).

Bias-related doubt may be diminished if successful interventions can be developed to correct for epistemic errors caused by implicit bias. In some cases, the fix may be simple, such as anonymous review of job candidate dossiers. But other contexts will certainly be more challenging. [ 17 ] More generally, Saul’s account of bias-related doubt takes a strongly pessimistic stance toward the normativity of our unreflective habits. “It is difficult to see”, she writes, “how we could ever properly trust [our habits] again once we have reflected on implicit bias” (2012: 254). Others, however, have stressed the ways in which unreflective habits can have epistemic virtues (e.g., Arpaly 2004; Railton 2014; Brownstein & Madva 2012a,b; Nagel 2012; Antony 2016). Squaring the reasons for pessimism about the epistemic status of our habits with these streams of thought will be important in future research.

Gendler (2011) and Egan (2011) argue that implicit bias creates a conflict between our ethical and epistemic aims. Concern about ethical/epistemic dilemmas is at least as old as Pascal, as Egan points out, but is also incarnated in contemporary research on the value of positive illusions (i.e., beliefs like “I am brilliant!” which may promote well-being despite being false; e.g., Taylor & Brown 1988). The dilemma surrounding implicit bias stems from the apparent unavoidability of stereotyping, which Gendler traces to the way in which social categorization is fundamental to our cognitive capacities. [ 18 ] For agents who disavow common social stereotypes for ethical reasons, this creates a conflict between what we know and what we value. As Gendler puts it,

if you live in a society structured by racial categories that you disavow, either you must pay the epistemic cost of failing to encode certain sorts of base-rate or background information about cultural categories, or you must expend epistemic energy regulating the inevitable associations to which that information—encoded in ways to guarantee availability—gives rise. (2011: 37)

Gendler considers “forbidden base rates”, for example, which are useful statistical generalizations that utilize problematic social knowledge. People who are asked to set insurance premiums for hypothetical neighborhoods will accept actuarial risk as a justification for setting higher premiums for particular neighborhoods but will not do so if they are told that actuarial risk is correlated with the racial composition of that neighborhood (Tetlock et al. 2000). This “epistemic self-censorship on non-epistemic grounds” makes it putatively impossible for agents to be both rational and equitable (Gendler 2011: 55, 57).

Egan (2011) raises problems for intuitive ways of defusing this dilemma, settling instead on the idea that making epistemic sacrifices for our ethical values may simply be worth it. Others have been less willing to accept that implicit bias does in fact create an unavoidable ethical-epistemic dilemma (Mugg 2013; Beeghly 2014; Madva 2016b; Lassiter & Ballantyne 2017; Puddifoot 2017). One way of defusing the dilemma, for example, is to suggest that it is not social knowledge per se that has costs, but rather that the accessibility of social knowledge in the wrong circumstances has cognitive costs (Madva 2016b). The solution to the dilemma, then, is not ignorance, but the situation-specific regulation of stereotype accessibility. For example, the accessibility of social knowledge can be regulated by agents’ goals and habits (Moskowitz & Li 2011). Readers interested in ethical-epistemic dilemmas due to implicit bias should also consider related scholarship on “moral encroachment” (e.g., Basu & Schroeder 2018; Gardiner 2018).

Most philosophical writing on the ethics of implicit bias has focused on two distinct (but related) questions. First, are agents morally responsible for their implicit biases ( §4.1 )? Second, can agents change their implicit biases or control their effects on their judgments and behavior ( §4.2 )?

4.1 Moral Responsibility

Researchers working on moral responsibility for implicit bias often make two key distinctions. First, they distinguish responsibility for attitudes from responsibility for judgments and behavior. One can, that is, ask whether agents are responsible for their putative (§2) implicit attitudes as such, or whether agents are responsible for the effects of their implicit attitudes on their judgments and behavior. Most have focused on the latter question, as will I. A second important distinction is between being responsible and holding responsible. This distinction can be glossed in a number of different but related ways. It can be glossed as a distinction between blameworthiness and actual expressions of blame; between backward- and forward-looking responsibility (i.e., responsibility for things one has done in the past versus responsibility for doing certain things in the future); and between responsibility as a form of judgment versus responsibility as a form of sanction. Most have focused on the former of these disjuncts (being responsible, blameworthiness, etc.) via three kinds of approaches: arguments from the importance of awareness or knowledge of one’s implicit biases ( §4.1.1 ); arguments from the importance of control over the impact of one’s implicit biases on one’s judgment and behavior ( §4.1.2 ); and arguments from “attributionist” and “Deep Self” considerations ( §4.1.3 ; see Holroyd et al. 2017 for a more in-depth review of theories of moral responsibility and implicit bias).

It is plausible that conscious awareness of our implicit biases is a necessary condition for moral responsibility for those biases. Saul articulates the intuitive idea, suggesting that we

abandon the view that all biases against stigmatised groups are blameworthy … [because a] person should not be blamed for an implicit bias that they are completely unaware of, which results solely from the fact that they live in a sexist culture. (2013: 55, emphasis in original)

Saul’s claim appears to be in keeping with folk psychological attitudes about blameworthiness and implicit bias. Cameron and colleagues (2010) found that subjects were considerably more willing to ascribe moral responsibility to “John” when he was described as acting in discriminatory ways against black people despite “thinking that people should be treated equally, regardless of race” compared to when he was described as acting in discriminatory ways despite having a “sub-conscious dislike for African Americans” that he is “unaware of having”.

Recalling the evidence that people often do have awareness of their implicit biases ( §3.1 ), it would seem that typical agents are responsible for those biases on the basis of the argument from awareness. However, if the question is whether agents are blameworthy for behaviors affected by implicit biases (rather than for having biases themselves), then perhaps impact awareness is what matters most (Holroyd 2012). That said, lacking impact awareness of the effects of implicit bias on our behavior may not exculpate agents from responsibility even in principle. One possibility is that implicit biases are analogous to moods in the sense that being in an introspectively unnoticed bad mood can cause one to act badly (Madva 2018). There is debate about whether unnoticed moods are exculpatory (e.g., Korsgaard 1997; Levy 2011). One possibility is that bad moods and implicit biases both diminish blameworthiness, but do not undermine it as such. This claim depends in part on moral responsibility admitting of degrees.

One problem with focusing on impact awareness, however, as Holroyd (2012) points out, is that we may be unaware of the impact of a great many cognitive states on our behavior. The focus on impact awareness may lead to a global skepticism about moral responsibility, in other words. This suggests that impact awareness may not serve as a good criterion for distinguishing responsibility for implicit biases from responsibility for other cognitive states, notwithstanding whether global skepticism about moral responsibility is defensible.

A second way to unpack the argument from awareness is to focus on what agents ought to know about implicit bias, rather than what they do know. This approach indexes moral responsibility to one’s social and epistemic environment. For example, Kelly & Roedder (2008) argue that a “savvy grader” is responsible for adjusting her grades to compensate for her likely biases because she ought to be aware of and compelled by research on implicit bias. In a similar spirit, Washington & Kelly (2016) compare two hypothetical egalitarians with equivalent psychological profiles, the only difference between them being that the “Old School Egalitarian” is evaluating résumés in 1980 and the “New Egalitarian” is doing so in 2014. While neither has heard of implicit bias, Washington & Kelly argue that the New Egalitarian is morally culpable in a way that the Old School Egalitarian isn’t. Only the New Egalitarian could have, and ought to have, known about his likely implicit biases, given the comparative state of the art of psychological research in 1980 and 2014. The underlying intuition here is that assessments of responsibility change with changes in an agent’s social and epistemic environment.

A third way of unpacking the argument from awareness is to focus on the way in which an attitude does or does not integrate with a variety of the agent’s other attitudes once it becomes conscious (Levy 2012; see §2.1 ). On this view, attitudes that cause responsible behavior are available to a broad range of cognitive systems. For example, in cognitive dissonance experiments (e.g., Festinger 1956), agents attribute confabulatory reasons to themselves and then tend to act in accord with those self-attributed reasons. The self-attribution of reasons in this case, according to Levy (2012), has an integrating effect on behavior, and thus can be thought of as underwriting the sort of agency required for moral responsibility. Crucially, it is when the agent becomes conscious of her self-attributed reasons that they have this integrating effect. This provides grounds for claiming that attitudes for which agents are responsible are those that integrate behavior when the agent becomes aware of the content of those attitudes. Implicit attitudes are not like this, according to Levy. What’s morally important is that

awareness of the content of our implicit attitudes fails to integrate them into our person level concerns in the manner required for direct moral responsibility. (Levy 2012: 9).

The fact that implicit processes are often defined in contrast to “controlled” cognitive processes (§2.2) implies that they may affect behavior in a way that bypasses a person’s agential capacities. The fact that implicit biases seem to “rebound” in response to intentional efforts to suppress them supports this interpretation (Huebner 2009; Follenfant & Ric 2010). Early research suggesting that implicit biases reflect mere awareness of stereotypes, rather than personal attitudes, also implies that these states reflect processes that “happen to” agents. More recently, however, philosophers have questioned the ramifications of these and other data for the notion of control relevant to moral responsibility.

Perhaps the most familiar way of understanding control in the responsibility literature is in terms of a psychological mechanism that would allow an agent to act differently than she otherwise would act when there is sufficient reason to do so (Fischer & Ravizza 2000). The question facing this sort of reasons-responsiveness view of control is whether automatized behaviors—which unfold in the absence of explicit reasoning—should be thought of as under an agent’s control. Some have argued that automaticity and control are not mutually exclusive. Holroyd & Kelly (2016) advance a notion of “ecological control”, and Suhler and Churchland (2009) offer an account of nonconscious control that underwrites automaticity itself, yet is ostensibly sufficient for underwriting responsibility. Others have distinguished between automaticity and automatisms (e.g., sleepwalking); in this sense, the relevant moral distinction might be drawn in terms of agents’ ability to “pre-program” their automatic actions (but not automatistic actions) via previous controlled choices (e.g., Wigley 2007); it might be drawn in terms of agents’ ability to consciously monitor their automatic actions (e.g., Levy & Bayne, 2004); or it might simply be the case that putative implicit attitudes are not automatic because they are readily changeable (e.g., Buckwalter forthcoming). [ 19 ] Others still have distinguished between “indirect” and “direct” control over one’s attitudes or behavior (e.g., Holroyd 2012; Levy & Mandelbaum 2014; Sie & Voorst Vader-Bours 2016). Holroyd (2012) argues that there are many things over which we do not hold direct and immediate control, yet for which we are commonly held responsible, such as learning a skill, speaking a foreign language, and even holding certain beliefs. None of these abilities or states can be had by fiat of will; rather, they take time and effort to obtain. 
This suggests that we can be held responsible for attitudes or behaviors over which we only have indirect long-range control. The question, then, of course, is whether agents can exercise indirect long-range control over their implicit biases. Mounting evidence suggests that we can ( §4.2 ).

“Attributionist” and Deep Self theories of moral responsibility represent an alternative to arguments from awareness and control. According to these theories, for an agent to be responsible for an action is for that action to “reflect upon” the agent “herself”. A common way of speaking is to say that responsibility-bearing actions are attributable to agents in virtue of reflecting upon the agent’s “deep self”, where the deep self represents the person’s fundamental evaluative stance (Sripada 2016). Although there is much disagreement in the literature about what the deep self really is, as well as what it means for an attitude or action to reflect upon it, attributionists agree that people can be morally responsible for actions that are non-conscious (e.g., “failure to notice” cases), non-voluntary (e.g., actions stemming from strong emotional reactions), or otherwise divergent from an agent’s will (Frankfurt 1971; Watson 1975, 1996; Scanlon 1998; A. Smith 2005, 2008, 2012; Hieronymi 2008; Sher 2009; and H. Smith 2011).

One influential view developed in recent years is that agents are responsible for just those actions or attitudes that stem from, or are susceptible to modification by, the agent’s “evaluative” or “rational” judgments, which are judgments for which it is appropriate (in principle) to ask the agent her reasons (in a justifying sense) for holding (Scanlon 1998; A. Smith 2005, 2008, 2012). A. Smith suggests that implicit biases stem from rational judgments, because

a person’s explicitly avowed beliefs do not settle the question of what she regards as a justifying consideration. (2012: 581–582, fn 10)

An alternative approach sees the source of the “deep self” in an agent’s “cares” rather than in her rational judgments (Shoemaker 2003, 2011; Jaworska 2007; Sripada 2016). Cares have been described in different ways, but in this context are thought of as psychological states with motivational, affective, and evaluative dispositional properties. It is an open question whether implicit biases are reflective of an agent’s cares (Brownstein 2016a, 2018). It is also possible that even in cases in which an implicit bias is not attributable to an agent’s deep self, it may still be appropriate to hold the agent responsible for violating some duty or obligation she holds due to her implicit biases (Zheng 2016). Glasgow (2016) similarly argues for responsibility for implicit biases that may not be attributable to agents. His view unfolds in terms of responsibility for actions from which agents are nevertheless alienated. Glasgow defends this view on the basis of “Content-Sensitive Variantism” and “Harm-Sensitive Variantism”, a pair of views according to which alienation exculpates depending on extra-agential features of an action, such as the content of the action or the kind of harm it creates. These variantist views are fairly strongly revisionist with respect to traditional conceptions of responsibility in the 20th-century philosophical literature. Some have argued that research on implicit bias calls for revisionism of this sort (Vargas 2005; Faucher 2016).

4.2 Interventions

Researchers working in applied ethics may be less concerned with questions about in-principle culpability and more concerned with investigating how to change or control our implicit biases. Of course, anyone committed to fighting against prejudice and discrimination will likely share this interest. Policymakers and workplace managers may also be concerned with finding effective interventions, given that they are already directing tremendous public and private resources toward anti-discrimination programs in workplaces, universities, and other domains affected by intergroup conflict. Yet as Paluck and Green (2009) suggest, the effectiveness of many of the strategies commonly used remains unclear. Most studies on prejudice reduction are non-experimental (lacking random assignment), are performed without control groups, focus on self-report surveys, and gather primarily qualitative (rather than quantitative) data.

An emerging body of laboratory-based research suggests that strategies are available for regulating implicit biases, however. One way to class these strategies is in terms of those that purport to change the apparent associations underlying agents’ implicit biases, compared with those that purport to leave implicit associations intact but enable agents to control the effects of their biases on their judgment and behavior (Stewart & Payne 2008; Mendoza et al. 2010; Lai et al. 2013). For example, a “change-based” strategy might reduce individuals’ automatic associations of “white” with “good” while a “control-based” strategy might enable individuals to prevent that association from affecting their behavior. Below, I briefly describe some of these interventions. For comparison of the data on their effectiveness, see Lai and colleagues (2014, 2016), and for discussion of their significance for theories of the metaphysics of implicit bias, including a helpful appendix listing “debiasing” experiments, see Byrd (forthcoming).

Intergroup contact (Aberson et al. 2008; Dasgupta & Rivera 2008; Anderson 2010 for discussion): long studied for its effects on explicit prejudice (e.g., Allport 1954; Pettigrew & Tropp 2006), interaction between members of different social groups appears to diminish implicit bias as well, albeit under some moderating conditions (e.g., equal status interaction) and not under others.

Approach training (Kawakami et al. 2007, 2008; Phills et al. 2011): participants repeatedly “negate” stereotypes and “affirm” counter-stereotypes by pressing a button labelled “NO!” when they see stereotype-consistent images (e.g., of a black face paired with the word “athletic”) or “YES!” when they see stereotype-inconsistent images (e.g., of a white face paired with the word “athletic”). Other experimental scenarios have had participants push a joystick away from themselves to “negate” stereotypes and pull the joystick toward themselves to “affirm” counter-stereotypes.

Evaluative conditioning (Olson & Fazio 2006; De Houwer 2011): a widely used technique whereby an attitude object (e.g., a picture of a black face) is paired with another valenced attitude object (e.g., the word “genius”), which shifts the valence of the first object in the direction of the second.

Counter-stereotype exposure (Blair et al. 2001; Dasgupta & Greenwald 2001): increasing individuals’ exposure to images, film clips, or even mental imagery depicting members of stigmatized groups acting in stereotype-discordant ways (e.g., images of female scientists).

Implementation intentions (Gollwitzer & Sheeran 2006; Stewart & Payne 2008; Mendoza et al. 2010; Webb et al. 2012): “if-then” plans that specify a goal-directed response that an individual plans to perform on encountering an anticipated cue. For example, in a “Shooter Bias” test, where participants are given the goal to “shoot” all and only those individuals shown holding guns in a computer simulation, participants may be asked to adopt the plan, “if I see a black face, I will think ‘safe!’” [ 20 ]

“Cues for control” (Monteith 1993; Monteith et al. 2002): techniques for noticing prejudiced responses, in particular the affective discomfort caused by the inconsistency of those responses with participants’ egalitarian goals.

Priming goals, moods, and motivations (Huntsinger et al. 2010; Moskowitz & Li 2011; Mann & Kawakami 2012): priming egalitarian goals, multicultural ideologies, or particular moods can lower scores of prejudice on implicit measures.

There is some doubt about this way of categorizing interventions, as some control-based interventions may also change agents’ underlying associations and some association-based interventions may also promote control (Stewart & Payne 2008; Mendoza et al. 2010). More significant though are concerns about the efficacy of these interventions over time (Lai et al. 2016), their practical feasibility (Bargh 1999; Schneider 2004), and the possibility that they may distract from broader problems of economic and institutional forms of injustice (Anderson 2010; Dixon et al. 2012; see §5 ). Of course, most of the research on interventions like these is recent, so it is simply not clear yet which strategies, or combination of strategies (Devine et al. 2012), will or won’t be effective. Some have voiced optimism about the role lab-based interventions like these can play as elements of broader efforts to combat prejudice and discrimination (e.g., Kelly et al. 2010a; Madva 2017).

5. Critical Responses

Research on implicit bias has been criticized in several ways. Below are brief descriptions of, and discussion about, prominent lines of critique. [ 21 ] I leave aside critical assessments of specific implicit measures.

Research on implicit bias has received a lot of attention, not only in philosophy and psychology, but in politics, journalism, jurisprudence, business, and medicine as well. Some have worried that this attention is excessive, such that the explanatory power of research on implicit bias has been overstated (e.g., Singal 2017; Jussim 2018 (Other Internet Resources); Blanton & Ikizer 2019).

While the difficulty of public science communication is pervasive (i.e., not limited to implicit bias research), and the most egregious cases are found in the popular press, it is true that some researchers have overhyped the importance of implicit bias for explaining social phenomena. Hype can have disastrous consequences, such as creating public distrust in science. One important point to bear in mind, however, is that the challenges facing science communication and the challenges facing a body of research are distinct. That is, one question is whether the science is strong, and it is a separate question whether the strength of the science, such as it is, is accurately communicated to the public. Overhyped research may create incentives for scientists to do flashy but weak work—and this is a problem—but problems with hype are nevertheless distinct from problems with the science itself.

Some have argued that explicit bias can explain much of what implicit bias purports to explain (e.g., Hermanson 2017a,b, 2018 (Other Internet Resources); Singal 2017; Buckwalter 2018). Jesse Singal (2017), for example, denies that implicit bias is more important than explicit bias, pointing to the United States Department of Justice’s findings about intentional race-based discrimination in Ferguson, MO and to the fact that the United States elected a relatively explicitly racist President in 2016.

Singal and others are surely right that explicit bias and outright prejudice are persistent and, in some places, pervasive. It is, however, unclear who, if anyone, thinks that implicit bias is more important than explicit bias. Philosophers in particular have been interested in implicit bias because, despite the persistence and pervasiveness of explicit bias, there are many people—presumably many of those reading this article—who aim to think and act in unprejudiced ways, and yet are susceptible to the kinds of biased behavior implicit bias researchers have studied. This is not only an important phenomenon in its own right, but also may contribute causally to the mainstream complacence toward the very outrageous instances of bigotry Singal discusses. Implicit bias may also contribute causally to explicit bias, particularly in environments suffused with prejudiced norms (Madva 2019).

A related worry is that there is not agreement in the literature about what “implicit” means. Arguably the most common understanding is that “implicit” means “unconscious.” But whatever is assessed by implicit measures is arguably not unconscious (§3.1).

It is true that there is no widespread agreement about the meaning of “implicit,” and it is also true that no theory of implicit social cognition is consistent with all the current data. To what extent this is a problem depends on background theories about how science progresses. It is also crucial to recognize that implicit measures are not high-fidelity assessments of any one distinct “part” of the mind. They are not process pure (§1.2). This means that they capture a mix of various cognitive and affective processes. Included in this mix are people’s beliefs and explicit attitudes. Indeed, researchers have known for some time that the best way to predict a person’s scores on an implicit measure like the IAT is to ask them their opinions about the IAT’s targets. This does not mean that implicit measures lack “discriminant validity,” however (i.e., that they are redundant with existing measures). By analogy, you are likely to find that people who say that cilantro is disgusting are likely to have aversive reactions to it, but this doesn’t mean that their aversive reactions are an invalid construct. Indeed, one of the leading theories of the dynamics and processes of implicit social cognition since 2006—APE (§2.2)—is based on a set of predictions about this process impurity (i.e., about the interactions of implicit and explicit evaluative processes).

Several meta-analyses have found that, according to standard conventions, the correlation between implicit measures and behavior is small to medium. Average correlations have ranged from approximately .14 to .37 (Cameron et al. 2012; Greenwald et al. 2009; Oswald et al. 2013; Kurdi et al. 2019). This variability is due to several factors, including the type of measures, the type of attitudes measured (e.g., attitudes in general vs. intergroup attitudes in particular), inclusion criteria for meta-analyses, and statistical meta-analytic techniques. From these data, critics have concluded that implicit measures are poor predictors of behavior. Oswald and colleagues write, “the IAT provides little insight into who will discriminate against whom, and provides no more insight than explicit measures of bias” (2013: 18). Focusing on implicit bias research more broadly, Buckwalter suggests that a review of the evidence “casts doubt on the claim that implicit attitudes will be found to be significant causes of behavior” (2018, 11).
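The averaged correlations reported in these meta-analyses are produced by pooling study-level effect sizes. As a rough illustration of the standard technique (not the exact procedure of any of the cited meta-analyses), here is a minimal sketch of sample-size-weighted averaging of correlations via Fisher's z-transformation, using hypothetical study values:

```python
import math

def meta_average(correlations, sample_sizes):
    """Sample-size-weighted average of correlations via Fisher's z.

    Fisher's z-transformation of a correlation r is atanh(r); each z
    value is weighted by n - 3 (the inverse of its sampling variance),
    averaged, and back-transformed with tanh.
    """
    weights = [n - 3 for n in sample_sizes]
    z_values = [math.atanh(r) for r in correlations]
    mean_z = sum(w * z for w, z in zip(weights, z_values)) / sum(weights)
    return math.tanh(mean_z)

# Hypothetical study-level correlations and sample sizes:
print(round(meta_average([0.14, 0.24, 0.37], [200, 150, 100]), 3))  # → 0.226
```

Because the z-transformation is nonlinear and studies are weighted by size, the pooled estimate need not equal the simple arithmetic mean of the study-level correlations.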

Several background questions must be considered in order to assess these claims. Should implicit measures be expected to have small, medium, or large unconditional (or “zero-order”) correlations with behavior? Zero-order correlations are those that obtain between two variables when no additional variable has been controlled for. Since the 1970s, research on self-reported attitudes has largely focused on when (under what conditions) attitudes predict behavior, not on whether attitudes predict behavior as such. For example, attitudes better predict behavior when there is clear correspondence between the attitude object and the behavior in question (Ajzen & Fishbein 1977). While generic attitudes toward the environment do not predict recycling behavior very well, for instance, specific attitudes toward recycling do (Oskamp et al. 1991). In the 1970s and 1980s, a consensus emerged that attitude-behavior relations depend in general on the particular behavior being measured (e.g., political judgments vs. racial judgments), the conditions under which the behavior is performed (e.g., under time pressure or not), and the person who is performing the behavior (e.g., personality; Zanna & Fazio 1982). A wealth of theoretical models of attitude-behavior relations take these facts into account to make principled predictions about when attitudes do and do not predict behavior (e.g., Fazio 1990). Similar work is underway focusing on implicit social cognition (for review see Gawronski & Hahn 2019 and Brownstein et al. ms).

In a related vein, it is important to keep in mind that large zero-order correlations are rarely found in social science, let alone in attitude research, so they should not be expected in implicit bias research either (Gawronski, forthcoming). Indeed, the zero-order correlations between other familiar constructs and outcome measures are comparable to what has been found in meta-analyses of implicit measures: beliefs and stereotypes about outgroups and behavior (r = .12; Talaska et al. 2008); IQ and income (r = .2–.3; Strenze 2007); SAT scores and freshman grades in college (r = .24; Wolfe & Johnson 1995); parents’ and their children’s socioeconomic status (r = .2–.3; Strenze 2007). The fact that no meta-analysis of implicit measures has reported nonsignificant correlations close to zero, or negative correlations with behavior, further supports the conclusion that the relationship between implicit bias and behavior falls within the “zone” of the relationship between these more familiar constructs and relevant kinds of behavior. Whether this common pattern of findings in social science (weak to moderate unconditional relations with behavior) is succor for supporters of implicit bias research or cause for concern about the social sciences in general is an important and open question (see, e.g., Greenwald et al. 2015; Oswald et al. 2015; Jost 2019; Gawronski forthcoming).[22] Note, however, that the consistent findings of meta-analyses of implicit measures distinguish this body of research from those that have been swept up in the social sciences’ ongoing “replication crisis”. That people, on average, display biases on implicit measures is one of the most stable and replicated findings in recent psychological science.[23] The debate described in this section pertains to interpreting the significance of this finding.
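One way to put these numbers in perspective (a standard arithmetic gloss, not a claim made by the cited meta-analyses themselves) is to note that the square of a correlation gives the proportion of variance in one variable that is linearly accounted for by the other:

```python
# r squared = proportion of variance linearly accounted for.
# The r values are the ones quoted in the surrounding text.
for label, r in [
    ("implicit measures, low-end meta-analytic estimate", 0.14),
    ("SAT scores and freshman grades", 0.24),
    ("implicit measures, high-end meta-analytic estimate", 0.37),
]:
    print(f"{label}: r = {r:.2f}, r^2 = {r * r:.1%}")
# Output:
#   implicit measures, low-end meta-analytic estimate: r = 0.14, r^2 = 2.0%
#   SAT scores and freshman grades: r = 0.24, r^2 = 5.8%
#   implicit measures, high-end meta-analytic estimate: r = 0.37, r^2 = 13.7%
```

Even the high-end estimate leaves most behavioral variance unexplained, the same pattern the text notes for familiar predictors such as IQ, SAT scores, and parental socioeconomic status.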

So-called “structuralist” critics (e.g., Banks & Ford 2009; Anderson 2010; Haslanger 2015; Ayala 2016, 2018; Mallon ms) have argued that researchers ought to pay more attention to systemic and institutional causes of injustice (such as poverty, housing segregation, and economic inequality) rather than focusing on the biases inside the minds of individuals. One way to express the structuralist idea is that what happens in the minds of individuals, including their biases, is the product of social inequities rather than an explanation for them. Structuralists thus tend to argue that our efforts to combat discrimination and inequity ought to focus on changing social structures themselves, rather than on trying to change individuals’ biases directly. For example, Ayala argues that “agents’ mental states [are] … not necessary to understand and explain” when considering social injustice (2016, 9). Likewise, in her call to combat segregation in the contemporary United States, Anderson (2010) is critical of what she sees as a distracting focus on the psychology of bias.

A strong version of the structuralist critique (that research on the psychology of prejudice is entirely useless, distracting, or even dangerous) is hard to defend. Large-scale demographic research makes clear that psychological prejudice is a key driver of, for example, economic inequality (e.g., Chetty et al. 2018) and inequities in the criminal justice system (Center for Policing Equity 2016). More broadly, no matter how autonomously certain social structures operate, people must choose to accept or reject those structures, to vote for politicians who speak for or against them, and so on. How people assess these options is at least in part a psychological question.

A weaker version of the structuralist critique calls for needed attention to the ways in which psychological and structural phenomena interact to produce and entrench discrimination and inequity. This “interactionism” seeks to understand how bias operates differently in different contexts. If you wanted to combat housing segregation, for example, you would want to consider not only problematic institutional practices, such as the “redlining” of certain neighborhoods within which banks will not issue mortgage loans, and not only psychological factors, such as the propensity to perceive low-income people as untrustworthy, but the interaction of the two. A low-income person from a redlined neighborhood might not be perceived as untrustworthy when interviewing for a job as a nanny, but might be perceived as untrustworthy when interviewing for a loan. Adopting the view that bias and structure interact to produce unequal outcomes does not mean that researchers must always account for both; sometimes it makes sense to emphasize one kind of cause or the other.

An interactionist version of structuralism can incorporate research on prejudice into a wider understanding of inequity, rather than eschew it. One way to do so is to identify ways in which psychological biases (whether implicit or explicit) might be key contributors to social-structural phenomena. For example, structuralists sometimes point to the drug laws and sentencing guidelines that contribute to the mass incarceration of black men in the USA as examples of systemic biases. Sometimes, however, when these laws and policies change, discrimination persists. While arrests have declined for all racial groups in states that have decriminalized marijuana, black people continue to be arrested for marijuana-related offenses at roughly 10 times the rate of white people (Drug Policy Alliance 2018). This suggests that psychological biases (belonging to officers, policy makers, or voters) are an ineliminable part of systemic inequity. Such interactionism is just one approach to blending individual and institutional approaches to intergroup discrimination (see, e.g., Madva 2016a, 2017; Davidson & Kelly forthcoming). Another idea is to incorporate research specifically on implicit bias into a wider understanding of the structural sources of inequity by using implicit measures to assess broad social patterns rather than differences between individuals. The “Bias of Crowds” model (§2.5) argues that implicit bias is a feature of cultures and communities. For example, average scores on implicit measures of prejudice and stereotypes, when aggregated at the level of cities within the United States, predict racial disparities in police shootings of citizens in those cities (Hehman et al. 2017). Thus, while most of the relevant literature conceptualizes implicit bias as a way of differentiating between individuals, structuralists might use the data to differentiate regions, cultures, and so on.
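The aggregation logic behind using implicit measures to assess broad social patterns can be sketched with synthetic data. Everything below is invented for illustration (the number of cities, effect sizes, and noise levels are hypothetical assumptions; this is not the Hehman et al. 2017 analysis): individual scores on an implicit measure are noisy, but averaging many individuals within each region recovers a stable regional signal that can track a regional outcome.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: noisy individual-level implicit-measure scores in 20
# cities. Each individual score = latent city-level bias + large individual
# noise; the city-level outcome tracks the latent city-level bias.
n_cities, per_city = 20, 500
city_bias = rng.normal(0.4, 0.15, size=n_cities)                 # latent regional bias
scores = city_bias[:, None] + rng.normal(0.0, 1.0, size=(n_cities, per_city))
outcome = 2.0 * city_bias + rng.normal(0.0, 0.1, size=n_cities)  # regional disparity

city_means = scores.mean(axis=1)   # aggregate individuals within each city
r_city = float(np.corrcoef(city_means, outcome)[0, 1])
r_indiv = float(np.corrcoef(scores.ravel(), np.repeat(outcome, per_city))[0, 1])
print(f"city-level r = {r_city:.2f}, individual-level r = {r_indiv:.2f}")
```

Under these assumptions the city-level correlation is large while the individual-level correlation is weak, which is the sense in which a measure that only modestly differentiates individuals can still sharply differentiate regions.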

Nosek and colleagues (2011) suggest that the second generation of research on implicit social cognition will come to be known as the “Age of Mechanism”. Several metaphysical questions fall under this label. One question crucial to the metaphysics of implicit bias is whether the relevant psychological constructs should be thought of as stable, trait-like features of a person’s identity or as momentary, state-like features of their current mindset or situation (§2.4). While current data suggest that implicit biases are more state-like than trait-like, methodological improvements may generate more stable, dispositional results on implicit measures. Ongoing research on additional psychometric properties of implicit measures—such as their discriminant validity and capacity to predict behavior—will also strengthen support for some theories of the metaphysics of implicit bias and weaken support for others. Another open metaphysical question is whether the mechanisms underlying different forms of implicit bias (e.g., implicit racial biases vs. implicit gender biases) are heterogeneous. Some have already begun to carve implicit social attitudes into kinds (Amodio & Devine 2006; Holroyd & Sweetman 2016; Del Pinal et al. 2017; Del Pinal & Spaulding 2018; Madva & Brownstein 2018). Future research on implicit bias in particular domains of social life may also help to illuminate this issue, such as research on implicit bias in legal practices (e.g., Lane et al. 2007; Kang 2009) and in medicine (e.g., Green et al. 2007; Penner et al. 2010), on the development of implicit bias in children (e.g., Dunham et al. 2013b), on implicit intergroup bias toward non-black racial minorities, such as Asians and Latinos (Dasgupta 2004), and cross-cultural research on implicit bias in non-Western countries (e.g., Dunham et al. 2013a).

Future research on epistemology and implicit bias may tackle a number of questions, for example: does the testimony of social and personality psychologists about statistical regularities justify believing that you are biased? What can developments in vision science tell us about illicit belief formation due to implicit bias? In what ways is implicit bias depicted and discussed outside academia (e.g., in stand-up comedy focusing on social attitudes)? Also germane are future methodological questions, such as how research on implicit social cognition may interface with large-scale correlational sociological studies of social attitudes and discrimination (Lee 2016). Another crucial methodological question is whether and how theories of implicit bias (and, more generally, psychological approaches to understanding social phenomena) can be integrated with broader social theories focusing on race, gender, class, disability, etc. Important discussions have begun (e.g., Valian 2005; Kelly & Roedder 2008; Faucher & Machery 2009; Anderson 2010; Machery et al. 2010; Madva 2017), but there is no doubt that more connections must be drawn to relevant work on identity (e.g., Appiah 2005), critical theory (e.g., Delgado & Stefancic 2012), feminist epistemology (Grasswick 2013), and race and political theory (e.g., Mills 1999).

As with all of the above, questions in theoretical ethics about moral responsibility for implicit bias will certainly be influenced by future empirical research. One noteworthy intersection of theoretical ethics with forthcoming empirical research will focus on the interpersonal effects of blaming, and of judgments about blameworthiness, for implicit bias.[24] This research, of course, also aims to have practical ramifications for mitigating intergroup conflict. On this front, arguably the most pressing question is about the durability of psychological interventions once agents leave the lab. How long will shifts in biased responding last? Will individuals inevitably “relearn” their biases (cf. Madva 2017)? Is it possible to leverage the lessons of “situationism” in reverse, such that shifts in individuals’ attitudes create environments that provoke more egalitarian behaviors in others (Sarkissian 2010; Brownstein 2016b)? Moreover, what has (or has not) changed in people’s feelings, judgments, and actions now that research on implicit bias has received considerable public attention (e.g., Charlesworth & Banaji 2019)?

  • Aberson, C., M. Porter, & A. Gaffney, 2008, “Friendships predict Hispanic students’ implicit attitudes toward Whites relative to African Americans”, Hispanic Journal of Behavioral Sciences, 30: 544–556.
  • Ajzen, I., & M. Fishbein, 1977, “Attitude-behavior relations: A theoretical analysis and review of empirical research,” Psychological Bulletin, 84: 888–918.
  • Allport, G., 1954, The Nature of Prejudice , Reading: Addison-Wesley.
  • Amodio, D. & P. Devine, 2006, “Stereotyping and evaluation in implicit race bias: evidence for independent constructs and unique effects on behavior”, Journal of Personality and Social Psychology , 91(4): 652.
  • –––, 2009, “On the interpersonal functions of implicit stereotyping and evaluative race bias: Insights from social neuroscience”, in Attitudes: Insights from the new wave of implicit measures , R. Petty, R. Fazio, & P. Briñol (eds.), Hillsdale, NJ: Erlbaum, pp. 193–226.
  • Amodio, D. & K. Ratner, 2011, “A memory systems model of implicit social cognition”, Current Directions in Psychological Science , 20(3): 143–148.
  • Anderson, E., 2010, The Imperative of Integration , Princeton: Princeton University Press.
  • –––, 2012, “Epistemic justice as a virtue of social institutions”, Social Epistemology , 26(2): 163–173.
  • Antony, L., 2016, “Bias: friend or foe? Reflections on Saulish Skepticism”, in Brownstein & Saul (eds.) 2016A.
  • Appiah, A., 2005, The Ethics of Identity , Princeton: Princeton University Press.
  • Arkes, H. & P. Tetlock, 2004, “Attributions of implicit prejudice, or ‘would Jesse Jackson ‘fail’ the Implicit Association Test?’”, Psychological Inquiry , 15: 257–278.
  • Aronson, E. & C. Cope, 1968, “My Enemy’s Enemy is My Friend”, Journal of Personality and Social Psychology , 8(1): 8–12.
  • Arpaly, N., 2004, Unprincipled Virtue: An Inquiry into Moral Agency , Oxford: Oxford University Press.
  • Ashburn-Nardo, L., M. Knowles, & M. Monteith, 2003, “Black Americans’ implicit racial associations and their implications for intergroup judgment”, Social Cognition , 21:1, 61–87.
  • Ayala, S., 2016, “Speech affordances: A structural take on how much we can do with our words,” European Journal of Philosophy , 24: 879–891.
  • –––, 2018, “A Structural Explanation of Injustice in Conversations: It’s about Norms,” Pacific Philosophical Quarterly , 99(4): 726–748. doi:10.1111/papq.12244
  • Banaji, M. & A. Greenwald, 2013, Blindspot , New York: Delacorte Press.
  • Banaji, M. & C. Hardin, 1996, “Automatic stereotyping”, Psychological Science , 7(3): 136–141.
  • Banaji, M., C. Hardin, & A. Rothman, 1993, “Implicit stereotyping in person judgment”, Journal of personality and Social Psychology , 65(2): 272.
  • Bandura, A., 1978, “The self system in reciprocal determinism,” American Psychologist, 33: 344–358.
  • Banks, R. R., & R.T. Ford, 2008, “(How) Does Unconscious Bias Matter: Law, Politics, and Racial Inequality,” Emory LJ, 58: 1053–1152.
  • Banse, R., J. Seise, & N. Zerbes, 2001, “Implicit attitudes towards homosexuality: Reliability, validity, and controllability of the IAT”, Zeitschrift für Experimentelle Psychologie , 48: 145–160.
  • Bar-Anan, Y. & B. Nosek, 2014, “A Comparative Investigation of Seven Implicit Measures of Social Cognition”, Behavior Research Methods , 46(3): 668–688. doi:10.3758/s13428-013-0410-6
  • Bargh, J., 1994, “The four horsemen of automaticity: Awareness, intention, efficiency, and control in social cognition”, in Handbook of social cognition (2nd ed.) , R. Wyer, Jr. & T. Srull (eds.), Hillsdale, NJ: Lawrence Erlbaum Associates, Inc., pp 1–40.
  • –––, 1999, “The cognitive monster: The case against the controllability of automatic stereotype effects”, in Chaiken & Trope (eds.) 1999: 361–382.
  • Barrick, C., D. Taylor, & E. Correa, 2002, “Color sensitivity and mood disorders: biology or metaphor?”, Journal of affective disorders , 68(1): 67–71.
  • Basu, R. & M. Schroeder, 2018, “Doxastic Wronging,” in B. Kim & M. McGrath (eds), Pragmatic Encroachment in Epistemology , New York: Routledge.
  • Beeghly, E., 2014, Seeing Difference: The Epistemology and Ethics of Stereotyping , PhD diss., University of California, Berkeley, California.
  • Beeghly, E. & A. Madva, forthcoming, An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind , New York: Routledge.
  • Begby, E., 2013, “The Epistemology of Prejudice”, Thought: A Journal of Philosophy , 2(2): 90–99.
  • Berger, J., forthcoming, “Implicit attitudes and awareness,” Synthese , first online 14 March 2018. doi:10.1007/s11229-018-1754-3
  • Bertrand, M., D. Chugh, & S. Mullainathan, 2005, “Implicit discrimination”, American Economic Review , 94–98.
  • Bertrand, M. & S. Mullainathan, 2004, “Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market”, NBER Working Papers from National Bureau of Economic Research, Inc., No. 9873.
  • Blanton, H., & E.G. Ikizer, 2019, “Elegant Science Narratives and Unintended Influences: An Agenda for the Science of Science Communication,” Social Issues and Policy Review , 13(1): 154–181.
  • Blair, I., J. Ma, & A. Lenton, 2001, “Imagining Stereotypes Away: The Moderation of Implicit Stereotypes through Mental Imagery”, Journal of personality and social psychology , 81(5): 828–841.
  • Block, N., 1995, “On a confusion about a function of consciousness,” Behavioral and Brain Sciences , 18: 227–287.
  • Bodenhausen, G. & B. Gawronski, 2014, “Attitude Change”, in The Oxford Handbook of Cognitive Psychology , D. Reisberg (ed.), New York: Oxford University Press.
  • Brennan, S., 2013, “Rethinking the Moral Significance of Micro-Inequities: The Case of Women in Philosophy”, in Women in Philosophy: What Needs to Change?, F. Jenkins and K. Hutchinson (eds.), Oxford: Oxford University Press.
  • Brewer, M., 1999, “The psychology of prejudice: Ingroup love and outgroup hate?”, Journal of social issues , 55(3): 429–444.
  • Brownstein, M., 2016a, “Attributionism and moral responsibility for implicit bias”, Review of Philosophy and Psychology, 7(4): 765–786.
  • –––, 2016b, “Implicit Bias, Context, and Character”, in Brownstein & Saul (eds.) 2016B.
  • –––, 2018, The Implicit Mind: Cognitive Architecture, the Self, and Ethics , New York: Oxford University Press.
  • –––, forthcoming, “Skepticism about Bias,” in Beeghly & Madva (eds.), forthcoming.
  • Brownstein, M. & A. Madva, 2012a, “Ethical Automaticity”, Philosophy of the Social Sciences , 42(1): 67–97.
  • –––, 2012b, “The Normativity of Automaticity”, Mind and Language , 27(4): 410–434.
  • Brownstein, M., Madva, A., & Gawronski, B., 2019, “What do implicit measures measure?,” WIREs Cognitive Science . doi:10.1002/wcs.1501
  • Brownstein, M. & J. Saul (eds.), 2016A, Implicit Bias & Philosophy: Volume I, Metaphysics and Epistemology , Oxford: Oxford University Press.
  • ––– (eds.), 2016B, Implicit Bias and Philosophy: Volume 2, Moral Responsibility, Structural Injustice, and Ethics , Oxford: Oxford University Press.
  • Buckwalter, W., forthcoming, “Implicit attitudes and the ability argument,” Philosophical Studies , first online 15 September 2018. doi:10.1007/s11098-018-1159-7 [ available online ]
  • Byrd, N., forthcoming, “What we can (and can’t) infer about implicit bias from debiasing experiments,” Synthese , first online 12 February 2019. doi:10.1007/s11229-019-02128-6
  • Cameron, C., B. Payne, & J. Knobe, 2010, “Do theories of implicit race bias change moral judgments?”, Social Justice Research , 23: 272–289.
  • Cameron, C.D., J.L. Brown-Iannuzzi, & B.K. Payne, 2012, “Sequential priming measures of implicit social cognition: A meta-analysis of associations with behavior and explicit attitudes,” Personality and Social Psychology Review, 16: 330–350.
  • Carruthers, P., 2009, “How we know our own minds: the relationship between mindreading and metacognition”, Behavioral and Brain Sciences , 32: 121–138.
  • –––, 2013, “On knowing your own beliefs: A representationalist account,” in N. Nottelmann (ed.), New essays on belief: Constitution, content and structure, Basingstoke: Palgrave MacMillan, pp. 145–165.
  • Center for Policing Equity, 2016, [ available online ].
  • Cervone, D., T.L. Caldwell, & N.D. Mayer, 2015, “Personality systems and the coherence of social behavior,” in B. Gawronski & G.V. Bodenhausen (eds.), Theory and explanation in social psychology, New York: Guilford, pp. 157–179.
  • Chaiken, S. & Y. Trope (eds.), 1999, Dual-process theories in social psychology , New York: Guilford Press.
  • Chetty, R., Hendren, N., Jones, M. R., & S.R. Porter, 2018, “Race and economic opportunity in the United States: An intergenerational perspective (No. w24441),” National Bureau of Economic Research , [ available online ].
  • Clark, A., 1997, Being There: Putting Brain, Body, and World Together Again, Cambridge, MA: MIT Press.
  • Cooley, E., & B.K. Payne, 2017, “Using groups to measure intergroup prejudice,” Personality and Social Psychology Bulletin, 43: 46–59.
  • Conrey, F., J. Sherman, B. Gawronski, K. Hugenberg, & C. Groom, 2005, “Separating multiple processes in implicit social cognition: The Quad-Model of implicit task performance”, Journal of Personality and Social Psychology , 89: 469–487.
  • Correll, J., B. Park, C. Judd, & B. Wittenbrink, 2002, “The police officer’s dilemma: Using race to disambiguate potentially threatening individuals”, Journal of Personality and Social Psychology , 83: 1314–1329.
  • Correll, J., Wittenbrink, B., Crawford, M., & M. Sadler, 2015, “Stereotypic Vision: How Stereotypes Disambiguate Visual Stimuli,” Journal of Personality and Social Psychology , 108(2): 219–233.
  • Cortina, L., 2008, “Unseen injustice: Incivility as modern discrimination in organizations”, Academy of Management Review , 33: 55–75.
  • Cortina, L., D. Kabat Farr, E. Leskinen, M. Huerta, & V. Magley, 2011, “Selective incivility as modern discrimination in organizations: Evidence and impact”, Journal of Management , 39(6): 1579–1605.
  • Cunningham, W. & P. Zelazo, 2007, “Attitudes and evaluations: A social cognitive neuroscience perspective”, Trends in cognitive sciences , 11(3): 97–104.
  • Cunningham, W., P. Zelazo, D. Packer, & J. Van Bavel, 2007, “The iterative reprocessing model: A multilevel framework for attitudes and evaluation”, Social Cognition , 25(5): 736–760.
  • Currie, G. & A. Ichino, 2012, “Aliefs don’t exist, but some of their relatives do”, Analysis , 72: 788–798.
  • Dasgupta, N., 2004, “Implicit Ingroup Favoritism, Outgroup Favoritism, and Their Behavioral Manifestations”, Social Justice Research , 17(2): 143–168.
  • Dasgupta, N. & A. Greenwald, 2001, “On the malleability of automatic attitudes: Combating automatic prejudice with images of admired and disliked individuals”, Journal of Personality and Social Psychology , 81: 800–814.
  • Dasgupta, N. & L. Rivera, 2008, “When social context matters: The influence of long-term contact and short-term exposure to admired group members on implicit attitudes and behavioral intentions”, Social Cognition , 26: 112–123.
  • Davidson, L. J., & D. Kelly, forthcoming, “Minding the gap: Bias, soft structures, and the double life of social norms,” Journal of Applied Philosophy , first online 23 December 2018. doi:10.1111/japp.12351
  • De Houwer, J., 2009, “The propositional approach to associative learning as an alternative for association formation models,” Learning & Behavior , 37: 1–20.
  • –––, 2011, “Evaluative Conditioning: A review of functional knowledge and mental process theories”, in Associative Learning and Conditioning Theory: Human and Non-Human Applications , T. Schachtman and S. Reilly (eds.), Oxford: Oxford University Press, pp. 399–417.
  • –––, 2014, “A Propositional Model of Implicit Evaluation”, Social Psychology and Personality Compass , 8(7): 342–353.
  • De Houwer, J., G. Crombez, E. Koster, & N. Beul, 2004, “Implicit alcohol-related cognitions in a clinical sample of heavy drinkers”, Journal of behavior therapy and experimental psychiatry , 35(4): 275–286.
  • De Houwer, J., S. Teige-Mocigemba, A. Spruyt, & A. Moors, 2009, “Implicit measures: A normative analysis and review”, Psychological bulletin , 135(3): 347.
  • Del Pinal, G., A. Madva & K. Reuter, 2017, “Stereotypes, conceptual centrality and gender bias: An empirical investigation,” Ratio , 30(4): 384–410.
  • Del Pinal, G., & S. Spaulding, 2018, “Conceptual centrality and implicit bias,” Mind & Language , 33(1): 95–111.
  • Delgado, R. & J. Stefancic, 2012, Critical race theory: An introduction , New York: NYU Press.
  • Devine, P., 1989, “Stereotypes and prejudice: Their automatic and controlled components”, Journal of Personality and Social Psychology , 56: 5–18.
  • Devine, P., P. Forscher, A. Austin, & W. Cox, 2012, “Long-term reduction in implicit race bias; a prejudice habit-breaking intervention”, Journal of Experimental Social Psychology , 48(6): 1267–1278.
  • Devine, P. & M. Monteith, 1999, “Automaticity and control in stereotyping”, in Chaiken & Trope (eds.) 1999: 339–360.
  • Dixon, J., M. Levine, S. Reicher, & K. Durrheim, 2012, “Beyond prejudice: Are negative evaluations the problem and is getting us to like one another more the solution?”, Behavioral and Brain Sciences , 35(6): 411–425.
  • Doggett, T., 2012, “Some questions for Tamar Szabó Gendler”, Analysis , 72: 764–774.
  • Dovidio, J. & S. Gaertner, 1986, Prejudice, Discrimination, and Racism: Historical Trends and Contemporary Approaches , Academic Press.
  • –––, 2004, “Aversive racism”, Advances in experimental social psychology , 36: 1–51.
  • Dovidio, J., K. Kawakami, & S. Gaertner, 2002, “Implicit and explicit prejudice and interracial interaction”, Journal of Personality and Social Psychology, 82: 62–68.
  • Dovidio, J., K. Kawakami, C. Johnson, B. Johnson, & A. Howard, 1997, “On the nature of prejudice: Automatic and controlled processes”, Journal of Experimental Social Psychology , 33: 510–540.
  • Dreyfus, H. & S. Dreyfus, 1992, “What is Moral Maturity? Towards a Phenomenology of Ethical Expertise”, in Revisioning Philosophy, J. Ogilvy (ed.), Albany: State University of New York.
  • Drug Policy Alliance, 2018, “From Prohibition to Progress: A Status Report on Marijuana Legalization,” [ available online ].
  • Dunham, Y., M. Srinivasan, R. Dotsch, & D. Barner, 2013a, “Religion insulates ingroup evaluations: the development of intergroup attitudes in India”, Developmental Science , 17(2): 311–319. doi:10.1111/desc.12105
  • Dunham, Y., E. Chen, & M. Banaji, 2013b, “Two Signatures of Implicit Intergroup Attitudes: Developmental Invariance and Early Enculturation”, Psychological Science, 24(6): 860–868.
  • Egan, A., 2008, “Seeing and believing: perception, belief formation and the divided mind”, Philosophical Studies , 140(1): 47–63.
  • –––, 2011, “Comments on Gendler’s ‘The epistemic costs of implicit bias’”, Philosophical Studies, 156: 65–79.
  • Faucher, L., 2016, “Revisionism and Moral Responsibility”, in Brownstein & Saul (eds.) 2016B.
  • Faucher, L. & E. Machery, 2009, “Racism: Against Jorge Garcia’s moral and psychological monism”, Philosophy of the Social Sciences , 39: 41–62.
  • Fazio, R., 1990, “Multiple processes by which attitudes guide behavior: The MODE model as an integrative framework”, Advances in experimental social psychology , 23: 75–109.
  • –––, 1995, “Attitudes as object-evaluation associations: Determinants, consequences, and correlates of attitude accessibility”, in Attitude strength: Antecedents and consequences ( Ohio State University series on attitudes and persuasion, Vol. 4 ), R. Petty & J. Krosnick (eds.), Hillsdale, NJ: Lawrence Erlbaum Associates, Inc., pp. 247–282.
  • Fazio, R. & T. Towles-Schwen, 1999, “The MODE model of attitude-behavior processes”, in Chaiken & Trope (eds.) 1999: 97–116.
  • Festinger, L., 1957, A Theory of Cognitive Dissonance, Stanford, CA: Stanford University Press.
  • Fischer, J. & M. Ravizza, 2000, Responsibility and control: A theory of moral responsibility , Cambridge: Cambridge University Press.
  • Fiske, S. & P. Linville, 1980, “What does the schema concept buy us?”, Personality and Social Psychology Bulletin , 6(4): 543–557.
  • Follenfant, A. & F. Ric, 2010, “Behavioral Rebound following stereotype suppression”, European Journal of Social Psychology , 40: 774–782.
  • Frankfurt, H., 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy , 68(1): 5–20.
  • Frankish, K., 2016, “Implicit bias, dual process, and metacognitive motivation”, in Brownstein & Saul (eds.) 2016A.
  • Fricker, M., 2007, Epistemic Injustice: Power & the Ethics of Knowing , Oxford: Oxford University Press.
  • Friese, M., W. Hofmann, & M. Wänke, 2008, “When impulses take over: Moderated predictive validity of explicit and implicit attitude measures in predicting food choice and consumption behavior”, British Journal of Social Psychology , 47(3): 397–419.
  • Galdi, S., L. Arcuri, & B. Gawronski, 2008, “Automatic mental associations predict future choices of undecided decision-makers”, Science , 321(5892): 1100–1102.
  • Gardiner, G., 2018, “Evidentialism and Moral Encroachment,” in K. McCain (ed.), Believing in Accordance with the Evidence , Cham, Switzerland: Springer.
  • Gawronski, B., forthcoming, “Six lessons for a cogent science of implicit bias and its criticism,” Perspectives on Psychological Science .
  • Gawronski, B. & G. Bodenhausen, 2006, “Associative and propositional processes in evaluation: an integrative review of implicit and explicit attitude change”, Psychological bulletin , 132(5): 692–731.
  • –––, 2011, “The associative-propositional evaluation model: Theory, evidence, and open questions”, Advances in Experimental Social Psychology , 44: 59–127.
  • –––, 2017, “Beyond persons and situations: An interactionist approach to implicit bias,” Psychological Inquiry , 28(4): 268–272.
  • Gawronski, B. & S. Brannon, 2017, “Attitudes and the Implicit-Explicit Dualism,” in The Handbook of Attitudes, Volume 1: Basic Principles (2nd Edition), D. Albarracín & B.T. Johnson (eds.), New York: Routledge, pp. 158–196.
  • Gawronski, B., R. Deutsch, S. Mbirkou, B. Seibt, & F. Strack, 2008, “When ‘Just Say No’ is not enough: Affirmation versus negation training and the reduction of automatic stereotype activation”, Journal of Experimental Social Psychology , 44: 370–377.
  • Gawronski, B. & A. Hahn, 2019, “Implicit Measures: Procedures, Use, and Interpretation,” in Measurement in Social Psychology , Blanton, H., LaCroix, J.M., and G.D. Webster (eds.), New York: Taylor & Francis, 29–55.
  • Gawronski, B., W. Hofmann, & C. Wilbur, 2006, “Are ‘implicit’ attitudes unconscious?”, Consciousness and Cognition, 15: 485–499.
  • Gawronski, B., M. Morrison, C. Phills, & S. Galdi, 2017, “Temporal Stability of Implicit and Explicit Measures: A Longitudinal Analysis,” Personality and Social Psychology Bulletin, 43: 300–312.
  • Gawronski, B., E. Walther, & H. Blank, 2005, “Cognitive Consistency and the Formation of Interpersonal Attitudes: Cognitive Balance Affects the Encoding of Social Information”, Journal of Experimental Social Psychology , 41: 618–26.
  • Gendler, T., 2008a, “Alief and belief”, The Journal of Philosophy , 105(10): 634–663.
  • –––, 2008b, “Alief in action (and reaction)”, Mind and Language , 23(5): 552–585.
  • –––, 2011, “On the epistemic costs of implicit bias”, Philosophical Studies , 156: 33–63.
  • –––, 2012, “Between reason and reflex: response to commentators”, Analysis , 72(4): 799–811.
  • Gertler, B., 2011, “Self-Knowledge and the Transparency of Belief”, in Self-Knowledge , A. Hatzimoysis (ed.), Oxford: Oxford University Press.
  • Gilbert, D., 1991, “How mental systems believe”, American Psychologist , 46: 107–119.
  • Glaser, J. & E. Knowles, 2008, “Implicit motivation to control prejudice”, Journal of Experimental Social Psychology , 44: 164–172.
  • Glasgow, J., 2016, “Alienation and Responsibility”, in Brownstein & Saul (eds.) 2016B.
  • Gollwitzer, P. & P. Sheeran, 2006, “Implementation intentions and goal achievement: A meta-analysis of effects and processes”, in Advances in experimental social psychology , M. Zanna (ed.), Academic Press, pp. 69–119.
  • Grasswick, H., 2013, “Feminist Social Epistemology”, The Stanford Encyclopedia of Philosophy , (Spring 2013 edition), E. Zalta (ed.), < https://plato.stanford.edu/archives/spr2013/entries/feminist-social-epistemology/ >.
  • Green, A., D. Carney, D. Pallin, L. Ngo, K. Raymond, L. Lezzoni, & M. Banaji, 2007, “Implicit bias among physicians and its prediction of thrombolysis decisions for black and white patients”, Journal of General Internal Medicine , 22: 1231–1238.
  • Greenwald, A. & M. Banaji, 1995, “Implicit social cognition: attitudes, self-esteem, and stereotypes”, Psychological review , 102(1): 4.
  • Greenwald, A., M. Banaji, & B. Nosek, 2015, “Statistically Small Effects of the Implicit Association Test Can Have Societally Large Effects”, Journal of personality and social psychology , 108(4): 553–561.
  • Greenwald, A. & S. Farnham, 2000, “Using the implicit association test to measure self-esteem and self-concept”, Journal of personality and social psychology , 79(6): 1022–1038.
  • Greenwald, A., D. McGhee, & J. Schwartz, 1998, “Measuring individual differences in implicit cognition: The implicit association test”, Journal of Personality and Social Psychology , 74: 1464–1480.
  • Greenwald, A., B. Nosek, & M. Banaji, 2003, “Understanding and using the implicit association test: I. An improved scoring algorithm”, Journal of personality and social psychology , 85(2): 197–216.
  • Greenwald, A., T. Poehlman, E. Uhlmann, & M. Banaji, 2009, “Understanding and Using the Implicit Association Test: III Meta-Analysis of Predictive Validity”, Journal of Personality and Social Psychology , 97(1): 17–41.
  • Gregg A., B. Seibt, & M. Banaji, 2006, “Easier done than undone: Asymmetry in the malleability of implicit preferences”, Journal of Personality and Social Psychology , 90: 1–20.
  • Hahn, A. & B. Gawronski, 2019, “Facing One’s Implicit Biases: From Awareness to Acknowledgement,” Journal of Personality and Social Psychology: Interpersonal Relations and Group Processes , 116(5): 769–794.
  • Hahn, A., C. Judd, H. Hirsh, & I. Blair, 2014, “Awareness of Implicit Attitudes”, Journal of Experimental Psychology-General , 143(3): 1369–1392.
  • Han, H., M. Olson, & R. Fazio, 2006, “The influence of experimentally-created extrapersonal associations on the Implicit Association Test”, Journal of Experimental Social Psychology , 42: 259–272.
  • Harari, H. & J. McDavid, 1973, “Name stereotypes and teachers’ expectations”, Journal of Educational Psychology , 65(2): 222–225.
  • Haslanger, S., 2000, “Gender and race:(what) are they? (What) do we want them to be?”, Nous , 34(1): 31–55.
  • –––, 2015, “Social Structure, Narrative, and Explanation,” Canadian Journal of Philosophy , 45(1): 1–15.
  • Hehman, E., J.K. Flake, & J. Calanchini, 2017, “Disproportionate use of lethal force in policing is associated with regional racial biases of residents,” Social Psychological and Personality Science , 1948550617711229.
  • Heider, F., 1958, The Psychology of Interpersonal Relations , New York: Wiley.
  • Hermanson, S., 2017a, “Implicit Bias, Stereotype Threat, and Political Correctness in Philosophy,” Philosophies , 2(2): 12. doi:10.3390/philosophies2020012 [ available online ].
  • –––, 2017b, “Review of Implicit Bias and Philosophy (vol. 1 & 2), Edited by Michael Brownstein and Jennifer Saul, Oxford University Press, 2016,” The Journal of the Royal Institute of Philosophy , 315–322.
  • Helton, G., forthcoming, “If you can’t change what you believe, you don’t believe it,” Nous , first online 26 August 2018. doi:10.1111/nous.12265
  • Hieronymi, P., 2008, “Responsibility for believing”, Synthese , 161: 357–373.
  • Hofmann, W., Gawronski, B., Gschwendner, T., Le, H., & M. Schmitt, 2005, “A meta-analysis on the correlation between the Implicit Association Test and explicit self-report measures,” Personality and Social Psychology Bulletin , 31(10): 1369–1385.
  • Holroyd, J., 2012, “Responsibility for Implicit Bias”, Journal of Social Philosophy , 43(3): 274–306.
  • Holroyd, J. and D. Kelly, 2016, “Implicit Bias, Character, and Control.” in J. Webber and A. Masala (eds.) From Personality to Virtue , Oxford: Oxford University Press.
  • Holroyd, J., R. Scaife, & T. Stafford, 2017, “Responsibility for implicit bias,” Philosophy Compass , 12(3). doi:10.1111/phc3.12410
  • Holroyd, J. & J. Sweetman, 2016, “The Heterogeneity of Implicit Biases”, in Brownstein & Saul (eds.) 2016A.
  • Hookway, C., 2010, “Some Varieties of Epistemic Injustice: Response to Fricker”, Episteme , 7(2): 151–163.
  • Houben, K. & R. Wiers, 2008, “Implicitly positive about alcohol? Implicit positive associations predict drinking behavior”, Addictive behaviors , 33(8): 979–986.
  • Hu, X., Gawronski, B., & R. Balas, 2017, “Propositional versus dual-process accounts of evaluative conditioning: I. The effects of co-occurrence and relational information on implicit and explicit evaluations,” Personality and Social Psychology Bulletin , 43: 17–32.
  • Huddleston, A., 2012, “Naughty beliefs”, Philosophical studies , 160(2): 209–222.
  • Huebner, B., 2009, “Trouble with Stereotypes for Spinozan Minds”, Philosophy of the Social Sciences , 39: 63–92.
  • –––, 2016, “Implicit Bias, Reinforcement Learning, and Scaffolded Moral Cognition”, in Brownstein & Saul (eds.) 2016A.
  • Hughes, S., D. Barnes-Holmes, & J. De Houwer, 2011, “The dominance of associative theorizing in implicit attitude research: Propositional and behavioral alternatives”, The Psychological Record , 61(3): 465–498.
  • Hundleby, C., 2016, “The Status Quo Fallacy: Implicit Bias and Fallacies of Argumentation”, in Brownstein & Saul (eds.) 2016A.
  • Huntsinger, J., S. Sinclair, E. Dunn, & G. Clore, 2010, “Affective regulation of stereotype activation: It’s the (accessible) thought that counts”, Personality and Social Psychology Bulletin , 36(4): 564–577.
  • Hütter, M. & S. Sweldens, 2018, “Dissociating controllable and uncontrollable effects of affective stimuli on attitudes and consumption,” Journal of Consumer Research , 45: 320–349.
  • Jacoby, L. & M. Dallas, 1981, “On the relationship between autobiographical memory and perceptual learning”, Journal of Experimental Psychology: General , 110(3): 306.
  • James, W., 1890/1950, The Principles of Psychology, Volumes 1&2 , New York: Dover Books.
  • Jaworska, A., 2007, “Caring and Internality”, Philosophy and Phenomenological Research , 74(3): 529–568.
  • Jost, J. T., 2019, “The IAT is dead, long live the IAT: Context-sensitive measures of implicit attitudes are indispensable to social and political psychology,” Current Directions in Psychological Science , 28(1): 10–19.
  • Jost, J. T., Rudman, L. A., Blair, I. V., Carney, D. R., Dasgupta, N., Glaser, J., & C.D. Hardin, 2009, “The existence of implicit bias is beyond reasonable doubt: A refutation of ideological and methodological objections and executive summary of ten studies that no manager should ignore,” Research in organizational behavior , 29: 39–69.
  • Kang, J., 2009, “Implicit Bias: A Primer for Courts”, National Center for State Courts.
  • Kang, J., M. Bennett, D. Carbado, P. Casey, N. Dasgupta, D. Faigman, R. Godsil, A. Greenwald, J. Levinson, & J. Mnookin, 2012, “Implicit bias in the courtroom”, UCLA Law Review , 59(5): 1124–1186.
  • Kawakami, K., J. Dovidio, & S. van Kamp, 2007, “The Impact of Counterstereotypic Training and Related Correction Processes on the Application of Stereotypes”, Group Processes and Intergroup Relations , 10(2): 139–156.
  • Kawakami, K., J. Steele, C. Cifa, C. Phills, & J. Dovidio, 2008, “Approaching math increases math = me, math = pleasant”, Journal of Experimental Social Psychology , 44: 818–825.
  • Kelly, D., L. Faucher, & E. Machery, 2010a, “Getting Rid of Racism: Assessing Three Proposals in Light of Psychological Evidence”, Journal of Social Philosophy , 41(3): 293–322.
  • Kelly, D., E. Machery, & R. Mallon, 2010b, “Race and Racial Cognition”, in The Moral Psychology Handbook , J. Doris & the Moral Psychology Reading Group (eds.), Oxford: Oxford University Press, pp. 433–472.
  • Kelly, D. & E. Roedder, 2008, “Racial Cognition and the Ethics of Implicit Bias”, Philosophy Compass , 3(3): 522–540.
  • Korsgaard, C., 1997, “The Normativity of Instrumental Reason”, in Ethics and Practical Reason , G. Cullity & B. Gaut (eds.), Oxford: Clarendon Press, pp 27–68.
  • Krickel, B., 2018, “Are the states underlying implicit biases unconscious? A Neo-Freudian answer,” Philosophical Psychology , 31(7): 1007–1026.
  • Kurdi, B., Seitchik, A., Axt, J., Carroll, T., Karapetyan, A., Kaushik, N., Tomezsko, D., Greenwald, A., & M. Banaji, 2019, “Relationship between the Implicit Association Test and intergroup behavior: A meta-analysis,” American Psychologist , 74(5): 569–586. doi:10.1037/amp0000364
  • Kwong, J., 2012, “Resisting Aliefs: Gendler on Alief-Discordant Behaviors”, Philosophical Psychology , 25(1): 77–91.
  • Lai, C., K. Hoffman, & B. Nosek, 2013, “Reducing implicit prejudice”, Social and Personality Psychology Compass , 7: 315–330.
  • Lai, C., M. Marini, S. Lehr, C. Cerruti, J. Shin, J. Joy-Gaba, A. Ho, … & B. Nosek, 2014, “Reducing implicit racial preferences: I. A comparative investigation of 17 interventions”, Journal of Experimental Psychology: General , 143(4): 1765–1785.
  • Lai, C. K., Skinner, A. L., Cooley, E., Murrar, S., Brauer, M., Devos, T., … & S. Simon, 2016, “Reducing implicit racial preferences: II. Intervention effectiveness across time,” Journal of Experimental Psychology: General , 145(8): 1001–1016.
  • Lane, K., J. Kang, & M. Banaji, 2007, “Implicit Social Cognition and Law”, Annual Review of Law and Social Science , 3: 427–451.
  • Lassiter, C. & N. Ballantyne, 2017, “Implicit racial bias and epistemic pessimism,” Philosophical Psychology , 30(1–2): 79–101.
  • Lee, C., 2016, “Revisiting Current Causes of Women’s Underrepresentation in Science”, in Brownstein & Saul (eds.) 2016A.
  • Leslie, S., 2007, “Generics and the Structure of the Mind”, Philosophical Perspectives , 375–405.
  • –––, 2008, “Generics: Cognition and Acquisition”, Philosophical Review , 117(1): 1–49.
  • –––, 2017, “The original sin of cognition: Fear, prejudice, and generalization”, The Journal of Philosophy , 114(8): 393–421.
  • Levinson, J., 2007, “Forgotten Racial Equality: Implicit Bias, Decision making, and Misremembering”, Duke Law Journal , 57(2): 345–424.
  • Levy, N., 2011, “Expressing Who We Are: Moral Responsibility and Awareness of our Reasons for Action”, Analytic Philosophy , 52(4): 243–261.
  • –––, 2012, “Consciousness, Implicit Attitudes, and Moral Responsibility”, Noûs , 48: 21–40.
  • –––, 2015, “Neither fish nor fowl: Implicit attitudes as patchy endorsements”, Noûs , 49(4): 800–823.
  • Levy, N. & T. Bayne, 2004, “Doing without deliberation: automatism, automaticity, and moral accountability”, International Review of Psychiatry , 16(3): 209–215.
  • Levy, N. & E. Mandelbaum, 2014, “The Powers that Bind: Doxastic Voluntarism and Epistemic Obligation”, in The Ethics of Belief , J. Matheson & R. Vitz (eds.), Oxford: Oxford University Press.
  • Lewis, D., 1982, “Logic for Equivocators”, Nous , 431–441.
  • Machery, E., 2016, “De-Freuding Implicit Attitudes”, in Brownstein & Saul (eds.) 2016A.
  • –––, 2017, “Do Indirect Measures of Biases Measure Traits or Situations?,” Psychological Inquiry , 28(4): 288–291.
  • Machery, E. & L. Faucher, 2005, “Social construction and the concept of race”, Philosophy of Science , 72(5): 1208–1219.
  • Machery, E., Faucher, L. & D. Kelly, 2010, “On the alleged inadequacy of psychological explanations of racism”, The Monist , 93(2): 228–255.
  • Madva, A., 2012, The hidden mechanisms of prejudice: Implicit bias and interpersonal fluency , PhD dissertation, Columbia University.
  • –––, 2016a, “A plea for anti-anti-individualism: How oversimple psychology misleads social policy,” Ergo, an Open Access Journal of Philosophy , 3.
  • –––, 2016b, “Virtue, Social Knowledge, and Implicit Bias”, in Brownstein & Saul (eds.) 2016A.
  • –––, 2016c, “Why implicit attitudes are (probably) not beliefs,” Synthese , 193(8): 2659–2684.
  • –––, 2017, “Biased Against De-Biasing: On the Role of (Institutionally Sponsored) Self-Transformation in the Struggle Against Prejudice,” Ergo , 4: 145–179.
  • –––, 2018, “Implicit bias, moods, and moral responsibility,” Pacific Philosophical Quarterly , 99: 53–78.
  • –––, 2019, “Social Psychology, Phenomenology, and the Indeterminate Content of Unreflective Racial Bias,” in E. S. Lee (Ed.), Race as Phenomena . New York: Rowman and Littlefield.
  • Madva, A. & M. Brownstein, 2018, “Stereotypes, Prejudice, and the Taxonomy of the Implicit Social Mind,” Nous , 53(2): 611–644.
  • Mai, R., S. Hoffmann, J. Helmert, B. Velichkovsky, S. Zahn, D. Jaros, … & H. Rohm, 2011, “Implicit food associations as obstacles to healthy nutrition: the need for further research”, The British Journal of Diabetes & Vascular Disease , 11(4): 182–186.
  • Maison, D., A. Greenwald, & R. Bruin, 2004, “Predictive validity of the Implicit Association Test in studies of brands, consumer attitudes, and behavior”, Journal of Consumer Psychology , 14(4): 405–415.
  • Mandelbaum, E., 2011, “The architecture of belief: An essay on the unbearable automaticity of believing”, Doctoral dissertation, University of North Carolina, Chapel Hill.
  • –––, 2013, “Against alief”, Philosophical Studies , 165:197–211.
  • –––, 2014, “Thinking is Believing”, Inquiry , 57(1): 55–96.
  • –––, 2016, “Attitude, Association, and Inference: On the Propositional Structure of Implicit Bias”, Noûs , 50(3): 629–658.
  • Mann, T. C., & M.J. Ferguson, 2017, “Reversing implicit first impressions through reinterpretation after a two-day delay,” Journal of Experimental Social Psychology , 68: 122–127.
  • Mann, N. & K. Kawakami, 2012, “The long, steep path to equality: Progressing on egalitarian goals”, Journal of Experimental Psychology: General , 141(1): 187.
  • McConahay, J., 1982, “Self-interest versus racial attitudes as correlates of anti-busing attitudes in Louisville: Is it the buses or the Blacks?”, The Journal of Politics , 44(3): 692–720.
  • McConahay, J., B. Hardee, & V. Batts, 1981, “Has racism declined in America? It depends on who is asking and what is asked”, Journal of conflict resolution , 25(4): 563–579.
  • Meissner, C. & J. Brigham, 2001, “Thirty years of investigating the own-race bias in memory for faces: A meta-analytic review”, Psychology, Public Policy, and Law , 7(1): 3–35.
  • Mekawi, Y., & K. Bresin, 2015, “Is the evidence from racial bias shooting task studies a smoking gun? Results from a meta-analysis,” Journal of Experimental Social Psychology , 61: 120–130.
  • Mendoza, S., P. Gollwitzer, & D. Amodio, 2010, “Reducing the Expression of Implicit Stereotypes: Reflexive Control Through Implementation Intentions”, Personality and Social Psychology Bulletin , 36(4): 512–523.
  • Merleau-Ponty, M., 1945/2013, Phenomenology of Perception , New York: Routledge.
  • Millikan, R., 1995, “Pushmi-pullyu representations”, Philosophical Perspectives , 9: 185–200.
  • Mills, C., 1999, The Racial Contract , Ithaca, NY: Cornell University Press.
  • Mischel, W., 1968, Personality and Assessment . Hoboken, NJ: Wiley.
  • Mitchell, C., J. De Houwer, & P. Lovibond, 2009, “The propositional nature of human associative learning”, Behavioral and Brain Sciences , 32(2): 183–198.
  • Monteith, M., 1993, “Self-regulation of prejudiced responses: Implications for progress in prejudice-reduction efforts”, Journal of Personality and Social Psychology , 65(3): 469–485.
  • Monteith, M., L. Ashburn-Nardo, C. Voils, & A. Czopp, 2002, “Putting the brakes on prejudice: on the development and operation of cues for control”, Journal of personality and social psychology , 83(5): 1029–1050.
  • Moskowitz, G. & P. Li, 2011, “Egalitarian goals trigger stereotype inhibition: a proactive form of stereotype control”, Journal of Experimental Social Psychology , 47(1): 103–116.
  • Moss-Racusin, C., J. Dovidio, V. Brescoll, M. Graham, & J. Handelsman, 2012, “Science faculty’s subtle gender biases favor male students”, Proceedings of the National Academy of the Sciences , 109(41): 16474–16479. doi:10.1073/pnas.1211286109
  • Mugg, J., 2013, “What are the cognitive costs of racism? A reply to Gendler”, Philosophical studies , 166(2): 217–229.
  • Muller, H. & B. Bashour, 2011, “Why alief is not a legitimate psychological category”, Journal of Philosophical Research , 36: 371–389.
  • Murphy, M.C., K.M. Kroeper, & E.M. Ozier, 2018, “Prejudiced Places: How Contexts Shape Inequality and How Policy Can Change Them,” Policy Insights from the Behavioral and Brain Sciences , 5(1): 66–74.
  • Murphy, M.C. & G.M. Walton, 2013, “From prejudiced people to prejudiced places: A social-contextual approach to prejudice,” in C. Stangor & C. Crandall (eds.), Frontiers in Social Psychology Series: Stereotyping and Prejudice , Psychology Press: New York, NY.
  • Nagel, J., 2012, “Gendler on alief”, Analysis , 72(4): 774–788.
  • Nier, J., 2005, “How dissociated are implicit and explicit racial attitudes?: A bogus pipeline approach”, Group Processes & Intergroup Relations , 8: 39–52.
  • Nisbett, R. & T. Wilson, 1977, “Telling more than we can know: Verbal reports on mental processes”, Psychological review , 84(3): 231–259.
  • Nosek, B. & M. Banaji, 2001, “The go/no-go association task”, Social Cognition , 19(6): 625–666.
  • Nosek, B., M. Banaji, & A. Greenwald, 2002, “Harvesting intergroup implicit attitudes and beliefs from a demonstration website”, Group Dynamics , 6: 101–115.
  • Nosek, B., J. Graham, & C. Hawkins, 2010, “Implicit Political Cognition”, in Handbook of implicit social cognition: Measurement, theory, and applications , B. Gawronski & B. Payne (eds.), New York, NY: Guilford Press, pp. 548–564.
  • Nosek, B., A. Greenwald, & M. Banaji, 2005, “Understanding and using the Implicit Association Test: II. Method variables and construct validity”, Personality and Social Psychology Bulletin , 31(2): 166–180.
  • –––, 2007, “The Implicit Association Test at Age 7: A Methodological and Conceptual Review”, in Automatic Processes in Social Thinking and Behavior , J.A. Bargh (ed.), Philadelphia: Psychology Press.
  • Nosek, B., C. Hawkins, & R. Frazier, 2011, “Implicit social cognition: from measures to mechanisms”, Trends in cognitive sciences , 15(4): 152–159.
  • Olson, M. & R. Fazio, 2001, “Implicit attitude formation through classic conditioning”, Psychological Science , 12(5): 413–417.
  • –––, 2006, “Reducing automatically activated racial prejudice through implicit evaluative conditioning”, Personality and Social Psychology Bulletin , 32: 421–433.
  • –––, 2009, “Implicit and explicit measures of attitudes: The perspective of the MODE model”, Attitudes: Insights from the new implicit measures , 19–63.
  • Oskamp, S., Harrington, M. J., Edwards, T. C., Sherwood, D. L., Okuda, S. M., & D.C. Swanson, 1991, “Factors influencing household recycling behavior,” Environment and behavior , 23(4): 494–519.
  • Oswald, F., G. Mitchell, H. Blanton, J. Jaccard, & P. Tetlock, 2013, “Predicting Ethnic and Racial Discrimination: A Meta-Analysis of IAT Criterion Studies”, Journal of Personality and Social Psychology , 105(2): 171–192. doi: 10.1037/a0032734
  • –––, 2015, “Using the IAT to predict ethnic and racial discrimination: Small effect sizes of unknown societal significance,” Journal of Personality and Social Psychology , 108(4): 562–571.
  • Paluck, E. & D. Green, 2009, “Prejudice Reduction: What Works? A Review and Assessment of Research and Practice”, Annual Review of Psychology , 60: 339–367.
  • Payne, B., 2001, “Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon”, Journal of Personality and Social Psychology , 81: 181–192.
  • Payne, B., C.M. Cheng, O. Govorun, & B. Stewart, 2005, “An inkblot for attitudes: Affect misattribution as implicit measurement”, Journal of Personality and Social Psychology , 89: 277–293.
  • Payne, B., & B. Gawronski, 2010, “A history of implicit social cognition: Where is it coming from? Where is it now? Where is it going?”, in Handbook of implicit social cognition: Measurement, theory, and applications , B. Gawronski, & B. Payne (eds.), New York, NY: Guilford Press, pp. 1–17.
  • Payne, B., A. Lambert, & L. Jacoby, 2002, “Best laid plans: Effects of goals on accessibility bias and cognitive control in race-based misperceptions of weapons”, Journal of Experimental Social Psychology , 38: 384–396.
  • Payne, B.K., H.A. Vuletich, & K.B. Lundberg, 2017, “The bias of crowds: How implicit bias bridges personal and systemic prejudice,” Psychological Inquiry , 28: 233–248.
  • Penner, L., J. Dovidio, T. West, S. Gaertner, T. Albrecht, R. Dailey, & T. Markova, 2010, “Aversive racism and medical interactions with Black patients: A field study”, Journal of Experimental Social Psychology , 46(2): 436–440.
  • Perkins, A. & M. Forehand, 2012, “Implicit self-referencing: The effect of nonvolitional self-association on brand and product attitude”, Journal of Consumer Research , 39(1): 142–156.
  • Pessoa, L., forthcoming, “The Cognitive-Emotional Brain”, Behavioral and Brain Sciences .
  • Peters, D. & S. Ceci, 1982, “Peer-review practices of psychological journals: The fate of published articles, submitted again”, Behavioral and Brain Sciences , 5(2): 187–195.
  • Pettigrew, T. & L. Tropp, 2006, “A Meta-Analytic Test of Intergroup Contact Theory”, Journal of Personality and Social Psychology , 90: 751–83.
  • Petty, R., 2006, “A metacognitive model of attitudes”, Journal of Consumer Research , 33(1): 22–24.
  • Petty, R., P. Briñol, & K. DeMarree, 2007, “The meta-cognitive model (MCM) of attitudes: Implications for attitude measurement, change, and strength”, Social Cognition , 25(5): 657–686.
  • Phills, C., K. Kawakami, E. Tabi, D. Nadolny, & M. Inzlicht, 2011, “Mind the Gap: Increasing the associations between the self and blacks with approach behaviors”, Journal of Personality and Social Psychology , 100: 197–210.
  • Proffitt, D., 2006, “Embodied perception and the economy of action”, Perspectives on psychological science , 1(2): 110–122.
  • Puddifoot, K., 2017, “Dissolving the epistemic/ethical dilemma over implicit bias,” Philosophical Explorations , 20(sup1): 73–93.
  • Railton, P., 2009, “Practical Competence and Fluent Agency”, in Reasons for Action , D. Sobel & S. Wall (eds.), Cambridge: Cambridge University Press, pp. 81–115.
  • –––, 2014, “The Affective Dog and its Rational Tale: Intuition and Attunement”, Ethics , 124(4): 813–859.
  • Richeson, J. & J. Shelton, 2003, “When prejudice does not pay effects of interracial contact on executive function”, Psychological Science , 14(3): 287–290.
  • –––, 2007, “Negotiating interracial interactions: Costs, consequences, and possibilities”, Current Directions in Psychological Science , 16: 316–320.
  • Ross, L., M. Lepper, & M. Hubbard, 1975, “Perseverance in Self-Perception and Social Perception: Biased Attributional Processes in the Debriefing Paradigm”, Journal of Personality and Social Psychology , 32(5): 880–892.
  • Ryle, G., 1949/2009, The Concept of Mind , New York: Routledge.
  • Sarkissian, H., 2010, “Minor tweaks, major payoffs: The problems and promise of situationalism in moral philosophy”, Philosopher’s Imprint , 10(9): 1–15.
  • Saul, J., 2012, “Skepticism and Implicit Bias”, Disputatio , Lecture, 5(37): 243–263.
  • –––, 2013, “Unconscious Influences and Women in Philosophy”, in Women in Philosophy: What Needs to Change?, F. Jenkins & K. Hutchison (eds.), Oxford: Oxford University Press.
  • Scanlon, T., 1998, What We Owe Each Other , Cambridge: Harvard University Press.
  • Schacter, D., 1987, “Implicit memory: History and current status”, Journal of Experimental Psychology: Learning, Memory, and Cognition , 13: 501–518.
  • Schneider, D., 2004, The Psychology of Stereotyping , New York: Guilford Press.
  • Schwitzgebel, E., 2002, “A Phenomenal, Dispositional Account of Belief”, Nous , 36: 249–275.
  • –––, 2006/2010, “Belief”, The Stanford Encyclopedia of Philosophy , (Winter 2010 edition), E. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2010/entries/belief/ >
  • –––, 2010, “Acting contrary to our professed beliefs, or the gulf between occurrent judgment and dispositional belief”, Pacific Philosophical Quarterly , 91: 531–553.
  • –––, 2013, “A Dispositional Approach to Attitudes: Thinking Outside of the Belief Box”, in New Essays on Belief , N. Nottelmann (ed.), New York: Palgrave Macmillan, pp. 75–99.
  • Sher, G., 2009, Who Knew? Responsibility without Awareness , Oxford: Oxford University Press.
  • Shiffrin, R. & W. Schneider, 1977, “Controlled and automatic human information processing: Perceptual learning, automatic attending, and a general theory”, Psychological Review , 84: 127–190.
  • Shoemaker, D., 2003, “Caring, Identification, and Agency”, Ethics , 118: 88–118.
  • –––, 2011, “Attributability, Answerability, and Accountability: Towards a Wider Theory of Moral Responsibility”, Ethics , 121: 602–632.
  • Sie, M. & N. Vorst Vader-Bours, 2016, “Personal Responsibility vis-à-vis Prejudice Resulting from Implicit Bias”, in Brownstein & Saul (eds.) 2016B.
  • Siegel, S., 2012, “Cognitive Penetrability and Perceptual Justification”, Nous , 46(2): 201–222.
  • –––, 2017, The Rationality of Perception , Oxford: Oxford University Press.
  • –––, forthcoming, “Bias and Perception,” in Beeghly & Madva (eds.), forthcoming.
  • Singal, J., 2017, “Psychology’s Favorite Tool for Measuring Racism isn’t Up to the Job,” New York Magazine , [ available online ].
  • Smith, A., 2005, “Responsibility for attitudes: activity and passivity in mental life”, Ethics , 115(2): 236–271.
  • –––, 2008, “Control, responsibility, and moral assessment”, Philosophical Studies ,138: 367–392.
  • –––, 2012, “Attributability, Answerability, and Accountability: In Defense of a Unified Account”, Ethics , 122(3): 575–589.
  • Smith, H., 2011, “Non-Tracing Cases of Culpable Ignorance”, Criminal Law and Philosophy , 5: 115–146.
  • Snow, N., 2006, “Habitual Virtuous Actions and Automaticity”, Ethical Theory and Moral Practice , 9: 545–561.
  • Sripada, C., 2016, “Self-expression: A deep self theory of moral responsibility,” Philosophical Studies , 173(5): 1203–1232.
  • Stalnaker, R., 1984, Inquiry , Cambridge, MA: MIT Press.
  • Steele, C. & J. Aronson, 1995, “Stereotype threat and the intellectual test performance of African Americans”, Journal of personality and social psychology , 69(5): 797–811.
  • Stewart, B. & B. Payne, 2008, “Bringing Automatic Stereotyping under Control: Implementation Intentions as Efficient Means of Thought Control”, Personality and Social Psychology Bulletin , 34: 1332–1345.
  • Strack, F. & R. Deutsch, 2004, “Reflective and impulsive determinants of social behaviour”, Personality and Social Psychology Review , 8: 220–247.
  • Strenze, T., 2007, “Intelligence and socioeconomic success: A meta-analytic review of longitudinal research,” Intelligence , 35: 401–426.
  • Suhler, C. & P. Churchland, 2009, “Control: conscious and otherwise”, Trends in cognitive sciences , 13(8): 341–347.
  • Talaska, C., Fiske, S., and S. Chaiken, 2008, “Legitimating Racial Discrimination: Emotions, Not Beliefs, Best Predict Discrimination in a Meta-Analysis,” Social Justice Research 21(3): 263–296.
  • Taylor, S. & J. Brown, 1988, “Illusion and well-being: a social psychological perspective on mental health”, Psychological bulletin , 103(2): 193–210.
  • Teachman, B.A. & S.R. Woody, 2003, “Automatic processing in spider phobia: Implicit fear associations over the course of treatment,” Journal of Abnormal Psychology , 112(1): 100.
  • Tetlock, P., O. Kristel, B. Elson, M. Green, & J. Lerner, 2000, “The psychology of the unthinkable: Taboo trade-offs, forbidden base rates, and heretical counterfactuals”, Journal of Personality and Social Psychology , 78(5): 853–870.
  • Tetlock, P., & G. Mitchell, 2009, “Implicit bias and accountability systems: What must organizations do to prevent discrimination?”, Research in Organizational Behavior , 29: 3–38.
  • Trawalter, S. & J. Richeson, 2006, “Regulatory focus and executive function after interracial interactions”, Journal of Experimental Social Psychology , 42(3): 406–412.
  • Valian, V., 1998, Why so slow? The advancement of women , Cambridge, MA: M.I.T. Press.
  • –––, 2005, “Beyond gender schemas: Improving the advancement of women in academia”, Hypatia , 20: 198–213.
  • Van Dessel, P., De Houwer, J., & C.T. Smith, 2018, “Relational information moderates approach-avoidance instruction effects on implicit evaluation,” Acta Psychologica , 184: 137–143.
  • Vargas, M., 2005, “The Revisionist’s Guide to Responsibility”, Philosophical Studies , 125(3): 399–429.
  • Vianello, M., Robusto, E., & P. Anselmi, 2010, “Implicit conscientiousness predicts academic performance,” Personality and Individual Differences , 48: 452–457.
  • Walther, E., 2002, “Guilty by Mere Association: Evaluative Conditioning and the Spreading Attitude Effect”, Journal of Personality and Social Psychology , 82(6): 919–34.
  • Washington, N. & D. Kelly, 2016, “Who’s responsible for this? Implicit bias and the knowledge condition”, in Brownstein & Saul (eds.) 2016B.
  • Watson, G., 1975, “Free Agency”, Journal of Philosophy , 72(8): 205–220.
  • –––, 1996, “Two faces of responsibility”, Philosophical Topics , 24(2): 227–248.
  • Webb, T., P. Sheeran, & A. Pepper, 2012, “Gaining control over responses to implicit attitude tests: Implementation intentions engender fast responses on attitude-incongruent trials”, British Journal of Social Psychology , 51(1): 13–32. doi:10.1348/014466610X532192
  • Welpinghus, A., forthcoming, “The imagination model of implicit bias,” Philosophical Studies , first online 11 March 2019. doi:10.1007/s11098-019-01277-1
  • Wigley, S., 2007, “Automaticity, Consciousness, and Moral Responsibility”, Philosophical Psychology , 20(2): 209–225.
  • Wolfe, R., & S. Johnson, 1995, “Personality as a Predictor of College Performance,” Education and Psychological Measurement, 55(2): 177–185.
  • Zanna, M. P., & R.H. Fazio, 1982, “The attitude-behavior relation: Moving toward a third generation of research,” in M. P. Zanna, E. T. Higgins, C. P. Herman (Eds.), Consistency in social behavior: The Ontario symposium (Vol. 2, pp. 283–301), Hillsdale, N.J.: Erlbaum.
  • Zeigler-Hill, V. & C. Jordan, 2010, “Two faces of self-esteem”, in Handbook of implicit social cognition: Measurement, theory, and applications , B. Gawronski & B. Payne (eds.), NY: Guilford Press, pp. 392–407.
  • Zheng, R., 2016, “Attributability, Accountability and Implicit Attitudes”, in Brownstein & Saul (eds.) 2016B.
  • Zimmerman, A., 2007, “The nature of belief”, Journal of Consciousness Studies , 14(11): 61–82.

Other Internet Resources

  • Brownstein, M., Madva, A., and B. Gawronski, ms., “Understanding implicit bias: Putting the criticism into perspective” .
  • Johnson, G., ms, “The Structure of Bias”.
  • Mallon, R., ms, “Psychology, Accumulation Mechanisms, and Race”.
  • Hermanson, S., 2018, “Rethinking Implicit Bias: I want my money back,” [ available online ].
  • Jussim, L., 2018, “Comment on Hermanson, S., 2018, Rethinking Implicit Bias: I want my money back,” [ available online ].
  • Project Implicit (homepage of the IAT)
  • Climate for Women and Underrepresented Groups at Rutgers
  • MAP (Minorities and Philosophy)
  • Active Bystander Strategies
  • Tutorials for Change—Gender Schemas and Science
  • The Gender Equity Project
  • Philosophy of Brains Roundtable on the IAT
  • Peanut Butter, Jelly and Racism

belief | cognitive science | feminist philosophy, interventions: moral psychology | feminist philosophy, interventions: social epistemology | moral responsibility | race | self-knowledge

Acknowledgments

Many thanks to Yarrow Dunham, Jules Holroyd, Bryce Huebner, Daniel Kelly, Calvin Lai, Carole Lee, Alex Madva, Eric Mandelbaum, Jennifer Saul, and Susanna Siegel for invaluable suggestions and feedback. Thanks also to the Leverhulme Trust for funding the “Implicit Bias and Philosophy” workshops at the University of Sheffield from 2011–2013, and to Jennifer Saul for running the workshops and making them a model of scholarship and collaboration at its best.

Copyright © 2019 by Michael Brownstein < msbrownstein @ gmail . com >


The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab , Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

Implicit Bias in the Workplace Essay

In the context of today’s rapidly changing world, discrimination has become unacceptable in any of its manifestations. At the legislative level, authorities around the world have made great progress in preventing bias in the workplace, in medical care, and in access to basic social needs. One persistent, full-scale problem confronting society today, however, is implicit bias in the workplace. The very notion of implicit bias presupposes that people hold stereotypical attitudes toward individuals of a specific race, ethnicity, gender, or sexual orientation without consciously perceiving that they discriminate. As a result, even people who do their best to combat prejudice still discriminate against their colleagues. In today’s discussions of workplace equality, gender and race are the categories that matter and stand out the most. I would like to dedicate particular attention to racial bias, because I feel the problem still demands much work before it can be eradicated from the common consciousness.

The modern workplace has overcome many challenges to reach its present state, giving people the opportunity to find a job regardless of gender, race, and ethnicity. Once people are hired, however, it becomes clear that there is a major discrepancy between the positions held by white employees and those held by employees of other races (Thomas, 2019). In efforts to achieve racially just workplace environments, workers struggle with implicit racism in several manifestations. First, African-American employees must contend with aversive racism, which leads coworkers to change their behavioral patterns around them (Roberts & Mayo, n.d.). Another expression of implicit racism concerns expectations about the occupations common for African-Americans, including the assumption that they do not frequently hold high positions within an enterprise. Finally, implicit racism in the workplace is reinforced by people who ignore the fact that Black people still face discrimination, reassuring themselves that the modern labor market is open to every race and ethnicity on equal terms.

In fact, in recent years considerably more African-Americans have held leadership positions, creating the impression that racism has been eradicated from the workplace. Beyond their ordinary workload, however, they must also face mistreatment or tension from other employees, which limits their ability to become efficient leaders (Caver & Livers, 2002). This atmosphere, also known by the term “miasma,” causes high stress rates among African-American employees across the country.

Bearing this information in mind, I tried to identify which implicit biases still interfere with my own perception of racial equality. I realized that my relationships with African-American colleagues frequently include minor manifestations of aversive racism, as I subconsciously pay too much attention to the way I act around them. While I try too hard to avoid potentially awkward situations, African-Americans might perceive such behavior as an implicit offense. Moreover, at a subconscious level I often misinterpret the very notion of equality: instead of recognizing and respecting racial diversity, I sometimes try to diminish the cultural differences between races in order to perceive them as “equal.” I do understand that differences of cultural affiliation and history will never allow me to fully grasp everything others go through daily. Yet that understanding is sometimes undermined by the social pressure of constant attempts to equate racial experiences.

Caver, K. A., & Livers, A. B. (2002). Dear white boss.

Roberts, L. M., & Mayo, A. J. (n.d.). Toward a racially just workplace.

Thomas, C. (2019). Is empathy the link? An exploration of implicit racial bias in the workplace (Doctoral dissertation, University of Pennsylvania).

IvyPanda. (2022, February 8). Implicit Bias in the Workplace. https://ivypanda.com/essays/implicit-bias-in-the-workplace/



Implicit Bias

1. An implicit bias is a set of attitudes or beliefs regarding a specific group, such as an ethnic group, an age group, or a particular gender. These associations are not inherently positive or negative, but they tend to be misguided and uninformed. Implicit biases are important to understand in regard to health disparities because holding certain biases can create or prolong those disparities through their impact on health care. Eliminating implicit biases can help reduce these disparities, resulting in more equitable care for all groups.

2. The IAT I completed was the skin-tone IAT, which involved categorizing pictures of individuals and then sorting words with either a positive or negative connotation. I found the IAT interesting, but I am not certain it identified whether I have any implicit biases of my own, as simply following directions will not reveal any biases. The test seemed to work by identifying the mistakes a test-taker makes.


3. The results did not surprise me too much, but I was also methodical in how I approached my responses. I am glad that the test results showed I do not have any significant implicit biases regarding skin tone. The mistakes I made stemmed from uncertainty about whether someone was considered light-skinned or dark-skinned, as a few faces could have been categorized as either.

4. To mitigate the potential effects of implicit bias, I can avoid making assumptions about any particular group, whether defined by ethnicity, religion, gender, or age. For instance, one might hold an implicit bias that the elderly are incapable of being active, and therefore might not recommend exercise as part of treatment even when it is the best recommendation. Communicating with patients individually is therefore the best way to avoid making decisions based on implicit bias.


Implicit Bias Essay Sample


Published: 09/08/2021


Implicit bias refers to an unconscious attitude or behavior toward a certain group in society. Education and awareness have led people to believe that all groups and communities must be treated equally and that discrimination of any sort must be avoided. The aim of educating the masses is to form a society in which all humans enjoy rights regardless of their color, occupation, race, and religion. However, old beliefs and values are hard to change, and they continue to seep into minds from generation to generation. Although awareness has played its role in making people understand the importance of equality and justice, some form of discrimination persists in the back of our minds. This is because the majority of the people who shape societies and norms still have not openly accepted minority groups.

It is usually assumed that the new generation will eliminate all forms of injustice and promote equality among all ethnic and religious groups; however, it does not work this way. Young minds are taught about equality, but the values they absorb from elders and other members of society contradict those lessons, and they end up treating minority groups with a slight degree of inequality.

The term stereotype is frequently used in connection with minority groups and the biased behavior their members experience. A stereotype assigns certain characteristics to all members of a group based on assumptions and past experiences. For instance, the claim that Black people are usually involved in crime is a stereotype, often justified by the observation that many come from poor and undereducated backgrounds. Not all Black people are criminals; nevertheless, the same lens is used to view all Black people living in the society.

Decades ago, when Africans arrived in the US, they were never welcomed by the native whites. Black people were viewed with disgust and hatred because of their skin color and appearance, and Black and white people rarely got along because of racial divisions. Americans are regarded as one of the most progressive nations, but when it comes to racism they fall short, because to date they have not given Black people equal rights in society.

It is quite heartbreaking that parts of society still disapprove when a Black and a white child play together, or when Black and white families share a neighborhood. Interracial marriages are rarely supported, even though they strengthen bonds between Black and white families. Black people are heavily discriminated against when they are denied jobs and promotions at work, are paid less than their white colleagues, and are denied offers when purchasing or renting property. Much of the world was shocked when Barack Obama was elected President of the United States of America because he was African American. Many party members who favored white candidates became strong opponents of Obama; however, he set all racial differences aside and led the nation with dignity and capability.

It must also be brought to attention that in societies where kids grow up surrounded by discrimination, they experience psychological harm that is later reflected in their behavior. Consider, for instance, a Black child who is bullied at school by a group of white classmates. He is mocked for being dirty and dark in complexion, and he cries when the bullies tell him that all Black people look alike. His confidence is shattered and his dreams are crushed; he comes to feel like an inferior member of society, afraid to move on in life like others.

Implicit Bias Test

An implicit bias test is a tool that helps to identify how individuals perceive minority groups in a society (Bartlett, 2017). Although people accept that Black people have the right to be treated equally in all respects, they unconsciously become biased. Growing up, people may hear from those close to them that Black people should not be treated well; as adults they realize the opposite holds true, yet in certain situations these same people unconsciously engage in acts of inequality toward minorities.

The implicit bias test allows people to self-assess their attitudes toward minority groups. The test is built around trials that probe whether positive words and emotions are linked to white Americans or Europeans while negative ones are linked to Black people. White people are generally perceived as attractive, successful, intelligent, and trustworthy, whereas Black people are stereotyped as ugly, ill-mannered, less educated, less advanced, and troublemakers.

The most interesting part of the test was grouping words as good or bad. In the next step, I was asked to group bad with Africans and good with Americans. The third part was trickier: the same words and pictures were to be sorted into right and left columns, with bad and Americans on one side and good and Africans on the other. It was quite shocking to learn that my fingers worked swiftly on the keyboard when I had to place bad and African in one group and good and American in the other, but I was a bit slower when placing bad and American in one group and good and African in the other. The results revealed that I have a slight automatic preference for European Americans over African Americans.

The test results were described as less accurate; even so, the test was an eye-opener for me. For years I have watched this discrimination between whites and Blacks, and it sometimes depresses me. I believe all human beings are the creation of God, and making a dark-complexioned person feel bad and inferior about his personality and confidence is totally wrong and must be condemned. White people have no right to feel superior merely because of their appearance or social advantages. However, such discrimination is difficult to eliminate.

It is the duty of parents and teachers to make young kids and students understand that Black and white people are alike and must be treated equally; preferential treatment should be based only on achievements and contributions to society. Young minds can certainly break stereotypes and responsibly play their part in giving equal treatment to oppressed groups. Government authorities must also take measures to ensure that minority groups do not face bullying or discrimination when they apply for jobs, start businesses, or rent apartments. Several not-for-profit organizations work to safeguard the rights of minorities; however, the authorities need to keep a proper check to make certain that peace and equality are maintained in all societies.


Bartlett, T. (2017). Can we really measure implicit bias? Maybe not. The Chronicle of Higher Education.


American Academy of Arts and Sciences

The Science of Implicit Race Bias: Evidence from the Implicit Association Test


Beginning in the mid-1980s, scientific psychology underwent a revolution—the implicit revolution—that led to the development of methods to capture implicit bias: attitudes, stereotypes, and identities that operate without full conscious awareness or conscious control. This essay focuses on a single notable thread of discoveries from the Race Attitude Implicit Association Test (RA-IAT) by providing 1) the historical origins of the research, 2) signature and replicated empirical results for construct validation, 3) further validation from research in sociocognitive development, neuroscience, and computer science, 4) new validation from robust association between regional levels of race bias and socially significant outcomes, and 5) evidence for both short- and long-term attitude change. As such, the essay provides the first comprehensive repository of research on implicit race bias using the RA-IAT. Together, the evidence lays bare the hollowness of current-day actions to rectify disadvantage experienced by Black Americans at individual, institutional, and societal levels.

Kirsten N. Morehouse is a PhD candidate in psychology at Harvard University. She uses computational and behavioral tools to study when and why humans harbor implicit associations that are in conflict with ground truth data and consciously held beliefs. She has published in such journals as Proceedings of the National Academy of Sciences , Current Research in Ecological and Social Psychology , and Journal of Personality and Social Psychology .

Mahzarin R. Banaji , a Fellow of the American Academy since 2008, is the Richard Clarke Cabot Professor of Social Ethics in the Department of Psychology and the first Carol K. Pforzheimer Professor at the Radcliffe Institute for Advanced Study at Harvard University; and the George A. and Helen Dunham Cowan Chair in Human Dynamics at the Santa Fe Institute. She is the author of Blindspot: Hidden Biases of Good People (with Anthony G. Greenwald, 2013).

The science of implicit race bias emerged from a puzzle. By the 1980s, laboratory experiments and surveys revealed clear and noteworthy reductions in expressions of racial animus by White Americans toward Black Americans. 1 But on every dimension that determines life’s opportunities and outcomes—housing, employment, education, health care, treatment by law and law enforcement—the presence of widespread racial inequality remained. Further, on surveys asking even slightly indirect questions, such as attitudes toward federal support for racial equality in employment, attitudes appeared to have regressed, with 38 percent support in 1964 dropping to 28 percent in 1996. 2 These inconsistencies demanded an answer from science.

In their search for an explanation, experimental psychologists recalled an interesting dissociation or disparity in beliefs recorded decades ago. During his travels through the Jim Crow South, Gunnar Myrdal, a Swedish economist engaged by the Carnegie Corporation to conduct a study on interracial relations in America, encountered an unexpected dilemma. The data from surveys and interviews of White Americans confirmed expected expressions of racism. And yet as Myrdal noted, other sentiments from the very same individuals spoke to their uneasy acknowledgment of a disparity between the cherished national ideal of equality and the history of slavery and the realities of racism, even decades after emancipation. These dissonant cognitions, expressed inside quiet homes and noisy factories, struck Myrdal as distinctive enough to serve as the motif for his classic treatise, An American Dilemma: The Negro Problem and Modern Democracy . 3

Four decades later, psychologists responded to receding levels of “old-fashioned racism” by generating theories of “aversive racism” and measures of “modern racism.” 4 These ideas emerged as necessary acknowledgment that although race bias persists, modern racism manifests in more indirect and subtle ways than before. Indeed, experimental data emerging in the 1980s further highlighted the presence of automatic race bias in the minds of honest race egalitarians. 5 With accumulating evidence demonstrating that many judgments and decisions could operate outside conscious awareness or control, social psychologists Anthony G. Greenwald and Mahzarin R. Banaji proposed the idea of implicit bias and suggested that a tractable measure of implicit cognition was needed. 6 This essay reports on a thread of the development and discoveries of a singularly important test: the Race Attitude Implicit Association Test (hereafter, RA-IAT), a measure designed to capture differential automatic attitudes, such as associations of “good” and “bad” with White and Black Americans. 7

In 1967, Martin Luther King Jr. gave the keynote address at the annual meeting of the American Psychological Association (APA), only months before his assassination. He seemed to be aware that his audience of largely White Americans was eager to learn how they could contribute to the success of the civil rights movement. But King’s speech clearly conveyed his perspective regarding the responsibility of the APA’s scholars and clinicians. If they wished to support the movement, they should simply “‘tell it like it is.’” 8 This essay is a response to that call from more than fifty years ago, to emphasize the strength and pervasiveness of anti-Black bias today. We tell it like it is, believing that empirical knowledge production is indeed the responsibility of scientists with expertise in psychological and other sciences. However, the responsibility of addressing challenges to the ideal of racial justice sits squarely at the feet of the nation. In fact, it would be ill-advised to expect scientists—who generally lack knowledge of history, law, policy development, organizational behavior, and the modes of societal change—to be primarily responsible for imagining and constructing paths to social change. By telling it like it is, and remaining focused on the evidence itself, this report can, should the will exist, serve as a foothold to move America toward a solution to racial inequality.

History and Definitions

The science of implicit bias is rooted in experimental psychology. At the core of a particular family of measures is the concept of mental chronometry: studying the mind by measuring the time course of human information processing. 9 That is, rather than analyzing participants’ responses to a question, the critical unit of measurement is the response latency, or the time it takes to react to a stimulus. In the 1970s, researchers conducted the first robust studies testing the automaticity of semantic memory. These studies indexed the strength of association between two concepts by using precisely timed stimuli and measuring an individual’s response latencies on the order of tens of milliseconds. 10 These procedures were soon adapted to test another important dimension of word meaning: valence, that is, the good-bad or pleasant-unpleasant dimension. Evidence soon emerged that, like semantic meaning, word or concept valence could be automatically extracted by relying on response latencies. 11 Today, this result is received wisdom, and evaluative priming is regarded as a standard method to measure automatic attitudes. 12

This class of experimental procedures captured the attention of psychologists concerned with the limitation of self-report measures of racism: individuals can withhold their true beliefs in favor of more socially desirable responses. Moreover, even if the desire to speak forthrightly is assured, self-report measures are limited because humans have a desire to present a positive view of themselves, not just to others but even to themselves. Finally, even if such concerns about self and social desirability were removed, a great deal of research had demonstrated that access to mental content and process is vastly limited, making the problem less an issue of motivation and more one of inaccessibility. 13 These considerations, especially the latter, led psychologists to adapt mental chronometry to study automatic or implicit forms of bias. Race was a natural domain for exploration because of the inconsistency between conscious values in aspirational documents like the U.S. Constitution and the history of American racism.

A harbinger of the breakthrough to come appeared in a paper by psychologists John F. Dovidio, Nancy Evans, and Richard Tyler. 14 Diverging notably from previous research methods, these researchers sat their subjects before a computer screen on which the category labels “Black” or “White” appeared. After each of these primes, target words that represented positive and negative stereotypes of these groups (such as ambitious, sensitive, stubborn, lazy) appeared on the screen, and subjects were asked to decide rapidly if each stereotypic word could “ever be true” or was “always false” of the group. The results were clear: participants classified words more quickly when positive words followed “White” and when negative words followed “Black” primes, suggesting that the category White was more positive than Black in participants’ implicit cognition. Although this method lacked the components that are characteristic of standard measures of implicit cognition today (the response task still required deliberation), this study pointed toward the potential of nonreactive measurement of race bias.

Social psychologist Patricia Devine’s dissertation experiments hammered a second stake into the ground. 15 She subliminally presented words that captured negative Black stereotypes (in the experimental condition) or neutral words (in the control condition) and then requested evaluations of an ambiguously described person. Remarkably, those who were subliminally exposed to Black stereotypes as primes were more likely to view the ambiguously described person as hostile than those in the control condition. Equally remarkable, the degree of race bias on this more automatic measure of stereotypes was similar regardless of consciously reported levels of anti-Black prejudice.

Devine’s research demonstrated the first classic dissociation between more deliberate or explicit race attitudes and more automatic or implicit race attitudes, and it prompted a shift in thinking about the nature of race bias. If bias were hidden, even to the person who carried it, that would explain how racial animus could decrease on survey measures while bias embedded in individual minds, institutions, and long-standing societal structures persisted. The two were dissociated. From a research standpoint, it was clear that to gain access to race bias in all forms, experimental psychologists would need to develop and sharpen measures of implicit race bias.

Several measures of implicit cognition emerged, among them the Implicit Association Test (IAT). 16 The IAT followed in the tradition of its predecessors by relying on a single fundamental idea: when two things become paired in our experience (for instance, granny and cookies), evoking one (granny) will automatically activate the other (cookies). In the context of race bias, the speed and accuracy with which we associate concepts like Black and White with attributes like good and bad provides an estimate of the strength of their mental association, in this case, an implicit attitude.
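The latency logic behind the IAT can be sketched in a few lines of code. The response times below are invented for illustration, and the scoring shown here is a deliberate simplification of the published IAT scoring algorithm (Greenwald and colleagues' D measure), which additionally involves error penalties, trial trimming, and block-wise pooling:

```python
# Simplified sketch of an IAT-style score computed from response latencies.
# The latencies are invented; the "compatible" block pairs White+good /
# Black+bad on shared response keys, the "incompatible" block reverses them.
from statistics import mean, stdev

compatible = [612, 655, 590, 701, 634, 668, 645, 622]      # ms per trial
incompatible = [798, 841, 765, 902, 810, 779, 856, 833]    # ms per trial

# Faster responding in the compatible block indicates a stronger automatic
# association between the concepts paired there.
latency_gap_ms = mean(incompatible) - mean(compatible)

# Dividing by the pooled standard deviation of all critical trials yields an
# effect-size-like score, in the spirit of (but not identical to) the IAT D.
pooled_sd = stdev(compatible + incompatible)
d_like_score = latency_gap_ms / pooled_sd

print(f"latency gap: {latency_gap_ms:.0f} ms, score: {d_like_score:.2f}")
```

A positive score here reflects faster responses when White is paired with good and Black with bad than under the reverse pairing, which is how the test estimates the relative strength of the two mental associations.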

Today, decades after the first uses of terms such as implicit bias, implicit attitude, and implicit stereotype, these concepts have permeated scientific and scholarly writing as well as the public’s consciousness so effectively that they are rarely accompanied by a definition or explanation. 17 The earliest formal definition of implicit cognition reads: “The signature of implicit cognition is that traces of past experience affect some performance, even though the influential earlier experience is not remembered in the usual sense—that is, it is unavailable to self-report or introspection.” 18 A more colloquial definition of implicit bias has emerged as “a form of bias that occurs automatically and unintentionally, that nevertheless affects judgments, decisions, and behaviors.” 19

Both definitions are quite general, and wisely so, to be inclusive of any domain under investigation (such as self-perception, health decisions, and financial decisions). However, despite its generality, the greatest empirical attention has been devoted to one particular family of biases: those that concern attitudes (valence) and stereotypes (beliefs) about social groups (such as by age, gender, sexuality, race, ethnicity, social class, religion, or nationality). Among these, the test that has garnered the greatest scientific and public interest is the race test (as seen in the scientific record and from completion rates of the test online, where the RA-IAT outstrips all other tests in public interest). 20 Unsurprisingly, and for the same reasons, some resistance to the science of implicit race bias has also emerged, but such criticisms remain minor (2 percent of thousands of Google Alerts analyzed include any critical commentary). 21

Scope of the Essay

Although full-fledged research on implicit social cognition began only in the 1990s, thousands of research articles on implicit bias have since been published. In fact, Google Scholar returns over sixty-five thousand results in response to a query of implicit bias as of January 2024. This prolificacy, while notable, renders any complete review of the literature impossible. As such, this essay constrains coverage in four ways. First, we report research on implicit race attitudes, setting aside all other social categories (such as gender, age, sexuality, and disability), with a focus on construct validity. Second, we highlight research on attitudes, setting aside research on race stereotypes. Third, we focus almost entirely on a single method, the IAT, because 1) it is the most widely used measure of implicit bias today (the original report by Greenwald, Debbie McGhee, and Jordan L. K. Schwartz has recorded over seventeen thousand citations on Google Scholar as of January 2024), and 2) the online presence and popularity of the RA-IAT at Project Implicit offer an unparalleled source of data to explore implicit race attitudes. 22 Surprisingly, the signature results from this most popular IAT over the last twenty-five years have not been presented in a single location before; we synthesize them here. Fourth and finally, given the mission of Dædalus to explore the frontiers of knowledge on issues of public importance, we prioritize coverage of questions about the nature of implicit race bias and its interpretation rather than questions of primarily scientific interest, such as whether the underlying representation is best viewed as associative or propositional in nature. 23

With these constraints and opportunities in mind, we introduce 1) streams of research from other sciences, notably cognitive development, neuroscience, and computer science, to provide convergent validation for the RA-IAT data; 2) new research providing predictive validity by demonstrating robust covariation between regional RA-IAT scores and racial disparities in health care, education, business, and treatment by law enforcement; and 3) evidence demonstrating the RA-IAT’s malleability at the individual level (change within one person) and the population level (change within the United States). Together, the data offer confidence in the concept of implicit race bias for use in two ways: as a foothold for broad-based programs and procedures to ensure racial equality, and as the basis for teaching about implicit bias in all educational settings, including schools, colleges, and the workplace.

The Race Attitude IAT: Early Discoveries and Signature Results Providing Validation

Evidence of implicit race bias using the IAT first emerged in the mid-1990s from small-scale, highly controlled experiments administered to college students, as was characteristic of research at that time. These initial experiments were important for benchmarking the data that would soon arrive from exponentially larger and more diverse internet-based samples. In 1998, Yale University hosted a test of implicit race attitude, the RA-IAT, among a few other IATs, and the site was immediately bombarded with participants. The RA-IAT was instantly the most popular test, and it remains so twenty-five years later. Today, the amount of research conducted and the diversity of empirical results obtained may appear overwhelming to the general reader. Here, we have created the first repository of the basic discoveries and signature results of the RA-IAT, presented in easy-to-access percentages, histograms, and inferential statistics.

Implicit Social Cognition Terminology and IAT Components

The RA-IAT, following the general IAT procedure, consists of items that appear on a computer screen belonging to a pair of target categories (such as Black and White) and a pair of target attributes (such as Good and Bad). At the most basic level, the RA-IAT provides an index of implicit race bias by measuring the relative speed (on the order of milliseconds) with which participants sort stimuli when White and Good share a response key (and Black and Bad share a different response key), relative to when Black and Good share a response key (and White and Bad share a different response key). 24 The IAT score is captured by the statistic D, a measure of effect size computed by taking the difference between response latencies in the two critical conditions (that is, Black + Good/White + Bad and Black + Bad/White + Good) and dividing it by the standard deviation across all blocks of the test.
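The D-score logic just described can be sketched in a few lines. This is a deliberately simplified illustration using hypothetical latencies; the published scoring algorithm adds error penalties and trial-exclusion rules that are omitted here.

```python
from statistics import mean, stdev

def iat_d_score(latencies_incompatible, latencies_compatible):
    """Simplified IAT D score: mean latency difference between the two
    critical conditions, divided by the standard deviation of all trials.
    (The full published scoring algorithm also penalizes errors and
    excludes extreme latencies; those steps are omitted here.)"""
    diff = mean(latencies_incompatible) - mean(latencies_compatible)
    pooled_sd = stdev(latencies_incompatible + latencies_compatible)
    return diff / pooled_sd

# Hypothetical response latencies in milliseconds (illustrative only).
black_good_white_bad = [820, 790, 905, 860, 780, 850]  # "incompatible" block
white_good_black_bad = [640, 700, 655, 690, 620, 665]  # "compatible" block

d = iat_d_score(black_good_white_bad, white_good_black_bad)
# A positive D indicates faster responding when White pairs with Good,
# that is, an implicit pro-White association; negative D indicates the reverse.
print(round(d, 2))
```

Because D is scaled by the respondent's own variability, it functions as an individual effect size, which is what allows scores to be compared and aggregated across millions of test-takers.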

Uninitiated readers may wish to take the test at https://implicit.harvard.edu/implicit/selectatest.html. Additionally, in Table 1, we provide descriptions and examples of the core terminology of implicit social cognition and the IAT more generally, even though our focus in this essay will remain on the concept of the attitude.

Table 1. Examples of labels and stimuli for the core categories of implicit social cognition theory: concepts (race, gender, age, etc.), attributes (good-bad, pleasant-unpleasant, etc.), attitudes (positive-negative), stereotypes (strong-weak, smart-dumb, etc.), and identity (me-not me, me-other, etc.).

Overall Levels of Explicit and Implicit Race Attitudes and Their Dissociation

An analysis of Project Implicit data from 3.3 million American respondents who completed the RA-IAT across fourteen years (2007–2020) shows robust evidence of implicit race bias: overall, 65 percent of respondents displayed a meaningful association of White with good relative to Black with good (“implicit pro-White bias”), whereas 19 percent of respondents displayed no preference (see Figure 1; for corresponding effect sizes, see Table 2). 25 That is, 2.1 of 3.3 million respondents automatically associated the attribute “Good” (relative to “Bad”) more with White than with Black Americans. By contrast, across all fourteen years, only 29 percent of respondents explicitly reported a preference for White over Black, and 60 percent reported equal liking for both groups. As the reader may anticipate, these overall scores are strongly modulated by the social group of the respondent; those data are presented in the next section.

Figure 1. Distribution of RA-IAT scores among American respondents: 65 percent showed an implicit pro-White bias, 16 percent an implicit pro-Black bias, and 19 percent no implicit bias. A companion bar graph shows that nearly 60 percent of the same respondents explicitly reported no racial preference.

This divergence between mean levels of implicit and explicit race attitudes is striking, and it is bolstered by a dissociation between implicit and explicit race attitudes within a single person. Specifically, only modest correlations between implicit and explicit attitudes are typically observed across all participants (for example, r = 0.30 [95% CI: 0.308, 0.310]), and even weaker correlations often emerge for Black Americans (see Table 2). 26 Additional support for this dissociation has been derived from latent variable modeling. Unlike variables that can be directly observed or measured (like temperature), latent variables refer to constructs, such as race attitudes, that are inferred indirectly and measured with some degree of error. Although the latent implicit and explicit attitude variables are correlated (r = 0.47), a confirmatory factor analysis suggests that a two-factor solution fits the data better than a single-factor solution with one latent “attitude” variable. 27 In other words, implicit and explicit attitudes are related, but psychometrically distinct.

Table 2. RA-IAT results by respondents’ racial identity: White, East Asian, and South Asian respondents showed an implicit pro-White bias, whereas Black, Hispanic, and multiracial respondents did not.

Together, this pattern of data (low levels of explicit race bias but high levels of implicit bias) is considered a key result of implicit intergroup cognition. The data also provide a conceptual replication of Devine’s early discovery that implicit race bias can emerge in defiance of stated egalitarian values. 28 However, unlike Devine’s work with subliminally presented stimuli, the IAT does not hide its intent; the two racial categories are in full view and the test is announced as one of race bias. Moreover, the IAT components are not shrouded in mystery, and completing the task is so simple that even a child can participate. These features contribute to the surprise that often accompanies the IAT: if the task itself is easy, why can I not control my responses?

Nevertheless, after nearly a century of work based on almost purely explicit measures, these results lay bare the full extent of the challenge we face when confronting the status of race in America today. 29 Recall that in Myrdal’s interviews during Jim Crow, respondents revealed a disparity between two consciously held beliefs: the American ideal of liberty and equality and America’s history of bondage and inequality. In a sense, that conflict is psychologically simple because both cognitions are conscious. By contrast, the dissociation between explicit and implicit race attitudes is especially challenging because implicit attitudes operate largely outside the purview of conscious awareness and control, and therefore may unwittingly produce behaviors that conflict with consciously held values and beliefs.

Explicit and Implicit Race Bias by Racial/Ethnic Group

Among psychology’s most ubiquitous results is the demonstration of in-group bias. Irrespective of whether the groups involved are “minimal” (formed on an arbitrary basis, such as a preference for the artist Klee over Kandinsky) or real, research has overwhelmingly demonstrated that humans show a preference for their own group relative to the out-group. 30 For example, Japanese Americans and Korean Americans, Yankee and Red Sox fans, and Yale and Harvard students all display clear and symmetric in-group preferences. 31 However, as visualized in Figure 2, the data across White and Black Americans paint a much more complex picture.

Figure 2. Explicit and implicit race bias among White and Black American respondents. Among White respondents, 71 percent showed an implicit pro-White bias, 12 percent an implicit pro-Black bias, and 17 percent no implicit bias, yet 62 percent reported no explicit bias; among Black respondents, 39 percent reported no explicit bias (implicit results are given in the text).

Specifically, 71 percent of White Americans displayed an implicit pro-White bias, whereas only 33 percent of Black Americans displayed an implicit pro-Black bias. These data are in contrast with the robust in-group preferences among Japanese and Korean Americans, Red Sox and Yankee fans, and Yale and Harvard students, in which each group showed an equally robust preference for its own group. This lack of in-group preference among Black Americans is a second signature result and it extends beyond Black Americans to other less advantaged groups. That is, unlike members of socially advantaged groups, who consistently display implicit in-group preferences, members of socially disadvantaged groups typically do not.

On the measure of explicit bias, an almost opposite pattern emerges, making these data among the clearest examples of mental dissociation: the lack of consistency between two measures of the same concept, within the same mind. Only 34 percent of White Americans displayed an explicit pro-White bias, whereas 56 percent of Black Americans displayed an explicit pro-Black bias. These data highlight the role conscious values play in responses. White Americans, likely aware of the history of race relations in America, report a far more muted in-group preference. Black Americans, equally likely aware of that history, report an overwhelming in-group preference.

When taken together, the data for White and Black Americans show a double dissociation. On the one hand, White Americans report little in-group preference on the explicit measure but strong in-group preference on the implicit measure. On the other hand, Black Americans show a strong in-group preference on the explicit measure but no in-group preference on the implicit measure. We regard this result as sufficiently important that we recommend it play a role in any discussion of policies to ensure racial equality. Conscious attitudes need not follow such a pattern, but to the extent that attitudes and behavior are driven by both explicit and implicit cognition, the balance sheet of intergroup liking shows a striking lack of parity.

Interestingly, when third-party groups are tested (such as Asian Americans taking a White-Black IAT), they consistently show an implicit pro-White bias (see Table 2). That is, rather than associating both out-groups with good equally, third-party respondents display an implicit preference for the socially dominant group. In fact, rivaling the degree of bias among White Americans, 65 percent of Asian Americans and 60 percent of Latinx Americans display an implicit pro-White preference.

Similar patterns also emerge on measures of implicit stereotyping. As one example, Morehouse and Banaji, with Keith Maddox, found that White Americans and third-party participants associate human (versus nonhuman attributes like “animal” and “robot”) more with their own group, whereas nondominant groups (like Black Americans) display no “human = own group” bias. 32 This striking absence of in-group preference among members of disadvantaged groups points to the power of the social standing of groups in society and has been interpreted as consistent with system justification tendencies. 33

Explicit and Implicit Race Bias by Other Demographic Variables

Beyond race/ethnicity, do other demographic variables modulate the strength of implicit race bias? That is, will men and women, liberals and conservatives, or older and younger respondents show different levels of implicit race bias? To test this question, variation across five additional demographic characteristics was examined: religion, level of education, age, gender, and political ideology. Implicit race bias was largely stable across respondents’ religious affiliation and level of education. However, differences emerged across age, gender, and political ideology. Implicit pro-White preferences increased with age (each five-year increase translating roughly to a 3 percent increase in IAT D scores), and respondents over age sixty displayed levels of bias that were 15 percent stronger than those of individuals under age twenty. Further, the incidence of pro-White bias was 20 percent higher among self-identified conservatives relative to self-identified liberals, and 7 percent higher among men relative to women.

These results show how group membership is related to variation in implicit and explicit race attitudes. Later in this essay, we explore another potential determinant of attitude strength—participants’ local environment—and the relationship between regional levels of implicit race attitudes and socially significant outcomes (such as lethal use of force by police or health outcomes).

Origins of Implicit Race Bias: Evidence for Developmental Invariance

Over the past twenty years, researchers have gained a new understanding of the surprisingly early precursors of race encoding and race preference in infants and young children. Although far from biological and social maturity, infants and children show evidence of a mind that is already attuned to race, yet one that can, in other situations, set racial groupings aside while attending to other social categories like gender and age. 34

Human groups across the world, as much as they differ by language, culture, preferences, beliefs, and values, are all members of the same species. Is implicit bias a core capacity that unifies us as humans? If we look cross-culturally, a recent analysis of implicit race attitudes from thirty-four countries revealed that an implicit preference for White over Black appears in every country sampled (see Figure 3). 35

Figure 3. Implicit pro-White bias (mean RA-IAT D scores) on the race attitude test, 2009–2019, by country. Means range from 0.275 to 0.49 across the thirty-four countries sampled; the mean for respondents from the United States was 0.30.

Another way to test whether a particular attitude is fundamental is to observe whether it is present in infants and young children. Our interest here is not in children qua children, but rather in developing minds. Is implicit race bias present even in early stages of cognitive-affective development? The obvious prediction is that implicit race bias should differ by age, given the massively different levels of personal experience and cultural knowledge that children have acquired relative to adults. But to the extent that the data show the opposite (similar patterns of implicit race bias in adults and children), we would learn that such biases require little time and experience in a culture to be acquired.

Much has been written about the development of race cognition in infancy. 36 From this work, we know that even infants prefer faces of members of their own group, an effect that likely emerges out of familiarity with their caregivers. For example, three-month-old Ethiopian infants in Ethiopia prefer African over European faces, Ashkenazi babies in Israel prefer European over African faces, and babies of Ethiopian Jews who have immigrated to Israel and have caregivers of both groups show no race preference. 37 Importantly, these preferences are early emerging but not hard-wired; they are absent at birth but present by three months of age. 38 In other words, these data show that the human brain is attuned to features, like race and gender, in the environment that can differentiate between in-group and out-group members.

Work with toddlers has been especially fruitful because the same method used to measure implicit race bias in adults could be adapted for children. Specifically, psychologist Andrew Scott Baron and Banaji created a child version of the RA-IAT. 39 Given that children’s experiences and knowledge of racial groups vastly differ from adults’, the authors expected stark differences in the degree of implicit race bias expressed by children and adults. However, this is not what they found. The surprising result, now replicated many times, is that White six-year-olds, ten-year-olds, and adults show identical levels of implicit race bias.

Notably, and further mirroring the results obtained in adult samples, children’s implicit race bias was qualified by social status. By age three, White American children show an in-group preference, whereas Hispanic and Black American children show no in-group preference. 40 This result is remarkable because it teaches us that implicit attitudes are absorbed from the culture and into the minds of even young children. It also challenges the theoretical intuition that implicit attitudes are learned slowly over time. (For further discussion of the development of implicit racial bias, see Andrew N. Meltzoff and Walter S. Gilliam’s contribution to this volume.) 41

Converging Evidence from Neurons and Natural Language

Understanding how the mind works is not for the meek. The Nobel Prize-winning physicist Murray Gell-Mann seemed to understand this when he reputedly said, “Think how hard physics would be if particles could think.” Not only are beings who can think the object of our study, but the thinking under consideration is not easily available to their own conscious awareness. As such, building a case for an imperceptible yet consequential bias requires a multipronged, continuous, and iterative process of validation.

There is already deep and broad evidence for the construct validity of the IAT. For example, providing face validity, we know a priori that the concept “flower” is more positive than “insect,” and the IAT detects this implicit pro-flower preference in most humans. 42 Further evidence can be obtained by studying groups known to differ in attitude and observing whether the expected differences emerge. Indeed, we have already reported that Black and White Americans show diverging implicit race attitudes, providing additional evidence for construct validity. As a third route, construct validation has been obtained by demonstrating that findings derived from the IAT are related to (but not redundant with) conceptually similar constructs. Indeed, we have shown that although implicit and explicit race attitudes are modestly correlated, latent variable modeling suggests that a two-factor solution (with “implicit bias” and “explicit bias” as separate latent factors) provides the best fit to the data. In fact, providing discriminant validation, implicit insect-flower attitudes did not hang together with implicit intergroup attitudes.

In the following sections, we will encounter construct validation in several new ways. In particular, we show that methods from other fields (including neuro-imaging and word embeddings) also demonstrate evidence of implicit race bias. Moreover, we explore the origins and consequences of implicit race bias to push the engine of construct validity further. Together, these various approaches have not only created a strong foundation for understanding the concept of implicit race bias, but have produced unexpected empirical findings that challenged and refined existing theory.

The Neural Basis of the RA-IAT

When the first pre-IAT measures of implicit attitudes were introduced, little discussion ensued about whether these unfamiliar measures should be considered measures of attitude. 43 However, when the IAT was introduced, the question of construct validity appeared immediately. 44 It became obvious that measures that directly interrogated the brain could prove useful if correlations could be observed between IAT behavior and activation patterns in regions long identified as playing a role in emotional learning (such as Pavlovian conditioning).

Research with neuroimaging methods like fMRI has long demonstrated that the amygdala, a subcortical brain structure, is involved in the continuous evaluation and integration of sensory information, with a special role in assigning values of valence and intensity. 45 Crucially, neuroscientist Elizabeth A. Phelps and colleagues showed that amygdala activation to Black faces of unknown individuals (relative to White) was significantly correlated with implicit race bias; no such correlation was observed with explicit race bias as measured by the Modern Racism Scale. 46 This suggested that whatever the RA-IAT detects has a core valence component, in line with the idea of “attitudes” as evaluations along the dimension of positive and negative. A second study suggested that race-based responding is modulated by experience: when the faces of famous and generally liked Black (Denzel Washington) and White (Jerry Seinfeld) individuals were used, the activation-implicit bias correlation disappeared. Put differently, this result indicated that familiarity can interrupt the relationship, providing two-pronged convergence.

In the decades that have followed, a plethora of evidence has linked implicit attitudes with neural responses to race-based in-group and out-group faces, as well as with downstream decision-making that tests the ability to control default, biased responding. 47 Results of relevance demonstrate that 1) the neural representation of race-based attitudes involves a range of overlapping and interacting brain systems; 2) race-based processing of in-group and out-group faces occurs early in the information-processing sequence, beginning within one hundred milliseconds of encountering a face; 3) implicit bias observed in brain activity is malleable and responsive to task demands and context; and 4) individual differences exist in the ability to exert control over biased responses, and this control can itself be initiated without awareness and involves both the inhibition of unwanted responses and the initiation of intentional behavior. 48 Crucially, this last piece of evidence highlights the need for proactive interventions. If bias can creep in even during early visual processing, then it is unrealistic to expect even well-intentioned individuals to prevent bias from impacting their behavior in the moment. Instead, changes that alter the choice structure and prevent bias from entering the decision-making process are more likely to succeed.

Overall, neuroscientific evidence provided important construct validity for the IAT and its presumed measurement of expressions of value along a good-bad dimension. Moreover, it indicated that implicit race bias converges with multiple levels of information processing from the earliest stages of face detection to judgments of behavior.

Word Embeddings Based on Massive Language Corpora Converge with IAT Data

A long history of research on natural language processing (NLP), coupled with the availability of massive language corpora (such as the Common Crawl and Google Books), has created the opportunity to learn how social groups are represented in language on an unprecedented scale. Specifically, mirroring the logic of the IAT, computer scientist Aylin Caliskan and colleagues used word embeddings, a technique that maps words or phrases to a high-dimensional vector space, to understand the relative associations between targets (such as Black and White people) and attributes (such as Good and Bad). 49 Creating a parallel measure, the Word Embedding Association Test (WEAT), they performed tests of group-attribute associations using embeddings trained on a corpus of eight hundred forty billion tokens from the internet. In doing so, they replicated the classic implicit race bias finding: European American names were more likely than African American names to be closer (semantically similar) to pleasant words than to unpleasant words.
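The core WEAT computation can be illustrated with a toy example. The vectors below are made-up two-dimensional stand-ins for real embeddings (which are trained on corpora such as Common Crawl and have hundreds of dimensions); a word's association score is its mean cosine similarity to the pleasant attribute vectors minus its mean similarity to the unpleasant ones.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def association(word_vec, pleasant, unpleasant):
    """WEAT-style association: mean cosine similarity of a target word's
    vector to pleasant attribute vectors, minus its mean similarity to
    unpleasant attribute vectors."""
    s_pleasant = sum(cosine(word_vec, p) for p in pleasant) / len(pleasant)
    s_unpleasant = sum(cosine(word_vec, u) for u in unpleasant) / len(unpleasant)
    return s_pleasant - s_unpleasant

# Toy 2-D "embeddings" (illustrative only, not trained vectors).
pleasant = [(0.9, 0.1), (0.8, 0.2)]
unpleasant = [(0.1, 0.9), (0.2, 0.8)]
name_a = (0.85, 0.15)  # hypothetical name embedded near pleasant words
name_b = (0.15, 0.85)  # hypothetical name embedded near unpleasant words

print(association(name_a, pleasant, unpleasant))  # positive: closer to pleasant
print(association(name_b, pleasant, unpleasant))  # negative: closer to unpleasant
```

A positive score plays the role of a faster IAT pairing: the target is "closer to good" in the semantic space, which is how group-attribute bias can be read directly out of language.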

These approaches have also enabled researchers to ask questions about human attitudes that are beyond the scope of behavioral tools. Experimental psychologist Tessa Charlesworth, Caliskan, and Banaji used embeddings trained on databases of historical texts to demonstrate that attitudinal biases toward racial/ethnic groups have remained stable over the course of two centuries (1800–1999). 50 Moreover, just as neuroimaging data showed convergence between theoretically identified brain regions like the amygdala and the RA-IAT but not with explicit race bias, analyses of the biases embedded in language suggest that they are related to IATs but not to self-report data. 51 In other words, linguistic patterns represent a reservoir for collectively held or culturally imprinted beliefs. 52

In fact, recent work indicates that algorithms are even capable of refracting beliefs about racial purity. 53 Specifically, information scientist Robert Wolfe, Caliskan, and Banaji showed that CLIP, an algorithm that relies on both image and text data, has learned the one-drop rule or hypodescent (that is, a legal principle prominent even in the twentieth century that held that a person with just one Black ancestor is to be considered Black). 54 Overall, these findings add to the burgeoning evidence that implicit bias embedded in human minds exists in language and that algorithms trained on these databases will carry, amplify, and even reproduce bias. 55

Covariation between Regional Implicit Race Bias and Socially Significant Outcomes

A growing number of “audit studies” have demonstrated group-based discrimination in controlled field settings. 56 These studies, typically conducted by economists and sociologists, create highly standardized but naturalistic situations to explore how specific variables (such as race/ethnicity) influence behavior. For example, economist Marianne Bertrand and computation and behavioral scientist Sendhil Mullainathan sent roughly five thousand fictitious résumés to employers in Boston and Chicago. 57 The résumés were identical in all ways except that the applicant’s name was either a White- or Black-sounding name. Despite their identical qualifications, résumés with White names received 50 percent more callbacks than résumés with Black names. In another example from the domain of employment, Devah Pager and colleagues demonstrated that, despite having equivalent résumés and being actors trained to respond identically to interview questions, Black applicants were half as likely to receive a callback as White applicants. 58 In fact, in an even more stunning demonstration of race bias, Black applicants were just as likely to receive a callback as White applicants with a felony record. These individual studies mirror a larger trend observed in a meta-analysis: hiring discrimination against African Americans remained stable over a twenty-five-year period (1989–2015). 59

These audit studies, like the perplexing disconnect between consciously reported prejudice and observed inequalities in society, require an explanation. How is it that the same résumé or qualifications can be evaluated more positively when attributed to a White person? We posit that implicit bias is the most likely explanation. The difficulty was that, until recently, no direct link between measures of implicit bias and large-scale race-based discrimination was available. However, a new line of research, now reaching a substantial number of demonstrations, provides the first persuasive evidence that implicit bias is indeed correlated with racial discrimination on socially significant behaviors (SSBs) in domains like employment, health care, education, and law enforcement. 60

Specifically, a mounting body of research across laboratories and disciplines within the social sciences shows that U.S. regions with stronger implicit race bias (measured by the RA-IAT and stereotype IATs) also have larger Black-White disparities in SSBs. In fact, this research has demonstrated covariation between regional implicit race bias and SSBs in four prominent domains: 1) education (including suspension rates and Black-White gaps in standardized test scores); 61 2) life and economic opportunity (adoption rates and upward mobility); 62 3) law enforcement (Black-White disparities in traffic stops and the use of lethal force); 63 and 4) health care (Medicaid spending and Black-White gaps in infant birth weight and preterm births). 64 These studies show that implicit bias, measured at the level of individual minds but aggregated across geographic space, reflects race discrimination that cannot otherwise be explained.

Evidence and Interventions for Implicit Attitude Change: Early Evidence of Malleability

With hindsight, we know that implicit bias is malleable. However, this was not always received knowledge or even expected. In the early years of research on implicit bias using the IAT , many primary investigators believed that implicit bias was intractable. 65 Yet even early work raised the possibility that implicit race attitudes were sensitive to perceivers’ motivations, goals, and strategies, as well as contextual manipulations. 66 For example, social psychologist Bernd Wittenbrink and colleagues found that negativity toward Black individuals was lower after watching a movie clip depicting Black Americans in a positive setting (relative to a negative setting). 67 Similarly, social psychologist Brian Lowery and colleagues demonstrated that White Americans displayed lower levels of negativity toward Black individuals in the presence of a Black (rather than White) experimenter. 68

Extending this work, psychologist Calvin Lai and colleagues conducted an important study exploring the comparative efficacy of seventeen interventions designed to reduce implicit race bias. 69 Although these interventions were roughly five minutes long and administered only once, eight of the seventeen were effective in reducing implicit race bias. The most effective interventions invoked high self-involvement and/or linked Black people with positivity and White people with negativity. 70 By contrast, interventions that required perspective-taking, asked participants to consider egalitarian values, or induced a positive emotion were ineffective. When participants’ attitudes were tested even a few hours after the intervention, none of the eight previously effective interventions produced a continued reduction in implicit race bias. 71 Of course, this temporary (but not durable) change is to be expected; implicit bias should snap back, rubber band–like, to some stable individual, situational, or broader cultural default. Indeed, it is surprising that a single presentation of a short intervention produces any change at all.

But many “light” interventions, often involving a few counterattitudinal associations or a paragraph-long hypothetical written scenario presenting counterattitudinal information, do not show long-term change. To us, this lack of long-term change is hardly surprising given the weakness of the interventions. Indeed, implementing flimsy interventions and looking for long-term effects is a fool’s errand; yet well-intentioned investigators, hoping that a sentence or two could wipe out a lifetime of learning, have tried them.

Change at the Societal Level

These laboratory studies provide excellent tests of specific interventions, but they are less equipped to test whether implicit bias has changed over the course of years or decades. As such, the key question of whether long-term change was possible remained open. Recent analyses by Charlesworth and Banaji addressed this question directly. 72 Specifically, using time-series modeling, they traced almost three million Americans’ implicit race attitudes over the course of fourteen years (2007–2020). Crucially, they found evidence of pervasive change: across all participants, implicit race bias decreased by 26 percent, making it the second fastest changing implicit attitude after sexuality attitudes (anti-gay bias), which saw a dramatic 65 percent reduction during the same period. 73 In fact, if trends continue, implicit race attitudes could first touch neutrality in 2035.

Moreover, this change was not restricted to only certain segments of society (for instance, younger and more liberal participants). Rather, pointing to widespread societal change, men and women, older and younger, liberal and conservative, and more- and less-educated participants alike all moved toward neutrality. 74 The only exception was that, unlike White participants, who recorded a 27 percent reduction in implicit bias (IAT D score reduced by 0.11 points), Black participants’ implicit attitudes remained relatively stable, changing only 0.03 IAT D score points over the fourteen-year period (see Table 3).

Table 3: Change in implicit race bias (IAT D scores) between 2007 and 2020, by respondent race. White respondents: a 27 percent decrease; Hispanic respondents: a 38 percent decrease; East and South Asian respondents: a 28 percent decrease; Black respondents: a 33 percent increase. Overall: a 27 percent decrease.
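The percent changes reported here follow from simple arithmetic on IAT D scores. A minimal sketch in Python; the baseline D scores below are illustrative assumptions chosen to reproduce the reported figures, not values published by the study:

```python
def percent_change(d_2007, d_2020):
    """Percent change in an IAT D score relative to the 2007 baseline."""
    return (d_2020 - d_2007) / d_2007 * 100

# Illustrative (assumed) baseline and endpoint D scores; the study
# reports only the changes (a 0.11-point reduction for White
# respondents, a 0.03-point shift for Black respondents).
white_2007, white_2020 = 0.41, 0.30
black_2007, black_2020 = 0.09, 0.12

print(round(percent_change(white_2007, white_2020)))  # -27
print(round(percent_change(black_2007, black_2020)))  # 33
```

By convention, positive D scores indicate a White + Good/Black + Bad association, and scores within 0 ± 0.15 are treated as the no-bias interval (see note 25), which is why small absolute shifts can correspond to large percentage changes.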

This widespread change is remarkable, especially when one considers that not all implicit biases are changing. For example, implicit anti-elderly, anti-disability, and anti-fat biases remained relatively stable over the same fourteen-year period. This change toward some social categories but not others raises an important question: what is the source of the change?

We pose this question because of its relevance to the different claims about how to reduce bias, and where resources earmarked for attitude change should be directed. On the one hand, some researchers and practitioners have criticized a focus on change at the individual level (such as deploying appeals to equality to change individual minds). On the other hand, past interventions targeting structural-level change have not eradicated racial inequalities as expected. 75 In fact, change through laws and acts of Congress, if resisted by individuals, may actually prompt reactance and undo progress. 76

We noted above that implicit anti-gay bias dropped dramatically (65 percent) between 2007 and 2020. What caused this surprising and especially rapid change? We propose that anti-gay bias may possess unique features that allowed such change. For one, sexuality is more easily concealed than a person’s race/ethnicity, gender, age, or weight. But we argue that another explanation warrants further investigation: interventions against anti-gay bias occurred at three levels within the same fourteen-year period.

First, change occurred at the individual level as children (and adults of all ages) came out to parents, grandparents, friends, neighbors, and coworkers. Love, already in place, trumped even implicit bias. In other words, the concealable nature of sexuality forced individuals to reconcile their anti-gay attitudes with their positive feelings toward their loved ones; this choice architecture was not in place for attitudes about other social groups. Second, change occurred at the institutional level. Of course, such change was not adopted everywhere, and some organizations were directly hostile to nonheterosexual employees. However, many institutions, like the U.S. military, enacted policies that affirmed the status of same-sex relationships (such as extending health benefits to same-sex partners) even before the country did. Third, change occurred at the macro level. Massachusetts and other states legalized same-sex marriages in the early 2000s, and the Supreme Court of the United States followed suit in 2015. In our estimation, it is rare for interventions at all three levels—individual, institutional, and societal—to occur within a short period of time. To our knowledge, change at all three levels within a short time frame has not occurred for any other social group.

Implicit race bias exists. Support for its presence is undergirded by evidence from other areas of psychology (cognitive, developmental, neuroscience) as well as other behavioral sciences using quite different methods. New evidence shows that regional implicit bias predicts socially significant outcomes of Black-White disparity along several important dimensions that determine life’s opportunities and outcomes. Offering hope, the data also reveal that implicit bias is malleable. Overall, these data represent one of many robust streams of scientific evidence available today. Together, they call for a nationwide undertaking for change—at the individual, institutional, and societal levels.

  • 1 Howard Schuman, Charlotte Steeh, and Lawrence Bobo, Racial Attitudes in America: Trends and Interpretations (Cambridge, Mass.: Harvard University Press, 1985).
  • 2 Howard Schuman, Charlotte Steeh, Lawrence D. Bobo, and Maria Krysan, Racial Attitudes in America: Trends and Interpretations, rev. ed. (Cambridge, Mass.: Harvard University Press, 1997).
  • 3 Gunnar Myrdal, An American Dilemma: The Negro Problem and Modern Democracy, volumes 1 and 2 (Oxford: Harper, 1944).
  • 4 For aversive racism, see John F. Dovidio and Samuel L. Gaertner, “Prejudice, Discrimination, and Racism: Historical Trends and Contemporary Approaches,” in Prejudice, Discrimination, and Racism , ed. John F. Dovidio and Samuel L. Gaertner (San Diego: Academic Press, 1986), 1–34. For so-called modern racism, see John B. McConahay, “Modern Racism, Ambivalence, and the Modern Racism Scale,” in ibid.
  • 5 Patricia G. Devine, “ Stereotypes and Prejudice: Their Automatic and Controlled Components, ” Journal of Personality and Social Psychology 56 (1989): 5–18.
  • 6 Anthony G. Greenwald and Mahzarin R. Banaji, “Implicit Social Cognition: Attitudes, Self-Esteem, and Stereotypes,” Psychological Review 102 (1) (1995): 4.
  • 7 Anthony G. Greenwald, Debbie E. McGhee, and Jordan L. K. Schwartz, “ Measuring Individual Differences in Implicit Cognition: The Implicit Association Test, ” Journal of Personality and Social Psychology 74 (6) (1998): 1464–1480.
  • 8 “ King’s Challenge to the Nation’s Social Scientists, ” The APA Monitor 30 (1) (1999).
  • 9 R. Duncan Luce, Response Times: Their Role in Inferring Elementary Mental Organization (New York: Oxford University Press, 1986); and Michael I. Posner, Chronometric Explorations of Mind (Oxford: Lawrence Erlbaum, 1978).
  • 10 David E. Meyer and Roger W. Schvaneveldt, “ Facilitation in Recognizing Pairs of Words: Evidence of a Dependence between Retrieval Operations, ” Journal of Experimental Psychology 90 (1971): 227–234; and James H. Neely, “ Semantic Priming and Retrieval from Lexical Memory: Roles of Inhibitionless Spreading Activation and Limited-Capacity Attention, ” Journal of Experimental Psychology: General 106 (3) (1977): 226–254.
  • 11 Russell H. Fazio, David M. Sanbonmatsu, Martha Powell, and Frank R. Kardes, “ On the Automatic Activation of Attitudes, ” Journal of Personality and Social Psychology 50 (1986): 229–238.
  • 12 For a fuller treatment of the “implicit revolution,” see Anthony G. Greenwald and Mahzarin R. Banaji, “ The Implicit Revolution: Reconceiving the Relation between Conscious and Unconscious, ” American Psychologist 72 (9) (2017): 861–871.
  • 13 Greenwald and Banaji, “Implicit Social Cognition”; and Richard E. Nisbett and Timothy D. Wilson, “ Telling More than We Can Know: Verbal Reports on Mental Processes, ” Psychological Review 84 (1977): 231–259.
  • 14 John F. Dovidio, Nancy Evans, and Richard B. Tyler, “ Racial Stereotypes: The Contents of Their Cognitive Representations, ” Journal of Experimental Social Psychology 22 (1) (1986): 22–37.
  • 15 Devine, “Stereotypes and Prejudice.”
  • 16 For comprehensive reviews of measures of implicit cognition, see Bertram Gawronski and Jan De Houwer, “Implicit Measures in Social and Personality Psychology,” in Handbook of Research Methods in Social and Personality Psychology , ed. Harry T. Reis and Charles M. Judd (Cambridge: Cambridge University Press, 2014), 283–310; Brian A. Nosek, Carlee Beth Hawkins, and Rebecca S. Frazier, “ Implicit Social Cognition: From Measures to Mechanisms, ” Trends in Cognitive Sciences 15 (4) (2011): 152–159; and Bertram Gawronski, “Automaticity and Implicit Measures,” Handbook of Research Methods in Social and Personality Psychology , ed. Reis and Judd.
  • 17 Mahzarin R. Banaji, Curtis Hardin, and Alexander J. Rothman, “ Implicit Stereotyping in Person Judgment, ” Journal of Personality and Social Psychology 65 (1993): 272–281; and Greenwald and Banaji, “Implicit Social Cognition.”
  • 18 Greenwald and Banaji, “Implicit Social Cognition,” 4–5.
  • 19 “ Implicit Bias, ” National Institutes of Health, (accessed January 26, 2024).
  • 20 For an analysis of Google alerts on “implicit bias,” see Kirsten N. Morehouse, Swathi Kella, and Mahzarin R. Banaji, “Implicit Bias in the Public Eye: Using Google Alerts to Determine Public Sentiment” (in preparation).
  • 21 Jennifer L. Howell and Kate A. Ratliff, “ Not Your Average Bigot: The Better-than-­Average Effect and Defensive Responding to Implicit Association Test Feedback, ” British Journal of Social Psychology 56 (1) (2017): 125–145; and Alexander M. Czopp, Margo J. Monteith, and Aimee Y. Mark, “ Standing up for a Change: Reducing Bias through Interpersonal Confrontation, ” Journal of Personality and Social Psychology 90 (5) (2006): 784–803.
  • 22 Greenwald, McGhee, and Schwartz, “Measuring Individual Differences in Implicit Cognition.” As of May 2023, over thirty million completed IATs have been sampled and over seventy million tests have been at least partially sampled on Project Implicit. See Project Implicit (accessed May 1, 2023).
  • 23 For more on the psychological processes, see Benedek Kurdi, Kirsten N. Morehouse, and Yarrow Dunham, “ How Do Explicit and Implicit Evaluations Shift? A Preregistered Meta-Analysis of the Effects of Co-Occurrence and Relational Information, ” Journal of Personality and Social Psychology 124 (6) (2022); and Benedek Kurdi and Mahzarin R. Banaji, “ Implicit Person Memory: Domain-General and Domain-Specific Processes of Learning and Change, ” PsyArXiv, October 18, 2021, last edited November 18, 2021.
  • 24 For a detailed review of the IAT, see Kate A. Ratliff and Colin Tucker Smith, “The Implicit Association Test,” Dædalus 153 (1) (Winter 2024): 51–64.
  • 25 Standard interpretations regard 0 ± 0.15 as the null (no bias) interval. When using any deviation away from zero as the cutoff, 75 percent of respondents displayed an implicit White + Good/Black + Bad association. Tessa E. S. Charlesworth and Mahzarin R. Banaji, “ Patterns of Implicit and Explicit Attitudes: IV. Change and Stability from 2007 to 2020, ” Psychological Science 33 (9) (2022).
  • 26 Brian A. Nosek, Frederick L. Smyth, Jeffrey J. Hansen, et al., “ Pervasiveness and Correlates of Implicit Attitudes and Stereotypes, ” European Review of Social Psychology 18 (1) (2007): 36–88.
  • 27 William A. Cunningham, John B. Nezlek, and Mahzarin R. Banaji, “ Implicit and Explicit Ethnocentrism: Revisiting the Ideologies of Prejudice, ” Personality and Social Psychology Bulletin 30 (10) (2004): 1332–1346.
  • 28 Devine, “Stereotypes and Prejudice.”
  • 29 Mahzarin R. Banaji and Anthony G. Greenwald, Blindspot: Hidden Biases of Good People (New York: Delacorte Press, 2013).
  • 30 Henri Tajfel, Michael Billig, Robert P. Bundy, and Claude Flament, “Social Categorization and Intergroup Behaviour,” European Journal of Social Psychology 1 (2) (1971): 149–178.
  • 31 Steven A. Lehr, Meghan L. Ferreira, and Mahzarin R. Banaji, “ When Outgroup Negativity Trumps Ingroup Positivity: Fans of the Boston Red Sox and New York Yankees Place Greater Value on Rival Losses than Own-Team Gains, ” Group Processes & Intergroup Relations 22 (1) (2019): 26–42; Kristin A. Lane, Jason P. Mitchell, and Mahzarin R. Banaji, “ Me and My Group: Cultural Status Can Disrupt Cognitive Consistency, ” Social Cognition 23 (4) (2005): 353–386; and Greenwald, McGhee, and Schwartz, “Measuring Individual Differences in Implicit Cognition.”
  • 32 Kirsten N. Morehouse, Keith Maddox, and Mahzarin R. Banaji, “ All Human Social Groups Are Human, but Some Are More Human than Others: A Comprehensive Investigation of the Implicit Association of ‘Human’ to U.S. Racial/Ethnic Groups, ” Proceedings of the National Academy of Sciences 120 (22) (2023): e2300995120.
  • 33 John T. Jost, “ A Quarter Century of System Justification Theory: Questions, Answers, Criticisms, and Societal Applications, ” British Journal of Social Psychology 58 (2) (2019): 263–314; John T. Jost, Mahzarin R. Banaji, and Brian A. Nosek, “ A Decade of System Justification Theory: Accumulated Evidence of Conscious and Unconscious Bolstering of the Status Quo, ” Political Psychology 25 (6) (2004): 881–919; and John T. Jost and Mahzarin R. Banaji, “ The Role of Stereotyping in System-Justification and the Production of False Consciousness, ” British Journal of Social Psychology 33 (1) (1994): 1–27.
  • 34 For a review, see Tessa Charlesworth and Mahzarin R. Banaji, “The Development of Social Group Cognition in Infancy and Childhood,” in The Oxford Handbook of Social Cognition , 2nd edition, ed. Donal E. Carlston, K. Johnson, and Kurt Hugenberg (Oxford: Oxford University Press, in press).
  • 35 Tessa Charlesworth, Mayan Navon, Yoav Rabinovich, Nicole Lofaro, and Benedek Kurdi, “ The Project Implicit International Dataset: Measuring Implicit and Explicit Social Group Attitudes and Stereotypes Across 34 Countries (2009–2019), ” PsyArXiv, December 11, 2021, last edited March 21, 2022.
  • 36 For a review, see Charlesworth and Banaji, “The Development of Social Group Cognition in Infancy and Childhood.” See also Talee Ziv and Mahzarin R. Banaji, “ Representations of Social Groups in the Early Years of Life, ” in The SAGE Handbook of Social Cognition , ed. Susan Fiske and C. Macrae (London: SAGE Publications, 2012), 372–389.
  • 37 Yair Bar-Haim, Talee Ziv, Dominique Lamy, and Richard M. Hodes, “ Nature and Nurture in Own-Race Face Processing, ” Psychological Science 17 (2) (2006): 159–163.
  • 38 David J. Kelly, Paul C. Quinn, Alan M. Slater, et al., “ Three-Month-Olds, but Not Newborns, Prefer Own-Race Faces, ” Developmental Science 8 (6) (2005): F 31– F 36.
  • 39 Andrew Scott Baron and Mahzarin R. Banaji, “ The Development of Implicit Attitudes: Evidence of Race Evaluations from Ages 6 and 10 and Adulthood, ” Psychological Science 17 (1) (2006): 53–58.
  • 40 Yarrow Dunham, Andrew Scott Baron, and Mahzarin R. Banaji, “ Children and Social Groups: A Developmental Analysis of Implicit Consistency in Hispanic Americans, ” Self and Identity 6 (2–3) (2007): 238–255.
  • 41 For a further discussion of the development of implicit race bias, see Andrew N. Meltzoff and Walter S. Gilliam, “ Young Children & Implicit Racial Biases, ” Dædalus 153 (1) (Winter 2024): 65–83.
  • 42 To take a flower-insect IAT , visit https://outsmartingimplicitbias.org/module/iat .
  • 43 Fazio, Sanbonmatsu, Powell, and Kardes, “On the Automatic Activation of Attitudes.”
  • 44 Russell H. Fazio (in a personal communication, May 1, 2023) confirmed the easy acceptance of results from semantic priming methods that demonstrated automatic attitudes. The reason the IAT was held to higher standards is likely because its chosen attitude objects were not nonsocial entities like clouds and pizza but rather social categories like race, gender, sexuality, and age. It is likely that discovery of bias on these topics was simply less palatable, including to psychologists who were not familiar with the research tradition on implicit memory from which these measures were derived.
  • 45 Joseph E. LeDoux, “Emotion and the Amygdala,” in The Amygdala: Neurobiological Aspects of Emotion, Memory, and Mental Dysfunction (New York: Wiley-Liss, 1992), 339–351; and Goran Šimić, Mladenka Tkalčić, Vana Vukić, et al., “ Understanding Emotions: Origins and Roles of the Amygdala, ” Biomolecules 11 (6) (2021).
  • 46 Elizabeth A. Phelps, Kevin J. O’Connor, William A. Cunningham, et al., “ Performance on Indirect Measures of Race Evaluation Predicts Amygdala Activation, ” Journal of Cognitive Neuroscience 12 (5) (2000): 729–738.
  • 47 For reviews, see David M. Amodio and Mina Cikara, “ The Social Neuroscience of Prejudice, ” Annual Review of Psychology 72 (1) (2021): 439–469; Inga K. Rösler and David M. Amodio, “ Neural Basis of Prejudice and Prejudice Reduction, ” Biological Psychiatry: Cognitive Neuroscience and Neuroimaging 7 (12) (2022): 1200–1208; Jennifer T. Kubota, Mahzarin R. Banaji, and Elizabeth A. Phelps, “ The Neuroscience of Race, ” Nature Neuroscience 15 (7) (2012): 940–948; Pascal Molenberghs, “ The Neuroscience of In-Group Bias, ” Neuroscience & Biobehavioral Reviews 37 (8) (2013): 1530–1536; and Jennifer T. Kubota, “ Uncovering Implicit Racial Bias in the Brain: The Past, Present & Future, ” Dædalus 153 (1) (Winter 2024): 84–105.
  • 48 Amodio and Cikara, “The Social Neuroscience of Prejudice”; and Rösler and Amodio, “Neural Basis of Prejudice and Prejudice Reduction.”
  • 49 Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan, “ Semantics Derived Automatically from Language Corpora Contain Human-like Biases, ” Science 356 (6334) (2017): 183–186.
  • 50 The top ten traits associated with White (versus Black): critical, polite, hostile, decisive, friendly, diplomatic, understanding, philosophical, able, and belligerent. The top ten traits associated with Black (versus White): earthy, lonely, cruel, sensual, lifeless, deceitful, helpless, rebellious, meek, and lazy. Tessa E. S. Charlesworth, Aylin Caliskan, and Mahzarin R. Banaji, “ Historical Representations of Social Groups across 200 Years of Word Embeddings from Google Books, ” Proceedings of the National Academy of Sciences 119 (28) (2022): e2121798119.
  • 51 Sudeep Bhatia and Lukasz Walasek, “ Predicting Implicit Attitudes with Natural Language Data, ” Proceedings of the National Academy of Sciences 120 (25) (2023): e2220726120.
  • 52 For an exploration of gender biases embedded in internet texts, see Aylin Caliskan, Pimparkar Parth Ajay, Tessa Charlesworth, et al., “ Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics, ” in Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (New York: Association for Computing Machinery, 2022), 156–170.
  • 53 Arnold K. Ho, Jim Sidanius, Daniel T. Levin, et al., “ Evidence for Hypodescent and Racial Hierarchy in the Categorization and Perception of Biracial Individuals, ” Journal of Personality and Social Psychology 100 (3) (2011): 492–506.
  • 54 Robert Wolfe, Mahzarin R. Banaji, and Aylin Caliskan, “ Evidence for Hypodescent in Visual Semantic AI, ” in 2022 ACM Conference on Fairness, Accountability, and Transparency (New York: Association for Computing Machinery, 2022), 1293–1304.
  • 55 See also Darren Walker, “ Deprogramming Implicit Bias: The Case for Public Interest Technology, ” Dædalus 153 (1) (Winter 2024): 268–275; and Alice Xiang, “ Mirror, Mirror, on the Wall, Who’s the Fairest of Them All? ” Dædalus 153 (1) (Winter 2024): 250–267.
  • 56 For reviews, see S. Michael Gaddis, “ An Introduction to Audit Studies in the Social Sciences, ” in Audit Studies: Behind the Scenes with Theory, Method, and Nuance , ed. S. Michael Gaddis (Cham: Springer International Publishing, 2018), 3–44; and S. Michael Gaddis, “ Understanding the ‘How’ and ‘Why’ Aspects of Racial/Ethnic Discrimination: A Multi-Method Approach to Audit Studies, ” SSRN , July 25, 2019.
  • 57 Marianne Bertrand and Sendhil Mullainathan, “ Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination, ” American Economic Review 94 (4) (2004): 991–1013.
  • 58 Devah Pager, Bruce Western, and Bart Bonikowski, “ Discrimination in a Low-Wage Labor Market: A Field Experiment, ” American Sociological Review 74 (5) (2009): 777–799.
  • 59 Lincoln Quillian, Devah Pager, Ole Hexel, and Arnfinn H. Midtbøen, “ Meta-Analysis of Field Experiments Shows No Change in Racial Discrimination in Hiring over Time, ” Proceedings of the National Academy of Sciences 114 (41) (2017): 10870–10875.
  • 60 For a review, see Tessa E.S. Charlesworth and Mahzarin R. Banaji, “ The Relationship of Implicit Social Cognition and Discriminatory Behavior, ” prepublication chapter to appear in Handbook of Economics of Discrimination and Affirmative Action , ed. Ashwini Deshpande. For a discussion of the practical significance of these relationships, see Jerry Kang, “ Little Things Matter a Lot: The Significance of Implicit Bias, Practically & Legally, ” Dædalus 153 (1) (Winter 2024): 193–212; and Manuel J. Galvan and B. Keith Payne, “ Implicit Bias as a Cognitive Manifestation of Systemic Racism, ” Dædalus 153 (1) (Winter 2024): 106–122.
  • 61 Travis Riddle and Stacey Sinclair, “ Racial Disparities in School-Based Disciplinary Actions Are Associated with County-Level Rates of Racial Bias, ” Proceedings of the National Academy of Sciences 116 (17) (2019): 8255–8260; and Mark J. Chin, David M. Quinn, Tasminda K. Dhaliwal, and Virginia S. Lovison, “ Bias in the Air: A Nationwide Exploration of Teachers’ Implicit Racial Attitudes, Aggregate Bias, and Student Outcomes, ” Educational Researcher 49 (8) (2020): 566–578.
  • 62 Sarah Beth Bell, Rachel Farr, Eugene Ofosuc, et al., “ Implicit Bias Predicts Less Willingness and Less Frequent Adoption of Black Children More than Explicit Bias, ” The Journal of Social Psychology 163 (4) (2023): 554–565; and Raj Chetty, Nathaniel Hendren, Maggie R. Jones, and Sonya R. Porter, “ Race and Economic Opportunity in the United States: An Intergenerational Perspective, ” The Quarterly Journal of Economics 135 (2) (2020): 711–783.
  • 63 B. Keith Payne, Heidi A. Vuletich, and Jazmin L. Brown-Iannuzzi, “ Historical Roots of Implicit Bias in Slavery, ” Proceedings of the National Academy of Sciences 116 (24) (2019): 11693–11698; and Eric Hehman, Jessica K. Flake, and Jimmy Calanchini, “ Disproportionate Use of Lethal Force in Policing Is Associated with Regional Racial Biases of Residents, ” Social Psychological and Personality Science 9 (4) (2018): 393–401.
  • 64 Jordan B. Leitner, Eric Hehman, and Lonnie R. Snowden, “ States Higher in Racial Bias Spend Less on Disabled Medicaid Enrollees, ” Social Science & Medicine 208 (2018): 150–157; and Jacob Orchard and Joseph Price, “ County-Level Racial Prejudice and the Black-White Gap in Infant Health Outcomes, ” Social Science & Medicine 181 (2017): 191–198.
  • 65 Mahzarin R. Banaji, “ The Opposite of a Great Truth Is Also True: Homage of Koan #7, ” in Perspectivism in Social Psychology: The Yin and Yang of Scientific Progress (Washington, D.C.: American Psychological Association, 2004), 127–140.
  • 66 For a review, see Irene V. Blair, “The Malleability of Automatic Stereotypes and Prejudice,” Personality and Social Psychology Review 6 (3) (2002): 242–261.
  • 67 Bernd Wittenbrink, Charles M. Judd, and Bernadette Park, “ Evaluative versus Conceptual Judgments in Automatic Stereotyping and Prejudice, ” Journal of Experimental Social Psychology 37 (3) (2001): 244–252.
  • 68 Brian S. Lowery, Curtis D. Hardin, and Stacey Sinclair, “ Social Influence Effects on Automatic Racial Prejudice, ” Journal of Personality and Social Psychology 81 (2001): 842–855.
  • 69 Calvin K. Lai, Maddalena Marini, Steven A. Lehr, et al., “ Reducing Implicit Racial Preferences: I. A Comparative Investigation of 17 Interventions, ” Journal of Experimental Psychology: General 143 (4) (2014): 1765–1785.
  • 70 Past research suggests that exposure to only positive Black figures may be less effective at changing implicit racial attitudes than exposure to both positive Black and negative White exemplars. Jennifer A. Joy-Gaba and Brian A. Nosek, “ The Surprisingly Limited Malleability of Implicit Racial Evaluations, ” Social Psychology 41 (3) (2010): 137–146.
  • 71 Calvin K. Lai, Allison L. Skinner, Erin Cooley, et al., “ Reducing Implicit Racial Preferences: II. Intervention Effectiveness across Time, ” Journal of Experimental Psychology: General 145 (8) (2016): 1001–1016.
  • 72 Tessa E. S. Charlesworth and Mahzarin R. Banaji, “ Patterns of Implicit and Explicit Attitudes: I. Long-Term Change and Stability from 2007 to 2016, ” Psychological Science 30 (2) (2019): 174–192; Tessa E. S. Charlesworth and Mahzarin R. Banaji, “ Patterns of Implicit and Explicit Attitudes: IV. Change and Stability from 2007 to 2020, ” Psychological Science 33 (9) (2022); Tessa E. S. Charlesworth and Mahzarin R. Banaji, “ Patterns of Implicit and Explicit Attitudes II. Long-Term Change and Stability, Regardless of Group Membership, ” American Psychologist 76 (6) (2021): 851–869; and Tessa E. S. Charlesworth and Mahzarin R. Banaji, “ Patterns of Implicit and Explicit Stereotypes III: Long-Term Change in Gender Stereotypes, ” Social Psychological and Personality Science 13 (1) (2022): 14–26.
  • 73 Explicit race attitudes recorded a 98 percent reduction, shifting from a “‘slight’ preference for White Americans over Black Americans” to neutrality in the span of fifteen years, and making it the fastest changing explicit bias. Charlesworth and Banaji, “Patterns of Implicit and Explicit Attitudes II.”
  • 75 Brown v. Board of Education of Topeka , 347 U.S. 483 (1954),  Oyez  (accessed January 26, 2024). See also Alexandra Kalev and Frank Dobbin, “ Retooling Career Systems to Fight Workplace Bias: Evidence from U.S. Corporations, ” Dædalus 153 (1) (Winter 2024): 213–230.
  • 76 For a discussion of why legislation is often inadequate, see Wanda A. Sigur and Nicholas M. Donofrio, “ Implicit Bias versus Intentional Belief: When Morally Elevated Leadership Drives Transformational Change, ” Dædalus 153 (1) (Winter 2024): 231–249.


Kirsten N. Morehouse is a PhD candidate in psychology at Harvard University. She uses computational and behavioral tools to study when and why humans harbor implicit associations that are in conflict with ground truth data and consciously held beliefs. She has published in such journals as Proceedings of the National Academy of Sciences, Current Research in Ecological and Social Psychology , and Journal of Personality and Social Psychology .

Mahzarin R. Banaji , a Fellow of the American Academy since 2008, is the Richard Clarke Cabot Professor of Social Ethics in the Department of Psychology and the first Carol K. Pforzheimer Professor at the Radcliffe Institute for Advanced Study at Harvard University; and the George A. and Helen Dunham Cowan Chair in Human Dynamics at the Santa Fe Institute. She is the author of Blindspot: Hidden Biases of Good People (with Anthony G. Greenwald, 2013).

Kirsten N. Morehouse , Mahzarin R. Banaji; The Science of Implicit Race Bias: Evidence from the Implicit Association Test. Daedalus 2024; 153 (1): 21–50. doi: https://doi.org/10.1162/daed_a_02047

Beginning in the mid-1980s, scientific psychology underwent a revolution – the implicit revolution – that led to the development of methods to capture implicit bias: attitudes, stereotypes, and identities that operate without full conscious awareness or conscious control. This essay focuses on a single notable thread of discoveries from the Race Attitude Implicit Association Test (RA-IAT) by providing 1) the historical origins of the research, 2) signature and replicated empirical results for construct validation, 3) further validation from research in sociocognitive development, neuroscience, and computer science, 4) new validation from robust association between regional levels of race bias and socially significant outcomes, and 5) evidence for both short- and long-term attitude change. As such, the essay provides the first comprehensive repository of research on implicit race bias using the RA-IAT. Together, the evidence lays bare the hollowness of current-day actions to rectify disadvantage experienced by Black Americans at individual, institutional, and societal levels.

The science of implicit race bias emerged from a puzzle. By the 1980s, laboratory experiments and surveys revealed clear and noteworthy reductions in expressions of racial animus by White Americans toward Black Americans. 1 But on every dimension that determines life's opportunities and outcomes – housing, employment, education, health care, treatment by law and law enforcement – the presence of widespread racial inequality remained. Further, on surveys asking even slightly indirect questions, such as attitudes toward federal support for racial equality in employment, attitudes appeared to have regressed, with 38 percent support in 1964 dropping to 28 percent in 1996. 2 These inconsistencies demanded an answer from science.

In their search for an explanation, experimental psychologists recalled an interesting dissociation or disparity in beliefs recorded decades ago. During his travels through the Jim Crow South, Gunnar Myrdal, a Swedish economist engaged by the Carnegie Corporation to conduct a study on interracial relations in America, encountered an unexpected dilemma. The data from surveys and interviews of White Americans confirmed expected expressions of racism. And yet as Myrdal noted, other sentiments from the very same individuals spoke to their uneasy acknowledgment of a disparity between the cherished national ideal of equality and the history of slavery and the realities of racism, even decades after emancipation. These dissonant cognitions, expressed inside quiet homes and noisy factories, struck Myrdal as distinctive enough to serve as the motif for his classic treatise, An American Dilemma: The Negro Problem and Modern Democracy . 3

Four decades later, psychologists responded to receding levels of “old-fashioned racism” by generating theories of “aversive racism” and measures of “modern racism.” 4 These ideas emerged as necessary acknowledgment that although race bias persists, modern racism manifests in more indirect and subtle ways than before. Indeed, experimental data emerging in the 1980s further highlighted the presence of automatic race bias in the minds of honest race egalitarians. 5 With accumulating evidence demonstrating that many judgments and decisions could operate outside conscious awareness or control, social psychologists Anthony G. Greenwald and Mahzarin R. Banaji proposed the idea of implicit bias and suggested that a tractable measure of implicit cognition was needed. 6 This essay reports on a thread of the development and discoveries of a singularly important test: the Race Attitude Implicit Association Test (hereafter, RA-IAT), a measure designed to capture differential automatic attitudes, such as associations of “good” and “bad” with White and Black Americans. 7

In 1967, Martin Luther King Jr. gave the keynote address at the annual meeting of the American Psychological Association (APA), only months before his assassination. He seemed to be aware that his audience of largely White Americans was eager to learn how they could contribute to the success of the civil rights movement. But King's speech clearly conveyed his perspective regarding the responsibility of the APA's scholars and clinicians. If they wished to support the movement, they should simply “tell it like it is.” 8 This essay is a response to that call from more than fifty years ago, to emphasize the strength and pervasiveness of anti-Black bias today. We tell it like it is, believing that empirical knowledge production is indeed the responsibility of scientists with expertise in psychological and other sciences. However, the responsibility of addressing challenges to the ideal of racial justice sits squarely at the feet of the nation. In fact, it would be ill-advised to expect scientists – who generally lack knowledge of history, law, policy development, organizational behavior, and the modes of societal change – to be primarily responsible for imagining and constructing paths to social change. By telling it like it is, and remaining focused on the evidence itself, this report can, should the will exist, serve as a foothold to move America toward a solution to racial inequality.

The science of implicit bias is rooted in experimental psychology. At the core of a particular family of measures is the concept of mental chronometry : studying the mind by measuring the time course of human information processing. 9 That is, rather than analyzing participants' responses to a question, the critical unit of measurement is the response latency or the time it takes to react to a stimulus. In the 1970s, researchers conducted the first robust studies testing the automaticity of semantic memory. These studies indexed the strength of association between two concepts by using precisely timed stimuli and measuring an individual's response latencies on the order of tens of milliseconds. 10 These procedures were soon adapted to test another important dimension of word meaning: valence , that is, the good-bad or pleasant-unpleasant dimension. Evidence soon emerged that, like semantic meaning, word or concept valence could be automatically extracted by relying on response latencies. 11 Today, this result is received wisdom, and evaluative priming is regarded as a standard method to measure automatic attitudes. 12

This class of experimental procedures captured the attention of psychologists concerned with the limitation of self-report measures of racism: individuals can withhold their true beliefs in favor of more socially desirable responses. Moreover, even if the desire to speak forthrightly is assured, self-report measures are limited because humans have a desire to present a positive view of themselves, not just to others but even to themselves. Finally, even if such concerns about self and social desirability were removed, a great deal of research had demonstrated that access to mental content and process is vastly limited, making the problem less an issue of motivation and more one of inaccessibility. 13 These considerations, especially the latter, led psychologists to adapt mental chronometry to study automatic or implicit forms of bias. Race was a natural domain for exploration because of the inconsistency between conscious values in aspirational documents like the U.S. Constitution and the history of American racism.

A harbinger of the breakthrough to come appeared in a paper by psychologists John F. Dovidio, Nancy Evans, and Richard Tyler. 14 Diverging notably from previous research methods, these researchers sat their subjects before a computer screen on which the category labels “Black” or “White” appeared. After each of these primes, target words that represented positive and negative stereotypes of these groups (such as ambitious, sensitive, stubborn, lazy) appeared on the screen, and subjects were asked to decide rapidly if each stereotypic word could “ever be true” or was “always false” of the group. The results were clear: participants classified words more quickly when positive words followed “White” and when negative words followed “Black” primes, suggesting that the category White was more positive than Black in participants' implicit cognition. Although this method lacked the components that are characteristic of standard measures of implicit cognition today (the response task still required deliberation), this study pointed toward the potential of nonreactive measurement of race bias.

Social psychologist Patricia Devine's dissertation experiments hammered a second stake into the ground. 15 She subliminally presented words that captured negative Black stereotypes (in the experimental condition) or neutral words (in the control condition) and then requested evaluations of an ambiguously described person. Remarkably, those who were subliminally exposed to Black stereotypes as primes were more likely to view the ambiguously described person as hostile than those in the control condition. Equally remarkable, the degree of race bias on this more automatic measure of stereotypes was similar regardless of consciously reported levels of anti-Black prejudice.

Devine's research demonstrated the first classic dissociation between more deliberate or explicit race attitudes and more automatic or implicit race attitudes, and it prompted a shift in thinking about the nature of race bias. If bias were hidden, even to the person who carried it, that would explain how racial animus could decrease on survey measures while bias embedded in individual minds, institutions, and long-standing societal structures persisted. The two were dissociated. From a research standpoint, it was clear that to gain access to race bias in all forms, experimental psychologists would need to develop and sharpen measures of implicit race bias.

Several measures of implicit cognition emerged, among them the Implicit Association Test (IAT). 16 The IAT followed in the tradition of its predecessors by relying on a single fundamental idea: when two things become paired in our experience (for instance, granny and cookies ), evoking one (granny) will automatically activate the other (cookies). In the context of race bias, the speed and accuracy with which we associate concepts like Black and White with attributes like good and bad provides an estimate of the strength of their mental association, in this case, an implicit attitude.

Today, decades after the first uses of terms such as implicit bias, implicit attitude , and implicit stereotype , these concepts have permeated scientific and scholarly writing as well as the public's consciousness so effectively that they are rarely accompanied by a definition or explanation. 17 The earliest formal definition of implicit cognition reads: “The signature of implicit cognition is that traces of past experience affect some performance, even though the influential earlier experience is not remembered in the usual sense – that is, it is unavailable to self-report or introspection.” 18 A more colloquial definition of implicit bias has emerged as “a form of bias that occurs automatically and unintentionally, that nevertheless affects judgments, decisions, and behaviors.” 19

Both definitions are quite general, and wisely so, to be inclusive of any domain under investigation (such as self-perception, health decisions, and financial decisions). However, despite its generality, the greatest empirical attention has been devoted to one particular family of biases: those that concern attitudes (valence) and stereotypes (beliefs) about social groups (such as by age, gender, sexuality, race, ethnicity, social class, religion, or nationality). Among these, the test that has garnered the greatest scientific and public interest is the race test (as seen in the scientific record and from completion rates of the test online, where the RA-IAT outstrips all other tests in public interest). 20 Unsurprisingly, and for the same reasons, some resistance to the science of implicit race bias has also emerged, but such criticisms remain minor (2 percent of thousands of Google Alerts analyzed include any critical commentary). 21

Although full-fledged research on implicit social cognition began only in the 1990s, thousands of research articles on implicit bias have since been published. In fact, Google Scholar returns over sixty-five thousand results in response to a query of implicit bias as of January 2024. This prolificacy, while notable, renders any complete review of the literature impossible. As such, this essay constrains coverage in four ways. First, we report research on implicit race attitudes, setting aside all other social categories (such as gender, age, sexuality, disability) with a focus on construct validity. Second, we highlight research on attitudes , setting aside research on race stereotypes. Third, we focus almost entirely on a single method, the IAT, because 1) it is the most widely used measure of implicit bias today (the original report by Greenwald, Debbie McGhee, and Jordan L. K. Schwartz has recorded over seventeen thousand citations on Google Scholar as of January 2024), and 2) the online presence and popularity of the RA-IAT at Project Implicit offer an unparalleled source of data to explore implicit race attitudes. 22 Surprisingly, the signature results from this most popular IAT over the last twenty-five years have not been presented in a single location before. We synthesize them here. Fourth and finally, given the mission of Dædalus to explore the frontiers of knowledge on issues of public importance, we prioritize coverage of questions about the nature of implicit race bias and its interpretation rather than questions of primarily scientific interest, such as the nature of the psychological processes underlying implicit bias, like whether the underlying representation is best viewed as associative or propositional in nature. 23

With these constraints and opportunities in mind, we introduce 1) streams of research from other sciences, notably cognitive development, neuroscience, and computer science, to provide convergent validation for the RA-IAT data; 2) new research providing predictive validity by demonstrating robust covariation between regional RA-IAT and racial disparities in health care, education, business, and treatment by law enforcement; and 3) evidence demonstrating the RA-IAT's malleability at the individual level (change within one person) and population level (change within the United States). Together, the data offer confidence in the concept of implicit race bias for use in two ways: as a foothold to an effort for broad-based programs and procedures to ensure racial equality, and as the basis for teaching about implicit bias in all educational settings, including schools, colleges, and the workplace.

Evidence of implicit race bias using the IAT first emerged in the mid-1990s from small-scale, highly controlled experiments administered to college students, as was characteristic of research at that time. These initial experiments were important for benchmarking data that would soon arrive from exponentially larger and more diverse internet-based samples. In 1998, Yale University hosted a test of implicit race attitude, the RA-IAT, among a few other IATs, and the site was immediately bombarded with participants. The RA-IAT was immediately the most popular test, and it remains so twenty-five years later. Today, the amount of research conducted and the diversity of empirical results obtained may appear insurmountable to the general reader. Here, we have created the first repository of the basic discoveries and signature results of the RA-IAT in easy-to-access percentages, histograms, and inferential statistics.

The RA-IAT, following the general IAT procedure, consists of items that appear on a computer screen belonging to a pair of target categories (such as Black and White ) and a pair of target attributes (such as Good and Bad ). At the most basic level, the RA-IAT provides an index of implicit race bias by measuring the relative speed (on the order of milliseconds) it takes participants to sort stimuli when White and Good share a response key (and Black and Bad share a different response key), relative to when Black and Good share a response key (and White and Bad share a different response key). 24 The IAT score is captured by the statistic D, which is a measure of effect size, computed by taking the difference between response latencies in the two critical conditions (that is, Black + Good/White + Bad, and Black + Bad/White + Good) and dividing by the standard deviation across all blocks of the test.
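The D computation described above can be sketched in a few lines of code. This is a minimal illustration only: the function name and latencies are our own, and the sketch omits the latency trimming and error penalties of the full published IAT scoring algorithm.

```python
import statistics

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D score: the difference between mean response
    latencies in the two critical conditions, divided by the standard
    deviation of latencies pooled across both conditions.

    Here 'compatible' denotes the White+Good/Black+Bad block and
    'incompatible' the Black+Good/White+Bad block, so a positive D
    indicates an implicit pro-White bias. (The production scoring
    algorithm also trims extreme latencies and penalizes errors;
    those steps are omitted in this sketch.)
    """
    mean_diff = statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return mean_diff / pooled_sd

# Hypothetical response latencies in milliseconds:
compatible = [650, 700, 620, 680, 640]
incompatible = [780, 820, 760, 800, 790]
print(round(iat_d_score(compatible, incompatible), 2))
```

Because D is a standardized effect size, it is bounded in practice to roughly the −2.0 to +2.0 range reported for the figures below, regardless of a respondent's absolute speed.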

Uninitiated readers may wish to take the test at https://implicit.harvard.edu/implicit/selectatest.html . Additionally, in Table 1 , we provide descriptions and examples of the core terminology of implicit social cognition and the IAT more generally, even though our focus in this essay will remain on the concept of the attitude.

Core Terminology of Implicit Social Cognition Theory and the Implicit Association Test

Source: Descriptions and definitions by the authors.

An analysis of Project Implicit data from 3.3 million American respondents who completed the RA-IAT across fourteen years (2007–2020) shows robust evidence of implicit race bias: overall, 65 percent of respondents displayed a meaningful association of White with good relative to Black with good (“implicit pro-White bias”), whereas 19 percent of respondents displayed no preference (see Figure 1 ; for corresponding effect sizes, see Table 2 ). 25 That is, 2.1 of 3.3 million respondents automatically associated the attribute “Good” (relative to “Bad”) more so with White than Black Americans. By contrast, across all fourteen years, only 29 percent of respondents explicitly reported a preference for White over Black, and 60 percent of respondents reported equal liking for both groups. As the reader may anticipate, these overall scores are strongly modulated by the social group of the respondent; those data are presented in the next section.

Distributions of Implicit and Explicit Race Attitudes

IAT D scores range from −2.0 to 2.0, with 0 ± 0.15 serving as the null interval (“Little or No Bias”). Source: Created by the authors using Project Implicit data.

Implicit and Explicit Race Attitudes by Participants' Race/Ethnicity

IAT D scores range from −2 to +2, with positive values indicating an implicit pro-White bias.

Explicit preferences ranged from –3 (“I strongly prefer African Americans to White Americans”) to +3 (“I strongly prefer White Americans to African Americans”). The column “E-I” represents the correlation between IAT D scores and explicit preferences, with 95 percent confidence intervals reported in brackets. Source: Compiled by the authors using Project Implicit data.

This divergence between mean levels of implicit and explicit race attitudes is striking and bolstered by a dissociation between implicit and explicit race attitudes within a single person. Specifically, modest correlations between implicit and explicit attitudes are typically observed across all participants (for example, r = 0.30 [95% CI: 0.308, 0.310]), and even weaker correlations often emerge for Black Americans (see Table 2 ). 26 Additional support for this dissociation has been derived from latent variable modeling. Unlike variables that can be directly observed or measured (like temperature), latent variables refer to constructs – such as race attitudes – that must be inferred indirectly and measured with some degree of error. Although the latent implicit and explicit attitude variables are correlated ( r = 0.47), a confirmatory factor analysis suggests that a two-factor solution fits the data better than a single-factor solution with a single latent “attitude” variable. 27 In other words, implicit and explicit attitudes are related, but psychometrically distinct.
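The very narrow bracketed intervals reported above are a consequence of enormous sample sizes. As a sketch (the function name and the inputs r = 0.309 and n = 3,300,000 are illustrative assumptions, not the authors' actual computation), a confidence interval for a Pearson correlation can be obtained via the standard Fisher z-transformation:

```python
import math

def pearson_r_ci(r, n, z_crit=1.96):
    """95% confidence interval for a Pearson correlation using the
    Fisher z-transformation. With samples in the millions, the
    standard error 1/sqrt(n - 3) becomes tiny, which is why a
    correlation near 0.30 can carry an interval only a few
    thousandths wide."""
    z = math.atanh(r)                         # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)               # standard error of z
    lo = math.tanh(z - z_crit * se)           # back-transform bounds
    hi = math.tanh(z + z_crit * se)
    return lo, hi

lo, hi = pearson_r_ci(0.309, 3_300_000)
print(f"[{lo:.3f}, {hi:.3f}]")  # → [0.308, 0.310]
```

With a sample of a few hundred respondents instead, the same r would span an interval roughly a tenth wide, illustrating why Project Implicit's internet-scale data permit such precise estimates.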

Together, this pattern of data – low levels of explicit race bias but high levels of implicit bias – is considered a key result of implicit intergroup cognition. The data also provide a conceptual replication of Devine's early discovery that implicit race bias can emerge in defiance of stated egalitarian values. 28 However, unlike Devine's work with subliminally presented stimuli, the IAT does not hide its intent; the two racial categories are in full view and the test is announced as one of race bias. Moreover, the IAT components are not shrouded in mystery and completing the task is so simple that even a child can participate. These features contribute to the surprise that often accompanies the IAT: if the task itself is easy, why can I not control my responses?

Nevertheless, after nearly a century of work based on almost purely explicit measures, these results lay bare the full extent of the challenge we face when confronting the status of race in America today. 29 Recall in Myrdal's interviews during Jim Crow that respondents revealed a disparity between two consciously held beliefs: the American ideal of liberty and equality and America's history of bondage and inequality. In a sense, that conflict is psychologically simple because both cognitions are conscious. By contrast, the dissociation between explicit and implicit race attitudes is especially challenging because implicit attitudes operate largely outside the purview of conscious awareness and control, and therefore may unwittingly produce behaviors that conflict with consciously held values and beliefs.

Among psychology's most ubiquitous results is the demonstration of in-group bias. Irrespective of whether the groups involved are “minimal” (based on a “minimal” preference, such as for the artist Klee over Kandinsky) or real, research has overwhelmingly demonstrated that humans show a preference for their own group relative to the out-group. 30 For example, Japanese Americans and Korean Americans, Yankee and Red Sox fans, and Yale and Harvard students all display clear and symmetric in-group preferences. 31 However, as visualized in Figure 2 , the data across White and Black Americans paint a much more complex picture.

Distributions of Implicit and Explicit Race Attitudes for White and Black Americans

Specifically, 71 percent of White Americans displayed an implicit pro-White bias, whereas only 33 percent of Black Americans displayed an implicit pro-Black bias. These data are in contrast with the robust in-group preferences among Japanese and Korean Americans, Red Sox and Yankee fans, and Yale and Harvard students, in which each group showed an equally robust preference for its own group. This lack of in-group preference among Black Americans is a second signature result and it extends beyond Black Americans to other less advantaged groups. That is, unlike members of socially advantaged groups, who consistently display implicit in-group preferences, members of socially disadvantaged groups typically do not.

On the measure of explicit bias, an almost opposite pattern emerges, making these data among the clearest examples of mental dissociation: the lack of consistency between two measures of the same concept , within the same mind. Only 34 percent of White Americans displayed an explicit pro-White bias, whereas 56 percent of Black Americans displayed an explicit pro-Black bias. These data highlight the role conscious values play on responses. White Americans, likely being aware of the history of race relations in America, report a far more muted in-group preference. Black Americans, equally likely aware of the history of race relations in America, report an overwhelming in-group preference.

When taken together, the data for White and Black Americans showed a double dissociation. On the one hand, White Americans report little in-group preference on the explicit measure but strong in-group preference on the implicit measure. On the other hand, Black Americans show a strong in-group preference on the explicit measure but no in-group preference on the implicit measure. We regard this result as sufficiently important that we recommend it play a role in any discussion of policies to ensure racial equality. Conscious attitudes need not follow such a pattern, but to the extent that attitudes and behavior are driven by both explicit and implicit cognition, the balance sheet of intergroup liking shows a striking lack of parity.

Interestingly, when third-party groups are tested (such as Asian Americans taking a White-Black IAT), they consistently show an implicit pro-White bias (see Table 2 ). That is, rather than associating both out-groups with good equally, third-party respondents display an implicit preference for the socially dominant group. In fact, rivaling the degree of bias among White Americans, 65 percent of Asian Americans and 60 percent of Latinx Americans display an implicit pro-White preference.

Similar patterns also emerge on measures of implicit stereotyping. As one example, Morehouse and Banaji, with Keith Maddox, found that White Americans and third-party participants associate human (versus nonhuman attributes like “animal” and “robot”) more with their group, whereas nondominant groups (like Black Americans) display no “human = own group” bias. 32 This striking absence of in-group preference in members of disadvantaged groups points to the power of the social standing of groups in society, and has been interpreted to be consistent with system justification tendencies. 33

Beyond race/ethnicity, do other demographic variables modulate the strength of implicit race bias? That is, will men and women, liberals and conservatives, or older and younger respondents show different levels of implicit race bias? To test this question, variation across five additional demographic characteristics was examined: religion, level of education, age, gender, and political ideology. Implicit race bias was largely stable across respondents' religious affiliation and level of education. However, differences emerged across age, gender, and political ideology. Implicit pro-White preferences increased with age (each five-year increase translating roughly to a 3 percent increase in IAT D scores), and respondents over age sixty displayed levels of bias that were 15 percent stronger than individuals under age twenty. Further, the incidence of pro-White bias was 20 percent higher among self-identified conservatives relative to self-identified liberals, and 7 percent higher among men relative to women.

These results show how group membership is related to variation in implicit and explicit race attitudes. Later in this essay, we explore another potential determinant of attitude strength – participants' local environment – and the relationship between regional levels of implicit race attitudes and socially significant outcomes (such as lethal use of force by police or health outcomes).

Over the past twenty years, researchers have gained a new understanding of the surprisingly early precursors of race encoding and race preference in infants and young children. Although far from biological and social maturity, infants and children show evidence of a mind that is already attuned to race, yet capable in some situations of setting racial groupings aside to attend to other social categories like gender and age. 34

Human groups across the world, as much as they differ by language, culture, preferences, beliefs, and values, are all members of the same species. Is implicit bias a core capacity that unifies us as humans? If we look cross-culturally, a recent analysis of implicit race attitudes from thirty-four countries revealed that an implicit preference for White over Black appears in every country sampled (see Figure 3 ). 35

Implicit Race Attitudes by Country

Country-level RA-IAT scores expressed in Cohen's d effect sizes, with positive effect sizes representing an implicit pro-White bias. For comparison, the average IAT D-Score for the United States for the same period (2009–2019) was 0.30. Source: Adapted from Tessa Charlesworth, Mayan Navon, Yoav Rabinovich, Nicole Lofaro, and Benedek Kurdi, “The Project Implicit International Dataset: Measuring Implicit and Explicit Social Group Attitudes and Stereotypes across 34 Countries (2009–2019),” Behavioral Research Methods 55 (3) (2023): 1413–1440.

Another way to test whether a particular attitude is fundamental is to observe whether it is present in infants and young children. Our interest here is not in children qua children, but rather in developing minds. Is implicit race bias present even in early stages of cognitive-affective development? The obvious prediction is that, given the massively different levels of personal experience and knowledge of the culture that children have acquired relative to adults, implicit race bias should differ by age. But to the extent that the data show the opposite – similar patterns of implicit race bias in adults and children – we would learn that such biases require little time and experience in a culture to be acquired.

Much has been written about the development of race cognition in infancy. 36 From this work, we know that even infants prefer faces of members of their own group, an effect that likely emerges out of familiarity with their caregivers. For example, three-month-old Ethiopian infants in Ethiopia prefer African over European faces, Ashkenazi babies in Israel prefer European over African faces, and babies of Ethiopian Jews who have immigrated to Israel and have caregivers of both groups show no race preference. 37 Importantly, these preferences are early emerging but not hard-wired; they are absent at birth but present by three months of age. 38 In other words, these data show that the human brain is attuned to features, like race and gender, in the environment that can differentiate between in-group and out-group members.

Work with toddlers has been especially fruitful because the same method used to measure implicit race bias in adults could be adapted to measure implicit race bias in children. Specifically, psychologist Andrew Scott Baron and Banaji created a child version of the RA-IAT. 39 Given that children's experiences and knowledge of racial groups vastly differ from adults', the authors expected stark differences in the degree of implicit race bias expressed by children and adults. However, this is not what they found. The surprising result, now replicated many times, is that White six-year-olds, ten-year-olds, and adults show identical levels of implicit race bias.

Notably, and further mirroring the results obtained in adult samples, children's implicit race bias was qualified by social status. By age three, White American children show an in-group preference, whereas Hispanic and Black American children show no in-group preference. 40 This result is remarkable because it teaches us that implicit attitudes are absorbed from the culture and into the minds of even young children. It also challenges the theoretical intuition that implicit attitudes are learned slowly over time. (For further discussion of the development of implicit racial bias, see Andrew N. Meltzoff and Walter S. Gilliam's contribution to this volume.) 41

Understanding how the mind works is not for the meek. The Nobel Prize–winning physicist Murray Gell-Mann seemed to understand this when he reputedly said, “Think how hard physics would be if particles could think.” Not only are beings who can think the object of our study, but the thinking under consideration is not easily available to their own conscious awareness. As such, building a case for an imperceptible yet consequential bias requires a multipronged, continuous, and iterative process of validation.

There is already deep and broad evidence for the construct validity of the IAT. For example, providing face validity, we know a priori that the concept “flower” is more positive than “insect,” and the IAT detects this implicit pro-flower preference in most humans. 42 Further evidence can be obtained by studying groups who are known to differ in attitude and observing whether expected differences emerge. Indeed, we have already reported that Black and White Americans show diverging implicit race attitudes, providing additional evidence for construct validity. As a third route, construct validation has been obtained by demonstrating that findings derived on the IAT are related to (but not redundant with) conceptually similar constructs. Indeed, we have shown that although implicit and explicit race attitudes are modestly correlated, latent variable modeling suggests that a two-factor solution (with “implicit bias” and “explicit bias” as separate latent factors) provided the best fit to the data. In fact, providing discriminant validation, implicit insect-flower attitudes did not hang together with implicit intergroup attitudes.

In the following sections, we will encounter construct validation in several new ways. In particular, we show that methods from other fields (including neuroimaging and word embeddings) also demonstrate evidence of implicit race bias. Moreover, we explore the origins and consequences of implicit race bias to push the engine of construct validity further. Together, these various approaches have not only created a strong foundation for understanding the concept of implicit race bias, but have produced unexpected empirical findings that challenged and refined existing theory.

When the first pre-IAT measures of implicit attitudes were introduced, little discussion ensued about whether these unfamiliar measures should be considered measures of attitude. 43 However, when the IAT was introduced, the question of construct validity arose immediately. 44 It became obvious that measures directly interrogating the brain, especially regions long identified as playing a role in emotional learning (such as Pavlovian conditioning), could prove useful if correlations could be observed between IAT behavior and activation patterns in regions known to be involved in emotional learning.

Research with neuroimaging methods like fMRI has long demonstrated that the amygdala, a subcortical brain structure, is involved in the continuous evaluation and integration of sensory information, with a special role in assigning values for valence and intensity. 45 Crucially, neuroscientist Elizabeth A. Phelps and colleagues showed that amygdala activation to Black faces of unknown individuals (relative to White) was significantly correlated with implicit race bias; no such correlation was observed with explicit race bias as measured by the Modern Racism Scale. 46 This suggested that whatever the RA-IAT detects has a core valence component, in line with the idea of “attitudes” as measuring evaluations along the dimension of positive and negative. A second study suggested that race-based responding is modulated by experience: when the faces of famous and generally liked Black (Denzel Washington) and White (Jerry Seinfeld) individuals were used, this activation-implicit bias correlation disappeared. Put differently, this result indicated that familiarity can interrupt the relationship, providing two-pronged convergence.

In the decades that have followed, a plethora of evidence has linked implicit attitudes with neural responses to race-based in-group and out-group faces, as well as with more downstream decision-making that tests the ability to control default, biased responding. 47 Results of relevance demonstrate that 1) the neural representation of race-based attitudes involves a range of overlapping and interacting brain systems, 2) race-based processing of in-group and out-group faces occurs early in the information-processing sequence, beginning as early as one hundred milliseconds after encountering a face, 3) implicit bias observed in brain activity is malleable and responsive to task demands and context, and 4) individual differences exist in the ability to exert control over biased responses, and this control itself can be initiated without awareness and can involve both the inhibition of unwanted responses and the initiation and application of intentional behavior. 48 Crucially, this last piece of evidence highlights the need for proactive interventions. If bias can creep in, even during early visual processing, then it is unrealistic to expect even well-intentioned individuals to prevent bias from impacting their behavior in the moment. Instead, changes that alter the choice structure and prevent bias from entering the decision-making process are more likely to succeed.

Overall, neuroscientific evidence provided important construct validity for the IAT and its presumed measurement of expressions of value along a good-bad dimension. Moreover, it indicated that implicit race bias converges with multiple levels of information processing from the earliest stages of face detection to judgments of behavior.

A long history of research on natural language processing (NLP), coupled with the availability of massive language corpora (such as the Common Crawl and Google Books), has created the opportunity to learn how social groups are represented in language on an unprecedented scale. Specifically, mirroring the logic of the IAT, computer scientist Aylin Caliskan and colleagues used word embeddings – a technique that maps words or phrases to a high-dimensional vector space – to understand the relative associations between targets (such as Black and White people) and attributes (such as Good and Bad). 49 Creating a parallel measure, the Word Embedding Association Test (WEAT), they performed tests of group-attribute associations in embeddings trained on eight hundred and forty billion tokens of internet text. In doing so, they replicated the classic implicit race bias finding: European American names were more likely than African American names to be closer (semantically similar) to pleasant words than to unpleasant words.
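The WEAT's differential-association logic can be sketched in a few lines. The snippet below is an illustrative reconstruction, not Caliskan and colleagues' code: the two-dimensional vectors and the word sets are toy stand-ins for real embeddings, chosen only to show how target-attribute associations are compared.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A minus to attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Standardized difference in attribute association between target sets X and Y."""
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy 2-D "embeddings": the first axis loosely tracks positive valence
pleasant   = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]   # attribute set A
unpleasant = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]   # attribute set B
names_x    = [np.array([0.8, 0.3]), np.array([0.9, 0.25])]  # target set X
names_y    = [np.array([0.3, 0.8]), np.array([0.25, 0.9])]  # target set Y

d = weat_effect_size(names_x, names_y, pleasant, unpleasant)
# d > 0 here, since the X names sit closer to pleasant words than the Y names do
```

In the published work, the same comparison is run over real embedding vectors for European American and African American names against pleasant and unpleasant words, yielding the IAT-like asymmetry described above.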

These approaches have also enabled researchers to ask questions about human attitudes that are beyond the scope of behavioral tools. Experimental psychologist Tessa Charlesworth, Caliskan, and Banaji used trained databases of historical texts to demonstrate that attitudinal biases toward racial/ethnic groups have remained stable over the course of two centuries (1800–1999). 50 Moreover, just as neuroimaging data showed convergence between theoretically identified brain regions like the amygdala and the RA-IAT but not with explicit race bias, analyses of the biases embedded in language suggest that they are related to IATs but not self-report data. 51 In other words, linguistic patterns represent a reservoir for collectively held or culturally imprinted beliefs. 52

In fact, recent work indicates that algorithms are even capable of refracting beliefs about racial purity. 53 Specifically, information scientist Robert Wolfe, Caliskan, and Banaji showed that CLIP, an algorithm that relies on both image and text data, has learned the one-drop rule or hypodescent (that is, a legal principle prominent even in the twentieth century that held that a person with just one Black ancestor is to be considered Black). 54 Overall, these findings add to the burgeoning evidence that implicit bias embedded in human minds exists in language and that algorithms trained on these databases will carry, amplify, and even reproduce bias. 55

A growing number of “audit studies” have demonstrated group-based discrimination in controlled field settings. 56 These studies, typically conducted by economists and sociologists, create highly standardized but naturalistic situations to explore how specific variables (such as race/ethnicity) influence behavior. For example, economist Marianne Bertrand and computation and behavioral scientist Sendhil Mullainathan sent roughly five thousand fictitious résumés to employers in Boston and Chicago. 57 The résumés were identical in all ways except that the applicant's name was either a White- or Black-sounding name. Despite their identical qualifications, résumés with White names received 50 percent more callbacks than résumés with Black names. In another example in the domain of employment, Devah Pager and colleagues demonstrated that, despite having equivalent résumés and being actors trained to respond identically to interview questions, Black applicants were half as likely to receive a callback as White applicants. 58 In fact, in an even more stunning demonstration of race bias, Black applicants were only as likely to receive a callback as White applicants with a felony record. These individual studies mirror a larger trend observed in a meta-analysis: hiring discrimination against African Americans remained stable over a twenty-five-year period (1989–2015). 59

These audit studies, like the perplexing disconnect between consciously reported prejudice and observed inequalities in society, require an explanation. How is it that the same résumé or qualifications can be evaluated more positively if they are attributed to a White person? We posit that implicit bias is the most likely explanation. The difficulty was that, until recently, no direct link between measures of implicit bias and large-scale race-based discrimination was available. However, a new line of research, now reaching a substantial number of demonstrations, provides the first persuasive evidence that implicit bias is indeed correlated with racial discrimination on socially significant behaviors (SSBs) in domains like employment, health care, education, and law enforcement. 60

Specifically, a mounting body of research across laboratories and disciplines within the social sciences shows that U.S. regions with stronger implicit race bias (measured by the RA-IAT and stereotype IATs) also have larger Black-White disparities in SSBs. In fact, this research has demonstrated covariation between regional implicit race bias and SSBs in four prominent domains: 1) education (including suspension rates and Black-White gaps in standardized test scores); 61 2) life and economic opportunity (adoption rates and upward mobility); 62 3) law enforcement (Black-White disparities in traffic stops and the use of lethal force); 63 and 4) health care (Medicaid spending and Black-White gaps in infant birth weight and preterm births). 64 These studies show that implicit bias, measured at the level of individual minds but aggregated across geographic space, reflects race discrimination that cannot otherwise be explained.
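The aggregation logic behind these regional studies can be illustrated with simulated data. Everything below is synthetic: the number of regions, respondents per region, and effect sizes are invented, and serve only to show how individual IAT D scores are averaged by geography and then correlated with a regional disparity measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent regional mean bias for 50 regions
n_regions = 50
region_bias = rng.normal(0.35, 0.1, n_regions)

# ~200 individual IAT D scores per region, noisy around the regional mean
scores = [rng.normal(mu, 0.4, 200) for mu in region_bias]

# Aggregate individual scores to the regional level, as in the studies described
regional_mean_d = np.array([s.mean() for s in scores])

# Hypothetical Black-White disparity measure that covaries with regional bias
disparity = 0.8 * region_bias + rng.normal(0, 0.05, n_regions)

# Regions with higher aggregated implicit bias show larger disparities
r = np.corrcoef(regional_mean_d, disparity)[0, 1]
```

The key design choice mirrored here is that bias is measured in individual minds but analyzed at the level of geographic aggregates, where idiosyncratic noise averages out and stable regional differences emerge.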

With hindsight, we know that implicit bias is malleable. However, this was not always received knowledge or even expected. In the early years of research on implicit bias using the IAT, many primary investigators believed that implicit bias was intractable. 65 Yet even early work raised the possibility that implicit race attitudes were sensitive to perceivers' motivations, goals, and strategies, as well as contextual manipulations. 66 For example, social psychologist Bernd Wittenbrink and colleagues found that negativity toward Black individuals was lower after watching a movie clip depicting Black Americans in a positive setting (relative to a negative setting). 67 Similarly, social psychologist Brian Lowery and colleagues demonstrated that White Americans displayed lower levels of negativity toward Black individuals in the presence of a Black (rather than White) experimenter. 68

Extending this work, psychologist Calvin Lai and colleagues conducted an important study exploring the comparative efficacy of seventeen interventions designed to reduce implicit race bias. 69 Although these interventions were roughly five minutes long and administered only once, eight of the seventeen were effective in reducing implicit race bias. The most effective interventions invoked high self-involvement and/or linked Black people with positivity and White people with negativity. 70 By contrast, interventions that required perspective-taking, asked participants to consider egalitarian values, or induced a positive emotion were ineffective. When participants' attitudes were tested even a few hours after the intervention, none of the eight previously effective interventions produced a continued reduction in implicit race bias. 71 Of course, this temporary (but not durable) change is to be expected; implicit bias should snap back, rubber band–like, to some stable individual, situational, or broader cultural default. In fact, that single presentations of short interventions can produce any change is surprising.

But many “light” interventions, often involving a few counterattitudinal associations or a hypothetical written scenario (a paragraph long) presenting counterattitudinal information, do not show long-term change. To us, this lack of long-term change is hardly surprising given the weakness of the interventions. In such cases, implementing flimsy interventions and looking for long-term effects is a fool's errand; yet well-intentioned investigators, hoping that a sentence or two might wipe out a lifetime of learning, have tried them.

These laboratory studies provide excellent tests of specific interventions, but they are less equipped to test whether implicit bias has changed over the course of years or decades. As such, the key question of whether long-term change was possible remained open. Recent analyses by Charlesworth and Banaji addressed this question. 72 Specifically, using time-series modeling, they traced almost three million Americans' implicit race attitudes over fourteen years (2007–2020). Crucially, they found evidence of pervasive change: across all participants, implicit race bias decreased by 26 percent, making it the second fastest changing implicit attitude after sexuality attitudes (anti-gay bias), which saw a dramatic 65 percent reduction during the same period. 73 In fact, if trends continue, implicit race attitudes could first touch neutrality in 2035.
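The forecasting idea can be illustrated with a deliberately simple linear fit. Charlesworth and Banaji used more sophisticated time-series models, and the yearly D scores below are invented for illustration, so the fitted crossing year will not match the paper's 2035 projection; the sketch only shows the mechanics of extrapolating a declining trend to neutrality (D = 0).

```python
import numpy as np

# Invented yearly mean race IAT D scores with a steady downward trend
years = np.arange(2007, 2021)                  # 2007-2020 inclusive
d_scores = np.linspace(0.42, 0.31, len(years)) # hypothetical values, not real data

# Fit a straight line and solve for the year at which the trend reaches D = 0
slope, intercept = np.polyfit(years, d_scores, 1)
neutrality_year = -intercept / slope
```

Because the IAT D score is bounded and change rates can themselves change, any such extrapolation is a projection of current trends rather than a prediction.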

Moreover, this change was not restricted to only certain segments of society (for instance, younger and more liberal participants). Rather, pointing to widespread societal change, men and women, older and younger, liberal and conservative, and more- and less-educated participants alike all moved toward neutrality. 74 The only exception was that, unlike White participants, who recorded a 27 percent reduction in implicit bias (their IAT D score fell by 0.11 points), Black participants' implicit attitudes remained relatively stable, changing only 0.03 IAT D score points over the fourteen-year period (see Table 3).

Table 3: Change in Implicit Race Attitudes by Participants' Race/Ethnicity

“Start Value” refers to the mean IAT D score recorded in January 2007; “End Value” refers to the mean IAT D score recorded in December 2020. Source: Compiled by the authors using Project Implicit data.

This widespread change is remarkable, especially when one considers that not all implicit biases are changing. For example, implicit anti-elderly, anti-disability, and anti-fat biases remained relatively stable over the same fourteen-year period. This change toward some social categories but not others raises an important question: what is the source of the change?

We pose this question because of its relevance to the different claims about how to reduce bias, and to where resources earmarked for attitude change should be directed. On the one hand, some researchers and practitioners have criticized a focus on change at the individual level (such as deploying appeals to equality to change individual minds). On the other hand, past interventions targeting structural-level change have not eradicated racial inequalities as expected. 75 In fact, change through laws and acts of Congress, if resisted by individuals, may actually prompt reactance and undo progress. 76

We noted above that implicit anti-gay bias dropped dramatically (65 percent) between 2007 and 2020. What caused this surprising and especially rapid change? We propose that anti-gay bias may possess unique features that allowed such change. For one, sexuality is more easily concealed than a person's race/ethnicity, gender, age, or weight. But we argue that another explanation warrants further investigation: interventions against anti-gay bias occurred at three levels within the same fourteen-year period.

First, change occurred at the individual level as children (and adults of all ages) came out to parents, grandparents, friends, neighbors, and coworkers. Love, already in place, trumped even implicit bias. In other words, the concealable nature of sexuality forced individuals to reconcile their anti-gay attitudes with their positive feelings toward their loved ones; this choice architecture was not in place for attitudes about other social groups. Second, change occurred at the institutional level. Of course, such change was not adopted everywhere, and some organizations were directly hostile to nonheterosexual employees. However, many institutions, like the U.S. military, enacted policies that affirmed the status of same-sex relationships (such as extending health benefits to same-sex partners) even before the country did. Third, change occurred at the macro level. Massachusetts and other states legalized same-sex marriage in the early 2000s, and the Supreme Court of the United States followed suit in 2015. In our estimation, it is rare for interventions at all three levels – individual, institutional, and societal – to occur within a short period of time. To our knowledge, change at all three levels within a short time frame has not materialized for other social groups.

Implicit race bias exists. Support for its presence is undergirded by evidence from other areas of psychology (cognitive, developmental, neuroscience) as well as other behavioral sciences using quite different methods. New evidence shows that regional implicit bias predicts socially significant outcomes of Black-White disparity along several important dimensions that determine life's opportunities and outcomes. To bring hope, data also reveal that implicit bias is malleable. Overall, these data represent one of many robust streams of scientific evidence available today. Together, they call for a nationwide undertaking for change – at the individual, institutional, and societal levels.

Howard Schuman, Charlotte Steeh, and Lawrence Bobo, Racial Attitudes in America: Trends and Interpretations (Cambridge, Mass.: Harvard University Press, 1985).

Howard Schuman, Charlotte Steeh, Lawrence D. Bobo, and Maria Krysan, Racial Attitudes in America: Trends and Interpretations , rev. ed. (Cambridge, Mass.: Harvard University Press, 1997).

Gunnar Myrdal, An American Dilemma: The Negro Problem and Modern Democracy , volumes 1 and 2 (Oxford: Harper, 1944).

For aversive racism, see John F. Dovidio and Samuel L. Gaertner, “Prejudice, Discrimination, and Racism: Historical Trends and Contemporary Approaches,” in Prejudice, Discrimination, and Racism , ed. John F. Dovidio and Samuel L. Gaertner (San Diego: Academic Press, 1986), 1–34. For so-called modern racism, see John B. McConahay, “Modern Racism, Ambivalence, and the Modern Racism Scale,” in ibid.

Patricia G. Devine, “Stereotypes and Prejudice: Their Automatic and Controlled Components,” Journal of Personality and Social Psychology 56 (1989): 5–18, https://doi.org/10.1037/0022-3514.56.1.5 .

Anthony G. Greenwald and Mahzarin R. Banaji, “Implicit Social Cognition: Attitudes, Self-Esteem, and Stereotypes,” Psychological Review 102 (1) (1995): 4.

Anthony G. Greenwald, Debbie E. McGhee, and Jordan L. K. Schwartz, “Measuring Individual Differences in Implicit Cognition: The Implicit Association Test,” Journal of Personality and Social Psychology 74 (6) (1998): 1464–1480, https://doi.org/10.1037/0022-3514.74.6.1464 .

“King's Challenge to the Nation's Social Scientists,” The APA Monitor 30 (1) (1999), https://www.apa.org/topics/equity-diversity-inclusion/martin-luther-king-jr-challenge .

R. Duncan Luce, Response Times: Their Role in Inferring Elementary Mental Organization (New York: Oxford University Press, 1986); and Michael I. Posner, Chronometric Explorations of Mind (Oxford: Lawrence Erlbaum, 1978).

David E. Meyer and Roger W. Schvaneveldt, “Facilitation in Recognizing Pairs of Words: Evidence of a Dependence between Retrieval Operations,” Journal of Experimental Psychology 90 (1971): 227–234, https://doi.org/10.1037/h0031564 ; and James H. Neely, “Semantic Priming and Retrieval from Lexical Memory: Roles of Inhibitionless Spreading Activation and Limited-Capacity Attention,” Journal of Experimental Psychology: General 106 (3) (1977): 226–254, https://doi.org/10.1037/0096-3445.106.3.226 .

Russell H. Fazio, David M. Sanbonmatsu, Martha Powell, and Frank R. Kardes, “On the Automatic Activation of Attitudes,” Journal of Personality and Social Psychology 50 (1986): 229–238, https://doi.org/10.1037/0022-3514.50.2.229 .

For a fuller treatment of the “implicit revolution,” see Anthony G. Greenwald and Mahzarin R. Banaji, “The Implicit Revolution: Reconceiving the Relation between Conscious and Unconscious,” American Psychologist 72 (9) (2017): 861–871, https://doi.org/10.1037/amp0000238 .

Greenwald and Banaji, “Implicit Social Cognition”; and Richard E. Nisbett and Timothy D. Wilson, “Telling More than We Can Know: Verbal Reports on Mental Processes,” Psychological Review 84 (1977): 231–259, https://doi.org/10.1037/0033-295X.84.3.231 .

John F. Dovidio, Nancy Evans, and Richard B. Tyler, “Racial Stereotypes: The Contents of Their Cognitive Representations,” Journal of Experimental Social Psychology 22 (1) (1986): 22–37, https://doi.org/10.1016/0022-1031(86)90039-9 .

Devine, “Stereotypes and Prejudice.”

For comprehensive reviews of measures of implicit cognition, see Bertram Gawronski and Jan De Houwer, “Implicit Measures in Social and Personality Psychology,” in Handbook of Research Methods in Social and Personality Psychology , ed. Harry T. Reis and Charles M. Judd (Cambridge: Cambridge University Press, 2014), 283–310; Brian A. Nosek, Carlee Beth Hawkins, and Rebecca S. Frazier, “Implicit Social Cognition: From Measures to Mechanisms,” Trends in Cognitive Sciences 15 (4) (2011): 152–159, https://doi.org/10.1016/j.tics.2011.01.005 ; and Bertram Gawronski, “Automaticity and Implicit Measures,” Handbook of Research Methods in Social and Personality Psychology , ed. Reis and Judd.

Mahzarin R. Banaji, Curtis Hardin, and Alexander J. Rothman, “Implicit Stereotyping in Person Judgment,” Journal of Personality and Social Psychology 65 (1993): 272–281, https://doi.org/10.1037/0022-3514.65.2.272 ; and Greenwald and Banaji, “Implicit Social Cognition.”

Greenwald and Banaji, “Implicit Social Cognition,” 4–5.

“Implicit Bias,” National Institutes of Health, https://web.archive.org/web/20220716115620/https://diversity.nih.gov/sociocultural-factors/implicit-bias (accessed January 26, 2024).

For an analysis of Google alerts on “implicit bias,” see Kirsten N. Morehouse, Swathi Kella, and Mahzarin R. Banaji, “Implicit Bias in the Public Eye: Using Google Alerts to Determine Public Sentiment” (in preparation).

Jennifer L. Howell and Kate A. Ratliff, “Not Your Average Bigot: The Better-than-Average Effect and Defensive Responding to Implicit Association Test Feedback,” British Journal of Social Psychology 56 (1) (2017): 125–145, https://doi.org/10.1111/bjso.12168 ; and Alexander M. Czopp, Margo J. Monteith, and Aimee Y. Mark, “Standing up for a Change: Reducing Bias through Interpersonal Confrontation,” Journal of Personality and Social Psychology 90 (5) (2006): 784–803, https://doi.org/10.1037/0022-3514.90.5.784 .

Greenwald, McGhee, and Schwartz, “Measuring Individual Differences in Implicit Cognition.” As of May 2023, over thirty million completed IATs have been sampled and over seventy million tests have been at least partially sampled on Project Implicit. See Project Implicit, https://implicit.harvard.edu (accessed May 1, 2023).

For more on the psychological processes, see Benedek Kurdi, Kirsten N. Morehouse, and Yarrow Dunham, “How Do Explicit and Implicit Evaluations Shift? A Preregistered Meta-Analysis of the Effects of Co-Occurrence and Relational Information,” Journal of Personality and Social Psychology 124 (6) (2022), https://doi.org/10.1037/pspa0000329 ; and Benedek Kurdi and Mahzarin R. Banaji, “Implicit Person Memory: Domain-General and Domain-Specific Processes of Learning and Change,” PsyArXiv, October 18, 2021, last edited November 18, 2021, https://doi.org/10.31234/osf.io/hqnfy .

For a detailed review of the IAT, see Kate A. Ratliff and Colin Tucker Smith, “The Implicit Association Test,” Dædalus 153 (1) (Winter 2024): 51–64, https://www.amacad.org/publication/implicit-association-test .

Standard interpretations regard 0 ± 0.15 as the null (no bias) interval. When using any deviation away from zero as the cutoff, 75 percent of respondents displayed an implicit White + Good/Black + Bad association. Tessa E. S. Charlesworth and Mahzarin R. Banaji, “Patterns of Implicit and Explicit Attitudes: IV. Change and Stability from 2007 to 2020,” Psychological Science 33 (9) (2022), https://doi.org/10.1177/09567976221084257 .

Brian A. Nosek, Frederick L. Smyth, Jeffrey J. Hansen, et al., “Pervasiveness and Correlates of Implicit Attitudes and Stereotypes,” European Review of Social Psychology 18 (1) (2007): 36–88, https://doi.org/10.1080/10463280701489053 .

William A. Cunningham, John B. Nezlek, and Mahzarin R. Banaji, “Implicit and Explicit Ethnocentrism: Revisiting the Ideologies of Prejudice,” Personality and Social Psychology Bulletin 30 (10) (2004): 1332–1346, https://doi.org/10.1177/0146167204264654 .

Mahzarin R. Banaji and Anthony G. Greenwald, Blindspot: Hidden Biases of Good People (New York: Delacorte Press, 2013).

Henri Tajfel, Michael Billig, Robert P. Bundy, and Claude Flament, “Social Categorization and Intergroup Behaviour,” European Journal of Social Psychology 1 (2) (1971): 149–178, https://doi.org/10.1002/ejsp.2420010202 .

Steven A. Lehr, Meghan L. Ferreira, and Mahzarin R. Banaji, “When Outgroup Negativity Trumps Ingroup Positivity: Fans of the Boston Red Sox and New York Yankees Place Greater Value on Rival Losses than Own-Team Gains,” Group Processes & Intergroup Relations 22 (1) (2019): 26–42, https://doi.org/10.1177/1368430217712834 ; Kristin A. Lane, Jason P. Mitchell, and Mahzarin R. Banaji, “Me and My Group: Cultural Status Can Disrupt Cognitive Consistency,” Social Cognition 23 (4) (2005): 353–386, https://doi.org/10.1521/soco.2005.23.4.353 ; and Greenwald, McGhee, and Schwartz, “Measuring Individual Differences in Implicit Cognition.”

Kirsten N. Morehouse, Keith Maddox, and Mahzarin R. Banaji, “All Human Social Groups Are Human, but Some Are More Human than Others: A Comprehensive Investigation of the Implicit Association of ‘Human’ to U.S. Racial/Ethnic Groups,” Proceedings of the National Academy of Sciences 120 (22) (2023): e2300995120, https://doi.org/10.1073/pnas.2300995120 .

John T. Jost, “A Quarter Century of System Justification Theory: Questions, Answers, Criticisms, and Societal Applications,” British Journal of Social Psychology 58 (2) (2019): 263–314, https://doi.org/10.1111/bjso.12297 ; John T. Jost, Mahzarin R. Banaji, and Brian A. Nosek, “A Decade of System Justification Theory: Accumulated Evidence of Conscious and Unconscious Bolstering of the Status Quo,” Political Psychology 25 (6) (2004): 881–919, https://doi.org/10.1111/j.1467-9221.2004.00402.x ; and John T. Jost and Mahzarin R. Banaji, “The Role of Stereotyping in System-Justification and the Production of False Consciousness,” British Journal of Social Psychology 33 (1) (1994): 1–27, https://doi.org/10.1111/j.2044-8309.1994.tb01008.x .

For a review, see Tessa Charlesworth and Mahzarin R. Banaji, “The Development of Social Group Cognition in Infancy and Childhood,” in The Oxford Handbook of Social Cognition , 2nd edition, ed. Donal E. Carlston, K. Johnson, and Kurt Hugenberg (Oxford: Oxford University Press, in press).

Tessa Charlesworth, Mayan Navon, Yoav Rabinovich, Nicole Lofaro, and Benedek Kurdi, “The Project Implicit International Dataset: Measuring Implicit and Explicit Social Group Attitudes and Stereotypes Across 34 Countries (2009–2019),” PsyArXiv, December 11, 2021, last edited March 21, 2022, https://doi.org/10.31234/osf.io/sr5qv .

For a review, see Charlesworth and Banaji, “The Development of Social Group Cognition in Infancy and Childhood.” See also Talee Ziv and Mahzarin R. Banaji, “Representations of Social Groups in the Early Years of Life,” in The SAGE Handbook of Social Cognition , ed. Susan Fiske and C. Macrae (London: SAGE Publications, 2012), 372–389, https://doi.org/10.4135/9781446247631.n19 .

Yair Bar-Haim, Talee Ziv, Dominique Lamy, and Richard M. Hodes, “Nature and Nurture in Own-Race Face Processing,” Psychological Science 17 (2) (2006): 159–163, https://doi.org/10.1111/j.1467-9280.2006.01679.x .

David J. Kelly, Paul C. Quinn, Alan M. Slater, et al., “Three-Month-Olds, but Not Newborns, Prefer Own-Race Faces,” Developmental Science 8 (6) (2005): F31–F36, https://doi.org/10.1111/j.1467-7687.2005.0434a.x .

Andrew Scott Baron and Mahzarin R. Banaji, “The Development of Implicit Attitudes: Evidence of Race Evaluations from Ages 6 and 10 and Adulthood,” Psychological Science 17 (1) (2006): 53–58, https://doi.org/10.1111/j.1467-9280.2005.01664.x .

Yarrow Dunham, Andrew Scott Baron, and Mahzarin R. Banaji, “Children and Social Groups: A Developmental Analysis of Implicit Consistency in Hispanic Americans,” Self and Identity 6 (2–3) (2007): 238–255, https://doi.org/10.1080/15298860601115344 .

For a further discussion of the development of implicit race bias, see Andrew N. Meltzoff and Walter S. Gilliam, “Young Children & Implicit Racial Biases,” Dædalus 153 (1) (Winter 2024): 65–83, https://www.amacad.org/publication/young-children-implicit-racial-biases .

To take a flower-insect IAT, visit https://outsmartingimplicitbias.org/module/iat .

Fazio, Sanbonmatsu, Powell, and Kardes, “On the Automatic Activation of Attitudes.”

Russell H. Fazio (in a personal communication, May 1, 2023) confirmed the easy acceptance of results from semantic priming methods that demonstrated automatic attitudes. The reason the IAT was held to higher standards is likely because its chosen attitude objects were not nonsocial entities like clouds and pizza but rather social categories like race, gender, sexuality, and age. It is likely that discovery of bias on these topics was simply less palatable, including to psychologists who were not familiar with the research tradition on implicit memory from which these measures were derived.

Joseph E. LeDoux, “Emotion and the Amygdala,” in The Amygdala: Neurobiological Aspects of Emotion, Memory, and Mental Dysfunction (New York: Wiley-Liss, 1992), 339–351; and Goran Šimić, Mladenka Tkalčić, Vana Vukić, et al., “Understanding Emotions: Origins and Roles of the Amygdala,” Biomolecules 11 (6) (2021), https://pubmed.ncbi.nlm.nih.gov/34072960/ .

Elizabeth A. Phelps, Kevin J. O'Connor, William A. Cunningham, et al., “Performance on Indirect Measures of Race Evaluation Predicts Amygdala Activation,” Journal of Cognitive Neuroscience 12 (5) (2000): 729–738, https://doi.org/10.1162/089892900562552 .

Opinion | How We’ve Taken the Bias Out of ‘Implicit Bias Training’

Is there any escape from woke indoctrination? I've heard versions of this question from countless doctors, nurses and other medical professionals. Healthcare has been captured by activists pushing divisive and discriminatory ideologies, especially through education and training. One of the most visible manifestations is mandatory "implicit bias training," which seven states have adopted and at least 25 more are considering. In Michigan, medical professionals will soon be free.

On May 1, my organization will launch a continuing medical education course that fulfills Michigan’s implicit-bias-training mandate. Created by Gov. Gretchen Whitmer in 2021 and updated last year, the mandate requires regular indoctrination for the members of 26 medical fields—not only doctors and nurses, but athletic trainers, acupuncturists, massage therapists, midwives and many others. As many as 400,000 medical professionals are now required to learn about implicit bias every time they apply for or renew a license. Michigan’s mandate is one of the most expansive in the nation.

The authors of this policy no doubt want every medical professional in the state to accept the woke party line on race. But our course goes in a more ethical—and less political—direction. Instead of teaching implicit bias as fact, we’re telling medical professionals the truth—that this training is grounded in falsehood and is a direct threat to the health and well-being of patients.

How are we able to offer this course? The rule requires providers to be “a nationally-recognized or state-recognized health-related organization,” which we are. We also provide “information on implicit bias,” another requirement. And our course includes “strategies to reduce disparities in access to and delivery of health care services.”

We start with key facts about implicit bias, including the nature of the training that purports to combat it. As my colleague Laura L. Morgan has written in these pages, implicit-bias training is generally filled with insulting accusations against medical professionals, including overtly racist statements and assumptions. Medical professionals have been told in training that they contribute to "modern-day lynchings in the workplace" and that their "implicit bias kills." This isn't medical education. It's ideological claptrap.

Is implicit bias even real? Of course some people have prejudices, but they also have free will—a reality the implicit-bias-industrial complex ignores. As Hal R. Arkes has shown, the most common test for implicit bias fails to meet the basic standards of an acceptable psychological assessment tool, and implicit bias accounts for a vanishingly small percentage of prejudicial behavior. The test's creators and advocates have also admitted the test can't predict behavior. Why should medical professionals be forced to learn about something that is neither accurately measured nor well understood?

The real point of such mandates as Michigan's is to steep medical professionals in a divisive worldview. Our course exposes how such training serves an "antiracist" agenda, which is to say, it encourages medical professionals to be racist, since "antiracism" demands discriminatory treatment based on skin color. Activists are already pushing medical professionals to provide preferential access to care based on race. Doctors, nurses, therapists and others should be taught to fight this evil idea, not implement it.

Our course also illuminates one of the most disturbing premises of wokeness in medicine. The concept of implicit bias is closely connected to the assertion that white doctors are hurting or even killing black patients, a simplistic and false argument based on the existence of real yet complicated health disparities. This argument is predicated on the belief—explicitly stated at medical schools and by major medical associations—that health outcomes improve when patients and physicians are matched by race. That’s false, as our course shows with a thorough review of the scholarly evidence. More to the point, “racial concordance,” as activists call it, is a thinly veiled argument for the resegregation of medicine.

Our course is available to every medical professional in Michigan. We hope it will become the default choice for new-license and renewal applicants alike. It may also meet continuing medical education requirements for medical professionals in other states, and it could be relevant for workers in other industries who have been told to get training about implicit bias. Medical professionals deserve to know they’re being primed to accept abominable ideas. They should be freed from woke brainwashing so they can focus their energy on the hard work of improving lives.

Dr. Goldfarb, a physician, is chairman of Do No Harm.
