Kohlberg's Theory of Moral Development

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Kohlberg's theory of moral development focuses on how children develop morality and moral reasoning. It suggests that moral development occurs in a series of six stages and that moral logic is primarily concerned with seeking and maintaining justice.

Here we discuss how Kohlberg developed his theory of moral development and the six stages he identified as part of this process. We also share some critiques of Kohlberg's theory, many of which suggest that it may be biased based on the limited demographics of the subjects studied.

What Is Moral Development?

Moral development is the process by which people develop the distinction between right and wrong (morality) and engage in reasoning between the two (moral reasoning).

How do people develop morality? This question has fascinated parents, religious leaders, and philosophers for ages, but moral development has also become a hot-button issue in psychology and education. Do parental or societal influences play a greater role in moral development? Do all kids develop morality in similar ways?

American psychologist Lawrence Kohlberg developed one of the best-known theories exploring some of these basic questions. His work modified and expanded upon Jean Piaget's previous work but was more centered on explaining how children develop moral reasoning.

Kohlberg extended Piaget's theory, proposing that moral development is a continual process that occurs throughout the lifespan. Kohlberg's theory outlines six stages of moral development within three different levels.

In recent years, Kohlberg's theory has been criticized as being Western-centric with a bias toward men (he primarily used male research subjects) and for having a narrow worldview based on upper-middle-class value systems and perspectives.

How Kohlberg Developed His Theory

Kohlberg based his theory on a series of moral dilemmas presented to his study subjects. Participants were also interviewed to determine the reasoning behind their judgments in each scenario.

One example was "Heinz Steals the Drug." In this scenario, a woman has cancer, and her doctors believe only one drug might save her. The drug was discovered by a local pharmacist who could make it for $200 per dose but sold it for $2,000 per dose. The woman's husband, Heinz, could raise only $1,000 to buy the drug.

He tried to negotiate with the pharmacist for a lower price or to be extended credit to pay for it over time. But the pharmacist refused to sell it for any less or to accept partial payments. Rebuffed, Heinz instead broke into the pharmacy and stole the drug to save his wife. Kohlberg asked, "Should the husband have done that?"

Kohlberg was not interested so much in the answer to whether Heinz was wrong or right but in the reasoning for each participant's decision. He then classified their reasoning into the stages of his theory of moral development.

Stages of Moral Development

Kohlberg's theory is broken down into three primary levels. At each level of moral development, there are two stages. Similar to how Piaget believed that not all people reach the highest levels of cognitive development, Kohlberg believed not everyone progresses to the highest stages of moral development.

Level 1. Preconventional Morality

Preconventional morality is the earliest period of moral development. It lasts until around the age of 9. At this age, children's decisions are primarily shaped by the expectations of adults and the consequences of breaking the rules. There are two stages within this level:

  • Stage 1 (Obedience and Punishment): The earliest stage of moral development, the obedience and punishment orientation is especially common in young children, but adults are also capable of expressing this type of reasoning. According to Kohlberg, people at this stage see rules as fixed and absolute. Obeying the rules is important because it is a way to avoid punishment.
  • Stage 2 (Individualism and Exchange): At the individualism and exchange stage of moral development, children account for individual points of view and judge actions based on how they serve individual needs. In the Heinz dilemma, children argued that the best course of action was the choice that best served Heinz's needs. Reciprocity is possible at this point in moral development, but only if it serves one's own interests.

Level 2. Conventional Morality

The next period of moral development is marked by the acceptance of social rules regarding what is good and moral. During this time, adolescents and adults internalize the moral standards they have learned from their role models and from society.

This period also focuses on the acceptance of authority and conforming to the norms of the group. There are two stages at this level of morality:

  • Stage 3 (Developing Good Interpersonal Relationships): Often referred to as the "good boy-good girl" orientation, this stage of moral development is focused on living up to social expectations and roles. There is an emphasis on conformity, being "nice," and considering how choices influence relationships.
  • Stage 4 (Maintaining Social Order): This stage is focused on ensuring that social order is maintained. At this stage of moral development, people begin to consider society as a whole when making judgments. The focus is on maintaining law and order by following the rules, doing one's duty, and respecting authority.

Level 3. Postconventional Morality

At this level of moral development, people develop an understanding of abstract principles of morality. The two stages at this level are:

  • Stage 5 (Social Contract and Individual Rights): At this stage, the ideas of a social contract and individual rights lead people to account for the differing values, opinions, and beliefs of other people. Rules of law are important for maintaining a society, but members of the society should agree upon these standards.
  • Stage 6 (Universal Principles): Kohlberg's final level of moral reasoning is based on universal ethical principles and abstract reasoning. At this stage, people follow these internalized principles of justice, even if they conflict with laws and rules.

Kohlberg believed that only a relatively small percentage of people ever reach the post-conventional stages (around 10 to 15%). One analysis found that while stages one to four could be seen as universal in populations throughout the world, the fifth and sixth stages were extremely rare in all populations.

Applications for Kohlberg's Theory

Understanding Kohlberg's theory of moral development is important in that it can help parents guide their children as they develop their moral character. Parents with younger children might work on rule obedience, for instance, whereas they might teach older children about social expectations.

Teachers and other educators can also apply Kohlberg's theory in the classroom, providing additional moral guidance. A kindergarten teacher could help enhance moral development by setting clear classroom rules and the consequences for violating them. This helps kids at stage one of moral development.

A teacher in high school might focus more on the development that occurs in stage three (developing good interpersonal relationships) and stage four (maintaining social order). This could be accomplished by having the students take part in setting the rules to be followed in the classroom, giving them a better idea of the reasoning behind these rules.

Criticisms of Kohlberg's Theory of Moral Development

Kohlberg's theory played an important role in the development of moral psychology. While the theory has been highly influential, aspects of the theory have been critiqued for a number of reasons:

  • Moral reasoning does not equal moral behavior: Kohlberg's theory is concerned with moral thinking, but there is a big difference between knowing what we ought to do and what we actually do. Moral reasoning, therefore, may not lead to moral behavior.
  • Overemphasizes justice: Critics have pointed out that Kohlberg's theory of moral development overemphasizes the concept of justice when making moral choices. Factors such as compassion, caring, and other interpersonal feelings may play an important part in moral reasoning.
  • Cultural bias: Individualist cultures emphasize personal rights, while collectivist cultures stress the importance of society and community. Eastern, collectivist cultures may have different moral outlooks that Kohlberg's theory does not take into account.
  • Age bias: Most of his subjects were children under the age of 16 who obviously had no experience with marriage. The Heinz dilemma may have been too abstract for these children to understand, and a scenario more applicable to their everyday concerns might have led to different results.
  • Gender bias: Kohlberg's critics, including Carol Gilligan, have suggested that Kohlberg's theory was gender-biased since all of the subjects in his sample were male. Kohlberg believed that women tended to remain at the third level of moral development because they place a stronger emphasis on things such as social relationships and the welfare of others.

Gilligan instead suggested that Kohlberg's theory overemphasizes concepts such as justice and does not adequately address moral reasoning founded on the principles and ethics of caring and concern for others.

Other Theories of Moral Development

Kohlberg isn't the only psychologist to theorize how we develop morally. There are several other theories of moral development.

Piaget's Theory of Moral Development

Kohlberg's theory is an expansion of Piaget's theory of moral development. Piaget described a three-stage process of moral development:

  • Stage 1: The child is more concerned with developing and mastering their motor and social skills, with no general concern about morality.
  • Stage 2: The child develops unconditional respect both for authority figures and the rules in existence.
  • Stage 3: The child starts to see rules as being arbitrary, also considering an actor's intentions when judging whether an act or behavior is moral or immoral.

Kohlberg expanded on this theory to include more stages in the process. Additionally, Kohlberg believed that the final stage is rarely achieved by individuals whereas Piaget's stages of moral development are common to all.

Moral Foundations Theory

Proposed by Jonathan Haidt, Craig Joseph, and Jesse Graham, moral foundations theory is based on three core principles of morality:

  • Intuition develops before strategic reasoning. Put another way, our reaction comes first, which is then followed by rationalization.
  • Morality involves more than harm and fairness. Contained within this second principle are a variety of considerations related to morality. It includes: care vs. harm, liberty vs. oppression, fairness vs. cheating, loyalty vs. betrayal, authority vs. subversion, and sanctity vs. degradation.
  • Morality can both bind groups and blind individuals. When people are part of a group, they will tend to adopt that group's value systems. They may also sacrifice their own morals for the group's benefit.

While Kohlberg's theory is primarily focused on justice, moral foundations theory encompasses several more dimensions of morality. However, this theory also fails to explain the "rules" people use when determining what is best for society.

Normative Theories of Moral Behavior

Several other theories exist that attempt to explain the development of morality, specifically in relation to social justice. Some fall into the category of transcendental institutionalism, which involves trying to create "perfect justice." Others are realization-focused, concentrating more on removing injustices.

One theory falling into the second category is social choice theory. Social choice theory is a collection of models that seek to explain how individuals can use their input (their preferences) to impact society as a whole. An example of this is voting, which allows the majority to decide what is "right" and "wrong."

While Kohlberg's theory of moral development has been criticized, the theory played an important role in the emergence of the field of moral psychology. Researchers continue to explore how moral reasoning develops and changes through life as well as the universality of these stages. Understanding these stages offers helpful insights into the ways that both children and adults make moral choices and how moral thinking may influence decisions and behaviors.

Sources

Lapsley D. Moral agency, identity and narrative in moral development. Hum Dev. 2010;53(2):87-97. doi:10.1159/000288210

Elorrieta-Grimalt M. A critical analysis of moral education according to Lawrence Kohlberg. Educación y Educadores. 2012;15(3):497-512. doi:10.5294/edu.2012.15.3.9

Govrin A. From ethics of care to psychology of care: Reconnecting ethics of care to contemporary moral psychology. Front Psychol. 2014;5:1135. doi:10.3389/fpsyg.2014.01135

American Psychological Association. Heinz dilemma.

American Psychological Association. Kohlberg's theory of moral development.

Kohlberg L. Essays on Moral Development. Harper & Row; 1985.

Ma HK. The moral development of the child: An integrated model. Front Public Health. 2013;1:57. doi:10.3389/fpubh.2013.00057

Gibbs J. Moral Development and Reality. 4th ed. Oxford University Press; 2019.

Gilligan C. In a Different Voice. Harvard University Press; 2016.

Patanella D. Piaget's theory of moral development. Encyclopedia of Child Behavior and Development. 2011. doi:10.1007/978-0-387-79061-9_2167

Dubas KM, Dubas SM, Mehta R. Theories of justice and moral behavior. J Legal Ethical Regulatory Issues. 2014;17(2):17-35.

Kohlberg’s Stages of Moral Development

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Key Takeaways

  • Lawrence Kohlberg formulated a theory asserting that individuals progress through six distinct stages of moral reasoning from infancy to adulthood.
  • He grouped these stages into three broad categories of moral reasoning: pre-conventional, conventional, and post-conventional. Each level is associated with increasingly complex stages of moral development.
  • Kohlberg suggested that people move through these stages in a fixed order and that moral understanding is linked to cognitive development.

Heinz Dilemma

Lawrence Kohlberg (1958) agreed with Piaget’s (1932) theory of moral development in principle but wanted to develop his ideas further.

He used Piaget’s storytelling technique to tell people stories involving moral dilemmas.  In each case, he presented a choice to be considered, for example, between the rights of some authority and the needs of some deserving individual unfairly treated.

After presenting people with various moral dilemmas, Kohlberg categorized their responses into different stages of moral reasoning.

Using children’s responses to a series of moral dilemmas, Kohlberg established that the reasoning behind the decision was a greater indication of moral development than the actual answer.

One of Kohlberg’s best-known stories (1958) concerns Heinz, who lived somewhere in Europe.

Heinz's wife was dying from a particular type of cancer. Doctors said a new drug might save her. The drug had been discovered by a local chemist, and Heinz tried desperately to buy some, but the chemist was charging ten times what it cost to make the drug, and this was much more than Heinz could afford. Heinz could only raise half the money, even after help from family and friends. He explained to the chemist that his wife was dying and asked if he could have the drug cheaper or pay the rest of the money later. The chemist refused, saying that he had discovered the drug and was going to make money from it. The husband was desperate to save his wife, so later that night he broke into the chemist's and stole the drug. Should Heinz have broken into the laboratory to steal the drug for his wife? Why or why not?

Kohlberg asked a series of questions such as:

  • Should Heinz have stolen the drug?
  • Would it change anything if Heinz did not love his wife?
  • What if the person dying was a stranger, would it make any difference?
  • Should the police arrest the chemist for murder if the woman dies?

By studying the answers from children of different ages to these questions, Kohlberg hoped to discover how moral reasoning changed as people grew older.

The sample comprised 72 Chicago boys aged 10–16 years, 58 of whom were followed up at three-yearly intervals for 20 years (Kohlberg, 1984).

Each boy was given a 2-hour interview based on the ten dilemmas. Kohlberg was interested not in whether the boys judged the action right or wrong but in the reasons for the decision. He found that these reasons tended to change as the children got older.

Kohlberg identified three levels of moral reasoning: preconventional, conventional, and postconventional. Each level has two sub-stages.

People can only pass through these levels in the order listed. Each new stage replaces the reasoning typical of the earlier stage. Not everyone achieves all the stages. 

Disequilibrium plays a crucial role in Kohlberg’s stages of moral development. A child encountering a moral issue may recognize limitations in their current reasoning approach, often prompted by exposure to others’ viewpoints. Improvements in perspective-taking are key to progressing through Kohlberg’s stages of moral development. As children mature, they increasingly understand issues from others’ viewpoints. For instance, a child at the preconventional level typically perceives an issue primarily in terms of personal consequences. In contrast, a child at the conventional level tends to consider the perspectives of others more substantially.

Level 1 – Preconventional Morality

Preconventional morality is the first level of moral development, lasting until approximately age 8. During this level, children accept the authority (and moral code) of others. 

Preconventional morality is when people follow rules because they don’t want to get in trouble or they want to get a reward. This level of morality is mostly based on what authority figures like parents or teachers tell you to do rather than what you think is right or wrong.

Authority is outside the individual, and children often make moral decisions based on the physical consequences of actions.

For example, if an action leads to punishment, it must be bad; if it leads to a reward, it must be good.

So, people at this level don’t have their own personal sense of right and wrong yet. They think that something is good if they get rewarded for it and bad if they get punished for it.

For example, if you get candy for behaving, you think you were good, but if you get a scolding for misbehaving, you think you were bad.

At the preconventional level, children don’t have a personal code of morality. Instead, moral decisions are shaped by the standards of adults and the consequences of following or breaking their rules.

Stage 1. Obedience and Punishment Orientation. The child/individual is good in order to avoid being punished. If a person is punished, they must have done wrong.
Stage 2. Individualism and Exchange. At this stage, children recognize that there is not just one right view handed down by the authorities. Different individuals have different viewpoints.

Level 2 – Conventional Morality

Conventional morality is the adolescent phase of moral development focused on societal norms and external expectations to discern right from wrong, often grounded in tradition, cultural practices, or established codes of conduct.

We internalize the moral standards of valued adult role models at the conventional level (most adolescents and adults).

Authority is internalized but not questioned, and reasoning is based on the norms of the group to which the person belongs.

A social system that stresses the responsibilities of relationships and social order is seen as desirable and must influence our view of right and wrong.

So, people who follow conventional morality believe that it’s important to follow society’s rules and expectations to maintain order and prevent problems.

For example, refusing to cheat on a test is a part of conventional morality because cheating can harm the academic system and create societal problems.

Stage 3. Good Interpersonal Relationships. The child/individual is good in order to be seen as a good person by others. Therefore, answers relate to the approval of others.
Stage 4. Law and Order Morality. The child/individual becomes aware of the wider rules of society, so judgments concern obeying the rules to uphold the law and avoid guilt.

Level 3 – Postconventional Morality

Postconventional morality is the third level of moral development and is characterized by an individual’s understanding of universal ethical principles.

Postconventional morality is when people decide based on what they think is right rather than just following the rules of society. This means that people at this level of morality have their own ethical principles and values and don’t just do what society tells them to do.

At this level, people think about what is fair, what is just, and what values are important.

What is considered morally acceptable in any given situation is determined by the response most in keeping with these principles.

They also think about how their choices might affect others and try to make good decisions for everyone, not just themselves.

Values are abstract and ill-defined but might include: the preservation of life at all costs and the importance of human dignity. Individual judgment is based on self-chosen principles, and moral reasoning is based on individual rights and justice.

According to Kohlberg, most people never progress beyond the conventional level of moral reasoning.

Only 10-15% of people are capable of the abstract thinking necessary for stage 5 or 6 (post-conventional morality). That is to say, most people take their moral views from those around them, and only a minority think through ethical principles for themselves.

Stage 5. Social Contract and Individual Rights. The child/individual becomes aware that while rules/laws might exist for the good of the greatest number, there are times when they will work against the interest of particular individuals. The issues are not always clear-cut. For example, in Heinz's dilemma, the protection of life is more important than breaking the law against stealing.
Stage 6. Universal Principles. People at this stage have developed their own set of moral guidelines, which may or may not fit the law. These principles apply to everyone, e.g., human rights, justice, and equality. The person will be prepared to act to defend these principles even if it means going against the rest of society in the process and having to pay the consequences of disapproval and/or imprisonment. Kohlberg doubted that many people ever reach this stage.

Problems with Kohlberg’s Methods

1. The dilemmas are artificial (i.e., they lack ecological validity)

Most dilemmas are unfamiliar to most people (Rosen, 1980). For example, it is all very well in the Heinz dilemma, asking subjects whether Heinz should steal the drug to save his wife.

However, Kohlberg's subjects were aged between 10 and 16. They had never been married and had never been placed in a situation remotely like the one in the story.

How should they know whether Heinz should steal the drug?

2. The sample is biased

Kohlberg’s (1969) theory suggested males more frequently progress beyond stage four in moral development, implying females lacked moral reasoning skills.

His research assistant, Carol Gilligan, disputed this, arguing that women's moral reasoning was different, not deficient.

She criticized Kohlberg's theory for focusing solely on upper-class white males, arguing that women place greater value on interpersonal connections. For instance, women often oppose theft in the Heinz dilemma because of its potential repercussions, such as Heinz being separated from his wife if he is imprisoned.

Gilligan (1982) conducted new studies interviewing both men and women, finding women more often emphasized care, relationships and context rather than abstract rules. Gilligan argued that Kohlberg’s theory overlooked this relational “different voice” in morality.

According to Gilligan (1977), because Kohlberg’s theory was based on an all-male sample, the stages reflect a male definition of morality (it’s androcentric).

Men’s morality is based on abstract principles of law and justice, while women’s is based on principles of compassion and care.

Further, the gender bias issue raised by Gilligan is a reminder of the significant gender debate still present in psychology, which, when ignored, can greatly impact the results obtained through psychological research.

3. The dilemmas are hypothetical (i.e., they are not real)

Kohlberg’s approach to studying moral reasoning relied heavily on his semi-structured moral judgment interview. Participants were presented with hypothetical moral dilemmas, and their justifications were analyzed to determine their stage of moral reasoning.

Some critiques of Kohlberg’s method are that it lacks ecological validity, removes reasoning from real-life contexts, and defines morality narrowly in terms of justice reasoning.

Many psychologists broadly accept Kohlberg's account of moral development, yet emphasize the difference between moral reasoning and moral behavior.

What we claim we’d do in a hypothetical situation often differs from our actions when faced with the actual circumstance. In essence, our actions might not align with our proclaimed values.

In a real situation, what course of action a person takes will have real consequences – and sometimes very unpleasant ones for themselves. Would subjects reason in the same way if they were placed in a real situation? We don’t know.

The fact that Kohlberg’s theory is heavily dependent on an individual’s response to an artificial dilemma questions the validity of the results obtained through this research.

People may respond very differently to real-life situations that they find themselves in than they do to an artificial dilemma presented to them in the comfort of a research environment.

4. Poor research design

How Kohlberg carried out his research when constructing this theory may not have been the best way to test whether all children follow the same sequence of stage progression.

His research was cross-sectional , meaning that he interviewed children of different ages to see their moral development level.

A better way to see if all children follow the same order through the stages would be to conduct longitudinal research on the same children.

However, longitudinal research on Kohlberg's theory has since been carried out by Colby et al. (1983), who tested 58 male participants from Kohlberg's original study.

They were tested six times over 27 years, and the results supported Kohlberg's original conclusion that we all pass through the stages of moral development in the same order.

Contemporary research employs more diverse methods beyond Kohlberg’s interview approach, such as narrative analysis, to study moral experience. These newer methods aim to understand moral reasoning and development within authentic contexts and experiences.
  • Tappan and colleagues (1996) promote a narrative approach that examines how individuals construct stories and identities around moral experiences. This draws from the sociocultural tradition of examining identity in context. Tappan argues narrative provides a more contextualized understanding of moral development.
  • Colby and Damon’s (1992) empirical research uses in-depth life story interviews to study moral exemplars – people dedicated to moral causes. Instead of hypothetical dilemmas, they ask participants to describe real moral challenges and commitments. Their goal is to respect exemplars as co-investigators of moral meaning-making.
  • Walker and Pitts’ (1995) studies use open-ended interviews asking people to discuss real-life moral dilemmas and reflect on the moral domain in their own words. This elicits more naturalistic conceptions of morality compared to Kohlberg’s abstract decontextualized approach.

Problems with Kohlberg’s Theory

1. Are there distinct stages of moral development?

Kohlberg claims there are, but the evidence does not always support this conclusion.

For example, a person who justified a decision based on principled reasoning in one situation (postconventional morality stage 5 or 6) would frequently fall back on conventional reasoning (stage 3 or 4) with another story.

In practice, it seems that reasoning about right and wrong depends more on the situation than on general rules. Moreover, individuals do not always progress through the stages, and Rest (1979) found that one in fourteen slipped backward.

The evidence for distinct stages of moral development looks very weak. Some would argue that behind the theory is a culturally biased belief in the superiority of American values over those of other cultures and societies.

Gilligan (1982) did not dismiss developmental psychology or morality. She acknowledged that children undergo moral development in stages and even praised Kohlberg’s stage logic as “brilliant” (Jorgensen, 2006, p. 186). However, she preferred Erikson’s model over the more rigid Piagetian stages.

While Gilligan supported Kohlberg’s stage theory as rational, she expressed discomfort with its structural descriptions that lacked context.

She also raised concerns about the theory’s universality, pointing out that it primarily reflected Western culture (Jorgensen, 2006, pp. 187-188).

Neo-Kohlbergian Schema Model

Rest and colleagues (1999) have developed a theoretical model building on but moving beyond Kohlberg's stage-based approach to moral development. Their model outlines four components of moral behavior: moral sensitivity, moral judgment, moral motivation, and moral character.

For the moral judgment component, Rest et al. propose that individuals use moral schemas rather than progress through discrete stages of moral reasoning.

Schemas are generalized knowledge structures that help us interpret information and situations. An individual can have multiple schemas available to make sense of moral issues, rather than being constrained to a single developmental stage.

Some examples of moral schemas proposed by Rest and colleagues include:

  • Personal Interest Schema – focused on individual interests and preferences
  • Maintaining Norms Schema – emphasizes following rules and norms
  • Postconventional Schema – considers moral ideals and principles

Rather than viewing development as movement to higher reasoning stages, the neo-Kohlbergian approach sees moral growth as acquiring additional, more complex moral schemas. Lower schemas are not replaced, but higher order moral schemas become available to complement existing ones.

The schema concept attempts to address critiques of the stage model, such as its rigidity and lack of context sensitivity. Using schemas allows for greater flexibility and integration of social factors into moral reasoning.

2. Does moral judgment match moral behavior?

Kohlberg never claimed that there would be a one-to-one correspondence between thinking and acting (what we say and what we do), but he does suggest that the two are linked.

However, Bee (1994) suggests that we also need to take into account:

  • habits that people have developed over time
  • whether people see situations as demanding their participation
  • the costs and benefits of behaving in a particular way
  • competing motives such as peer pressure, self-interest, and so on

Overall, Bee points out that moral behavior is only partly a question of moral reasoning. It also has to do with social factors.

3. Is justice the most fundamental moral principle?

This is Kohlberg’s view. However, Gilligan (1977) suggests that the principle of caring for others is equally important. Furthermore, Kohlberg claims that the moral reasoning of males has often been in advance of that of females.

Girls are often found to be at stage 3 in Kohlberg’s system (good boy-nice girl orientation), whereas boys are more often found to be at stage 4 (Law and Order orientation). Gilligan (p. 484) replies:

“The very traits that have traditionally defined the goodness of women, their care for and sensitivity to the needs of others, are those that mark them out as deficient in moral development”.

In other words, Gilligan claims that there is a sex bias in Kohlberg’s theory. He neglects the feminine voice of compassion, love, and non-violence, which is associated with the socialization of girls.

Gilligan concluded that Kohlberg’s theory did not account for the fact that women approach moral problems from an ‘ethics of care’, rather than an ‘ethics of justice’ perspective, which challenges some of the fundamental assumptions of Kohlberg’s theory.

In contrast to Kohlberg’s impersonal “ethics of justice”, Gilligan proposed an alternative “ethics of care” grounded in compassion and responsiveness to needs within relationships (Gilligan, 1982).

Her care perspective highlights emotion, empathy and understanding over detached logic. Gilligan saw care and justice ethics as complementary moral orientations.

Walker et al. (1995) found everyday moral conflicts often revolve around relationships rather than justice; individuals describe relying more on intuition than moral reasoning in dilemmas. This raises questions about the centrality of reasoning in moral functioning.

4. Do people make rational moral decisions?

Kohlberg's theory emphasizes rationality and logical decision-making at the expense of emotional and contextual factors in moral decision-making.

One significant criticism is that Kohlberg’s emphasis on reason can create an image of the moral person as cold and detached from real-life situations. 

Carol Gilligan critiqued Kohlberg’s theory as overly rationalistic and not accounting for care-based morality commonly found in women. She argued for a “different voice” grounded in relationships and responsiveness to particular individuals.

The criticism suggests that by portraying moral reasoning as primarily cognitive and detached from emotional and situational factors, Kohlberg’s theory oversimplifies real-life moral decision-making, which often involves emotions, social dynamics, cultural nuances, and practical constraints.

Critics contend that his model does not adequately capture the multifaceted nature of morality in the complexities of everyday life.

References

Bee, H. L. (1994). Lifespan development. HarperCollins College Publishers.

Blum, L. A. (1988). Gilligan and Kohlberg: Implications for moral theory. Ethics, 98(3), 472-491.

Colby, A., Kohlberg, L., Gibbs, J., & Lieberman, M. (1983). A longitudinal study of moral judgment. Monographs of the Society for Research in Child Development, 48(1-2, Serial No. 200). Chicago: University of Chicago Press.

Day, J. M., & Tappan, M. B. (1996). The narrative approach to moral development: From the epistemic subject to dialogical selves. Human Development, 39(2), 67-82.

Gilligan, C. (1977). In a different voice: Women's conceptions of self and of morality. Harvard Educational Review, 47(4), 481-517.

Gilligan, C. (1982). In a different voice. Harvard University Press.

Gilligan, C. (1995). Hearing the difference: Theorizing connection. Hypatia, 10(2), 120-127.

Jorgensen, G. (2006). Kohlberg and Gilligan: Duet or duel? Journal of Moral Education, 35(2), 179-196.

Kohlberg, L. (1958). The development of modes of thinking and choices in years 10 to 16. Ph.D. dissertation, University of Chicago.

Kohlberg, L. (1984). The psychology of moral development: The nature and validity of moral stages (Essays on Moral Development, Volume 2). Harper & Row.

Piaget, J. (1932). The moral judgment of the child. London: Kegan Paul, Trench, Trubner & Co.

Rest, J. R. (1979). Development in judging moral issues. University of Minnesota Press.

Rosen, B. (1980). Moral dilemmas and their treatment. In B. Munsey (Ed.), Moral development, moral education, and Kohlberg (pp. 232-263). Birmingham, Alabama: Religious Education Press.

Walker, L. J., Pitts, R. C., Hennig, K. H., & Matsuba, M. K. (1995). Reasoning about morality and real-life moral problems.

Further Information

  • BBC Radio 4: The Heinz Dilemma
  • The Science of Morality
  • Piaget’s Theory of Moral Development

What is an example of moral development theory in real life?

An example is a student who witnesses cheating on an important exam. The student is faced with the dilemma of whether to report the cheating or keep quiet.

A person at the pre-conventional level of moral development might choose not to report cheating because they fear the consequences or because they believe that everyone cheats.

A person at the conventional level might report cheating because they believe it is their duty to uphold the rules and maintain fairness in the academic environment.

A person at the post-conventional level might weigh the ethical implications of both options and make a decision based on their principles and values, such as honesty, fairness, and integrity, even if it may come with negative consequences.

This example demonstrates how moral development theory can help us understand how individuals reason about ethical dilemmas and make decisions based on their moral reasoning.

What are the examples of stage 6 universal principles?

Stage 6 of Kohlberg’s moral development theory, also known as the Universal Ethical Principles stage, involves moral reasoning based on self-chosen ethical principles that are comprehensive and consistent. Examples might include:

Equal human rights: Someone at this stage would believe in the fundamental right of all individuals to life, liberty, and fair treatment. They would advocate for and act according to these rights, even if it meant opposing laws or societal norms.

Justice for all: A person at this stage believes in justice for all individuals and would strive to ensure fairness in all situations. For example, they might campaign against a law they believe to be unjust, even if it is widely accepted by society.

Non-violence: A commitment to non-violence could be a universal principle for some at this stage. For instance, they might choose peaceful protest or civil disobedience in the face of unjust laws or societal practices.

Social contract: People at this stage might also strongly believe in the social contract, wherein individuals willingly sacrifice some freedoms for societal benefits. However, they also understand that these societal norms can be challenged and changed if they infringe upon the universal rights of individuals.

Respect for human dignity and worth: Individuals at this stage view each person as possessing inherent value, and this belief guides their actions and judgments. They uphold the dignity and worth of every individual, regardless of social status or circumstance.

What is the Kohlberg’s Heinz dilemma?

The Heinz dilemma is a moral question proposed by Kohlberg in his studies on moral development. It involves a man named Heinz who considers stealing a drug he cannot afford to save his dying wife, prompting discussion on the moral implications and justifications of his potential actions.

Ethics and Morality

Reviewed by Psychology Today Staff

To put it simply, ethics represents the moral code that guides a person’s choices and behaviors throughout their life. The idea of a moral code extends beyond the individual to include what is determined to be right, and wrong, for a community or society at large.

Ethics is concerned with rights, responsibilities, use of language, what it means to live an ethical life, and how people make moral decisions. We may think of moralizing as an intellectual exercise, but more frequently it's an attempt to make sense of our gut instincts and reactions. It's a subjective concept, and many people have strong and stubborn beliefs about what's right and wrong that can place them in direct contrast to the moral beliefs of others. Yet even though morals may vary from person to person, religion to religion, and culture to culture, many have been found to be universal, stemming from basic human emotions.

Those who are considered morally good are said to be virtuous, holding themselves to high ethical standards, while those viewed as morally bad are thought of as wicked, sinful, or even criminal. Morality was a key concern of Aristotle, who first studied questions such as “What is moral responsibility?” and “What does it take for a human being to be virtuous?”

We used to think that people are born with a blank slate, but research has shown that people have an innate sense of morality. Of course, parents and the greater society can certainly nurture and develop morality and ethics in children.

Humans are ethical and moral regardless of religion and God. People are not fundamentally good nor are they fundamentally evil. However, a Pew study found that atheists are much less likely than theists to believe that there are "absolute standards of right and wrong." In effect, atheism does not undermine morality, but the atheist’s conception of morality may depart from that of the traditional theist.

Animals are like humans—and humans are animals, after all. Many studies have been conducted across animal species, and more than 90 percent of their behavior is what can be identified as “prosocial” or positive. Plus, you won’t find mass warfare in animals as you do in humans. Hence, in a way, you can say that animals are more moral than humans.

The examination of moral psychology involves the study of moral philosophy, but the field is more concerned with how a person comes to make a right or wrong decision, rather than what sort of decisions he or she should have made. Character, reasoning, responsibility, and altruism, among other areas, also come into play, as does the development of morality.

The seven deadly sins were first enumerated in the sixth century by Pope Gregory I and represent the sweep of immoral behavior. Also known as the cardinal sins or seven deadly vices, they are vanity, jealousy, anger, laziness, greed, gluttony, and lust. People who demonstrate these immoral behaviors are often said to be flawed in character. Some modern thinkers suggest that virtue often disguises a hidden vice; it just depends on where we tip the scale.

An amoral person has no sense of, or care for, what is right or wrong. There is no regard for either morality or immorality. Conversely, an immoral person knows the difference, yet he does the wrong thing, regardless. The amoral politician, for example, has no conscience and makes choices based on his own personal needs; he is oblivious to whether his actions are right or wrong.

One could argue that the actions of Wells Fargo, for example, were amoral if the bank had no sense of right or wrong. In the 2016 fraud scandal, the bank created fraudulent savings and checking accounts for millions of clients, unbeknownst to them. Of course, if the bank knew what it was doing all along, then the scandal would be labeled immoral.

Everyone tells white lies to a degree, and often the lie is done for the greater good. But the idea that a small percentage of people tell the lion's share of lies reflects the Pareto principle, the law of the vital few: roughly 20 percent of the population accounts for 80 percent of a given behavior.

We do know right from wrong. If you harm and injure another person, that is wrong. However, what is right for one person may well be wrong for another. A good example of this dichotomy is the religious conservative who thinks that a woman's right to her body is morally wrong. In this case, one's ethics are based on one's values; and the moral divide between values can be vast.

Psychologist Lawrence Kohlberg established his stages of moral development in 1958. This framework has led to current research into moral psychology. Kohlberg's work addresses the process of how we think of right and wrong and is based on Jean Piaget's theory of moral judgment for children. His stages are grouped into pre-conventional, conventional, and post-conventional levels, and what we learn in one stage is integrated into the subsequent stages.

The pre-conventional stage is driven by obedience and punishment. This is a child's view of what is right or wrong. Examples of this thinking: "I hit my brother and I received a time-out." "How can I avoid punishment?" "What's in it for me?"

The conventional stage is when we accept societal views on rights and wrongs. In this stage, people follow rules with a "good boy" and "nice girl" orientation. An example of this thinking: "Do it for me." This stage also includes law-and-order morality: "Do your duty."

The post-conventional stage is more abstract: “Your right and wrong is not my right and wrong.” This stage goes beyond social norms and an individual develops his own moral compass, sticking to personal principles of what is ethical or not.

Essays on Moral Development, by Lawrence Kohlberg

Essays on Moral Development, Volume One: The Philosophy of Moral Development. By Lawrence Kohlberg. Harper & Row. 441 pp. $21.95.

Lawrence Kohlberg is a Harvard psychologist who has been insisting for two decades that the study of children’s moral reasoning can guide society in distinguishing right from wrong. His work has been influential—it has supplied much of the impetus behind “moral education” courses that are appearing even in elementary schools. The present collection of essays is concerned with the moral and pedagogical consequences Kohlberg draws from his empirical findings about children, from cross-cultural studies, and from “longitudinal” studies of given subjects at different ages.

Kohlberg discerns six “stages of moral development.” The first four are uncontroversial, extending from the child’s obedience out of fear of punishment to the “my station and its duties” mentality attributed to J. Edgar Hoover. Stage 5, the “official morality of the U.S. Constitution,” recognizes obligations based on contract, plus basic rights like life and liberty. Stage 6—to which this book is a sustained hosannah—adds “justice,” interpreted as “rationally demonstrable universal ethical principles” based on “respect for the dignity of human beings as individuals.”

What distinguishes stage 6 from stage 5 is, in effect, the willingness to disobey laws that conflict with these principles. Kohlberg estimates the number of stage 6’s to be 5 percent of the American population, but his only sustained example of a 6 is Martin Luther King, Jr. Socrates sometimes rates a 6, but is elsewhere demoted to a “5B,” apparently for taking the laws of Athens too seriously. (Kohlberg repeatedly compares King with Socrates as a “moral teacher” executed by the society he made uncomfortable, as if James Earl Ray were a legally appointed executioner.) Lincoln and Gandhi are accorded 6’s in passing.

_____________

What makes a later stage a higher stage? Part of Kohlberg’s answer is the irreversibility of the sequence of stages: while most people become “fixed” at a stage lower than 6, no one ever retreats from a later stage to an earlier one. Ultimately, however, Kohlberg equates later with better because, he says, each stage resolves conflicts that remain unresolved at earlier stages. Thus, Kohlberg reports that his stage-5 respondents disagreed among themselves about whether a man may steal an expensive drug to save his wife’s life, whereas his stage-6 respondents unanimously approved of stealing the drug. Stage 6 is hence the summit of morality because it is the most “formally adequate,” “integrated” level of morality. Not only does it address every moral dilemma, but all who reach it will agree in their answers.

Kohlberg defends this patent absurdity—Socrates, King, Lincoln, and Gandhi would hardly have seen eye-to-eye about, say, homosexuality—by referring to John Rawls’s A Theory of Justice , “the newest great book of the liberal tradition,” which “systematically justifies” stage 6. In resting his own case on Rawls’s, Kohlberg is virtually asking the non-philosophical reader to accept his claims about stage 6 on faith. Still, the basic outlines of Kohlberg’s position are clear.

According to Rawls, when you truly apply the Golden Rule to a problem, you are not distracted by your own preferences or the natural human tendency to put your own interests first. The principles you come up with will be genuinely fair, or just, principles. Rawls’s basic idea is to devise a model situation in which people are really thinking along golden-rule lines. He has us picture rational egoists who have temporarily forgotten their actual places in society. In deliberating about principles that will govern their society, such self-regarding amnesiacs would imagine a principle’s impact on people of every status, and so not slight any person or position, however humble. And Rawls adds an extra twist: his egoists pay most heed to how the worst off will fare, since (for reasons Rawls never quite clarifies) each is obsessively afraid that he will turn out to be the worst off when the “veil of ignorance” lifts.

Kohlberg illustrates the supposedly computer-like operation of this “method of musical chairs” with the issue of capital punishment. Rawls’s model people would reject it, he says, because, while each recognizes the deterrent advantages of capital punishment, each thinks, “what if I were a murderer?” Each then realizes that the murderer would not want to be executed, and hence renounces capital punishment. Lest the reader accuse me of imputing to Kohlberg a position too preposterous for anyone to maintain, here are his own words: If we “assess the death penalty from the point of view of someone who takes into account the possibility of being a capital offender himself [we see that] the capital offender, obviously, would claim that he should be allowed to remain alive. . . . In short, at stage 6 the rational capital offender’s claim to life would be given priority over the claim of maximal protection from crime asserted by the representative ordinary citizen.”

Something has gone wrong. Kohlberg’s magical argument against capital punishment really works against any punishment; presumably he would repudiate parking tickets for according double-parkers insufficient respect. Kohlberg has apparently confused what one would want in a difficult situation with what one would claim he should be allowed to have. Were I a murderer in the electric chair I would hope for a pardon, a power failure, or anything else that would save me, but I would hardly suppose I had a “rational claim” to a right to live that offset the claims of innocents saved by my execution.

This confusion between what people would be willing to do and what they would claim a right to do skews Kohlberg’s understanding of the drug-stealing case, which he sees as a collision between “capitalist morality” and the “sacredness of life.” While it is true that I would stick at almost nothing to save my wife’s life, I would never claim a right on my part or my wife’s to do what I would do. Nor would I do those things to save a stranger, even though, on Kohlberg’s view, the issue involves a generalized right to life the stranger shares with my wife. (I think my attitude makes me a 3.)

Actually, far from resolving every hard problem, “equal respect under universal principles of justice” is an empty truism. Should Churchill warn Coventry about the planned Nazi bombing or remain silent to protect the secret that the British had cracked the Enigma code? Can British counterespionage frame an honorable U-boat captain to damage German morale? Any choice dooms someone, and avoiding the problem (“I don’t want anybody’s blood on my hands”) amounts to choosing to spare the captain and risk extra Allied lives. Whatever the solutions to such dilemmas, the incantation of “equal respect for everyone” will not reveal them.

Indeed, it quickly becomes clear that Kohlberg is just making up stage 6 as he goes along. He scales the peak of arbitrariness when he counsels a stage-6 wife dying of cancer to concur in her own mercy killing: “If the wife puts herself in the husband’s place, the grief she anticipates about her own death is more than matched by the grief a husband should feel at her pain.” Kohlberg does not disclose how to determine the pain the wife will feel, the pain the husband “should” feel, or, indeed, what has become of the “sacredness of life.”

In fact, there is no stage 6. Kohlberg fudges this by combining stages 5 and 6 in his statistics. Astonishingly, he admits in a candid paragraph that

our empirical findings do not clearly delineate a sixth stage. . . . None of our longitudinal subjects have reached the highest stage. Our examples of stage 6 come either from historical figures [conveniently unavailable for answering questionnaires] or from interviews with people who have extensive philosophic training. . . . Stage 6 is perhaps less a statement of an attained psychological reality than the specification of a direction in which, our theory claims, ethical development is moving.

This trumpery shows Kohlberg’s program of “moral education” for the instrument of propaganda it really is. Kohlberg’s proposal begins modestly enough, with Dewey’s insight that children learn best when challenged by problems that strain their current concepts. To this Kohlberg adds Piaget’s discovery that certain key concepts are learned only in a definite order of maturation. What results is a general educational strategy of helping children through natural cognitive stages by posing stimulating problems. Kohlberg now applies this to morals: since a child is disposed to pass through the levels of morality anyway, the teacher should boost him along with provocative tales about theft and murder.

Kohlberg dismisses the idea that schools, especially public schools, should leave ethics to others with the admonition that a “hidden moral curriculum”—of conformity—always lurks behind official postures of neutrality. But Kohlberg’s own pedagogy is anything but the Socratic midwife to a child’s autonomy. Those tales of mercy killings and the like, a “hidden moral curriculum” if there ever was one, are designed to push children along a specific policy agenda that has nothing to do with any natural bents, let alone with “rationally demonstrable universal ethical principles.”

Beneath the platitudes and the jargon, Kohlberg’s morality comes to a specious egalitarianism. It is hard to believe Kohlberg really thinks that any desire, however base or outrageous, deserves as much “respect”—i.e., satisfaction—as any other. But whatever “stage-6 morality” is, it is not synonymous with respect for persons as understood in the Kantian moral tradition Kohlberg claims to be following. Kantian respect means allowing each person to choose his actions freely and to accept the consequences of his choices. Such respect has nothing to do with satisfying the desires of the autonomous beings who are said to deserve it.

After interviewing a captured Nazi, the hero of Nicholas Monsarrat’s autobiographical novel The Cruel Sea thinks to himself, “These people are not curable. We’ll just have to shoot them and hope for a better crop next time.” Hardly stage-6 thinking—which is why today I am alive to write this and you to read it.

99 Moral Development Essay Topic Ideas & Examples

  • School Bullying and Moral Development The middle childhood is marked by the development of basic literacy skills and understanding of other people’s behavior that would be crucial in creating effective later social cognitions. Therefore, addressing bullying in schools requires strategies […]
  • Kohlberg’s Stages of Moral Development in Justice System Burglars, whose predominant level of morality is conventional, tend to consider the opinion of the society on their actions. Kohlberg’s stages of moral development help to identify the problems and find solutions to them.
  • Kohlberg’s Moral Development Concept This is continuous because, in every stage of the moral development, the moral reasoning changes to become increasingly complex over the years.
  • Moral Development in Early Childhood The only point to be poorly addressed in this discussion is the options for assessing values in young children and the worth of this task.
  • The Moral Development of Children Child development Rev 2000; 71: 1033 1048.’ moral development/moral reasoning which is an important aspect of cognitive development of children has been studied very thoroughly with evidence-based explanations from the work of many psychologists based […]
  • Pulp Fiction: Moral Development of American Life and Interests Quentin Tarantino introduces his Pulp Fiction by means of several scenes which have a certain sequence: proper enlightenment, strong and certain camera movements and shots, focus on some details and complete ignorance of the others, […]
  • An Evaluation of Kohlberg’s Theory of Moral Development and How It Could Be Applied to Grade School It is the purpose of this essay to summarize Kohlberg’s theory, and thereafter analyze how the theory can be applied to grade a school.
  • Moral Development and Bullying in Children The understanding of moral development following the theories of Kohlberg and Gilligan can provide useful solutions to eliminating bullying in American schools.
  • Moral Development and Its Relation to Psychology These stages reveal the individual’s moral orientation expanding his/her experiences and perceptions of the world with regard to the cognitive development of a person admitting this expansion. The views of Piaget and Kohlberg differ in […]
  • Kohlberg’s Theory of Moral Development Dilemma According to Kohlberg, justice is the driver of the process of moral development. Therefore, the early Christians should have continued to practice Christianity regardless of the persecution.
  • Moral Development: Emotion and Moral Behavior More moral emotion is guilt as compared to shame because those who are shamed are relatively unlikely to rectify as compared to the guilty people.
  • Adolescent Moral Development in the United States Adolescents who are in this stage begin to acknowledge and understand the beliefs embraced in their societies. The absence of a moral compass can make it hard for adolescents in this country to realize their […]
  • Moral Development Theory Review by Kohlberg and Hersh Overall, the main strength of this article is that the authors present a comprehensive overview of theories that can throw light on the moral development of a person.
  • Moral Development and Aggression The reason is that children conclude about the acceptability of aggressive or violent behaviors with reference to what they see and hear in their family and community.
  • Moral Development: Kohlberg’s Dilemmas Another characteristic of this stage of moral speculation is that the speculators mostly view the dilemma through the lens of consequences it might result in and engage them in a direct or indirect manner.
  • Chinese Foundations for Moral Education and Character Development In Chinese Foundations for Moral Education and Character Development, Vincent Shen and his team make a wonderful attempt to describe how rich and captivating Chinese cultural heritage may be, how considerable knowledge for this country […]
  • Cognitive, Psychosocial, Psychosexual and Moral Development This, he goes ahead to explain that it is at this very stage that children learn to be self sufficient in terms of taking themselves to the bathroom, feeding and even walking.
  • Moral Development and Ethical Concepts The two concepts are important in the promotion of ethical culture within the organizations, the organizations’ performance and the much needed moral and financial support from the organization’s stakeholders and the public in general.
  • Empathy and Moral Development For a manager to have empathy, he/she has to be able to interact freely with the employees, and spend time with them at their work places. This makes the employees to know that what they […]
  • Cognitive or Moral Development This is the second of the four Piagetian stages of development and the children begin to make use of words, pictures and diagrams to represent their sentiments.
  • Moral Intelligence Development In the course of his day-to-day banking activities, I realized that the general manager used to work in line with the banking rules and regulations to the letter.
  • The Impact Of Television On The Moral Development
  • Influences in Moral Development
  • The Influence of Parenting in the Moral Development of a Child
  • The Effect of Cognitive Moral Development on Honesty in Managerial Reporting
  • Huckleberry Finn Moral Development & Changes
  • Responsibility For Moral Development In Children
  • Morality and Responsibility – Moral Development in Mary Shelley’s Frankenstein
  • Moral Development And Gender Care Theories
  • Jean Piaget and Lawrence Kohlberg on Moral Development
  • Kaylee Georgeoff’s Moral Development According To Lawrence Kohlberg
  • The Criticisms Of Kohlberg’s Moral Development Stages
  • The Ethics Of The Organization ‘s Moral Development
  • Lawrence Kohlbergs Stages Of Moral Development
  • Personal, Psychosocial, And Moral Development Theories
  • Moral Development and Importance of Moral Reasoning
  • Integrating Care and Justice: Moral Development
  • Moral Development in Youth Sport
  • Kohlberg’s Theory on Moral Development: New Field of Study in Western Science
  • The Definition of Ethics and the Foundation of Moral Development
  • Kohlberg’s Philosophy of Moral Development
  • Stealing and Moral Reasoning: Kohlberg’s Stages of Moral Development
  • Plagiarism and Moral Development
  • Kohlberg and Moral Development Between the Ages of One and Six
  • Kohlberg’s 6 Stages of Cognitive Moral Development and Model Suggestions
  • Moral Development and Narcissism of Private and Public University Business Students
  • The Effect of the Transcendental Meditation TM Technique on Moral Development
  • Psychology Stages of Moral Development
  • The Link Between Friendship and Moral Development
  • Moral Development And Gender Related Reasoning Styles
  • Moral Development in the Adventures of Huckleberry Finn by Mark Twain
  • Moral Development : The Way Someone Thinks, Feels, And Behaves
  • The Effect of Nuclear and Joint Family Systems on the Moral Development: A Gender Based Analysis
  • Moral Development and Dilemmas of Huck in The Adventures of Huckleberry Finn by Mark Twain
  • Teaching Moral Development To School Children In The Caribbean
  • History and Moral Development of Mental Health Treatment and Involuntary Commitment
  • Incorporating Kohlberg’s Stages of Moral Development into the Justice System
  • Moral Development Theory in Boys and Girls: Kohlberg and Gilligan
  • The Different Levels in Moral Development
  • Moral Development Of Six-Year-Old Children
  • Portrait of Erik Erikson’s Developmental Theory and Kohlberg’s Model of Moral Development
  • Moral Development Of Jem And Scout In To Kill A Mockingbird
  • Moral Development and Aggression in Children
  • The Idea Of Moral Development In The Novel Adventures of Huckleberry Finn By Mark Twain
  • The Influence of Media Technology on the Moral Development and Self-Concept of Youth
  • The Character of Tituba in Lawrence Kohlberg’s Different Stages of Moral Development
  • Multiple Intelligences, Metacognition And Moral Development
  • Moral Development : The Foundation Of Ethical Behavior
  • Can Moral Development Lead To Upward Influence Behavior?
  • What Are the Five Stages of Moral Development?
  • What Is an Example of Moral Development?
  • What Is Moral Development, and Why Is It Important?
  • What Are the Three Levels of Moral Development?
  • What Are the Six Stages of Kohlberg’s Theory of Moral Development?
  • What Is Moral Development in a Child?
  • What Is Moral Development, According to Kohlberg?
  • How Many Levels of Moral Development Are There?
  • Why Is Moral Development Significant in Early Childhood?
  • What Factors Play Into Moral Development?
  • What Is Moral Development in Adolescence?
  • What Are the Characteristics of Moral Development?
  • Why Is Kohlberg’s Theory of Moral Development Critical?
  • What Characteristics Are Essential for Healthy Moral Development?
  • How Do Parents Affect a Child’s Moral Development?
  • What Is the Most Important Influence on a Child’s Moral Development?
  • What Is the Role of the Teacher in Moral Development?
  • Why Is Moral Development Significant?
  • What Is Meant by Moral Development?
  • Why Is Research on Moral Development Necessary?
  • What Is the Study of Moral Development?
  • What Factors Affect Moral Development?
  • Which of the Following Researchers Studied Moral Development?
  • How Did Kohlberg Research Moral Development?
  • What Is Carol Gilligan’s Theory of Moral Development?
  • How Did Piaget Study Moral Development?
  • What Was Gilligan’s Main Criticism of Kohlberg’s Theory of Moral Development?
  • What Is the Difference Between Kohlberg’s Theory of Moral Development and Gilligan’s Theory of Moral Evolution and Gender?
  • Why Do Different Scholars Criticize Kohlberg’s Theory of Moral Development?

Moral Theory

There is much disagreement about what, exactly, constitutes a moral theory. Some of that disagreement centers on the issue of demarcating the moral from other areas of practical normativity, such as the ethical and the aesthetic. Some disagreement centers on the issue of what a moral theory’s aims and functions are. In this entry, both questions will be addressed. However, this entry is about moral theories as theories, and is not a survey of specific theories, though specific theories will be used as examples.

  • 1. Morality
  • 1.1 Common-Sense Morality
  • 1.2 Contrasts Between Morality and Other Normative Domains
  • 2. Theory and Theoretical Virtues
  • 2.1 The Tasks of Moral Theory
  • 2.2 Theory Construction
  • 3. Criteria
  • 4. Decision Procedures and Practical Deliberation
  • Other Internet Resources
  • Related Entries

1. Morality

When philosophers engage in moral theorizing, what is it that they are doing? Very broadly, they are attempting to provide a systematic account of morality. Thus, the object of moral theorizing is morality, and, further, morality as a normative system.

At the most minimal, morality is a set of norms and principles that govern our actions with respect to each other and which are taken to have a special kind of weight or authority (Strawson 1961). More fundamentally, we can also think of morality as consisting of moral reasons, either grounded in some more basic value, or, the other way around, grounding value (Raz 1999).

It is common, also, to hold that moral norms are universal in the sense that they apply to and bind everyone in similar circumstances. The principles expressing these norms are also thought to be general, rather than specific, in that they are formulable “without the use of what would be intuitively recognized as proper names, or rigged definite descriptions” (Rawls 1979, 131). They are also commonly held to be impartial, in holding everyone to count equally.

1.1 Common-Sense Morality

… Common-sense is… an exercise of the judgment unaided by any Art or system of rules : such an exercise as we must necessarily employ in numberless cases of daily occurrence ; in which, having no established principles to guide us … we must needs act on the best extemporaneous conjectures we can form. He who is eminently skillful in doing this, is said to possess a superior degree of Common-Sense. (Richard Whately, Elements of Logic, 1851, xi–xii)

“Common-Sense Morality”, as the term is used here, refers to our pre-theoretic set of moral judgments or intuitions or principles. When we engage in theory construction (see below), it is these common-sense intuitions that provide a touchstone for theory evaluation. Henry Sidgwick believed that the principles of Common-Sense Morality were important in helping us understand the “first” principle or principles of morality. Indeed, some theory construction explicitly appeals to puzzles in common-sense morality that need resolution – and hence, need to be addressed theoretically.

Features of common-sense morality are determined by our normal reactions to cases, which in turn suggest certain normative principles or insights. For example, one feature of common-sense morality that is often remarked upon is the self/other asymmetry in morality, which manifests itself in a variety of ways in our intuitive reactions. Many intuitively differentiate morality from prudence, for instance, in holding that morality concerns our interactions with others, whereas prudence is concerned with the well-being of the individual, from that individual’s point of view.

Also, according to our common-sense intuitions we are allowed to pursue our own important projects even if such pursuit is not “optimific” from the impartial point of view (Slote 1985). It is also considered permissible, and even admirable, for an agent to sacrifice her own good for the sake of another even though that is not optimific. However, it is impermissible, and outrageous, for an agent to similarly sacrifice the well-being of another under the same circumstances. Samuel Scheffler argued for a view in which consequentialism is altered to include agent-centered prerogatives, that is, prerogatives to not act so as to maximize the good (Scheffler 1982).

Our reactions to certain cases also seem to indicate a common-sense commitment to the moral significance of the distinction between intention and foresight, doing versus allowing, as well as the view that distance between agent and patient is morally relevant (Kamm 2007).

Philosophers writing in empirical moral psychology have been working to identify other features of common-sense morality, such as how prior moral evaluations influence how we attribute moral responsibility for actions (Alicke et al. 2011; Knobe 2003).

What many ethicists agree upon is that common-sense is a bit of a mess. It is fairly easy to set up inconsistencies and tensions between common-sense commitments. The famous Trolley Problem thought experiments illustrate how situations which are structurally similar can elicit very different intuitions about what the morally right course of action would be (Foot 1975). We intuitively believe that it is worse to kill someone than to simply let the person die. And, indeed, we believe it is wrong to kill one person to save five others in the following scenario:

David is a great transplant surgeon. Five of his patients need new parts—one needs a heart, the others need, respectively, liver, stomach, spleen, and spinal cord—but all are of the same, relatively rare, blood-type. By chance, David learns of a healthy specimen with that very blood-type. David can take the healthy specimen's parts, killing him, and install them in his patients, saving them. Or he can refrain from taking the healthy specimen's parts, letting his patients die. (Thomson 1976, 206)

And yet, in the following scenario we intuitively view it entirely permissible, and possibly even obligatory, to kill one to save five:

Edward is the driver of a trolley, whose brakes have just failed. On the track ahead of him are five people; the banks are so steep that they will not be able to get off the track in time. The track has a spur leading off to the right, and Edward can turn the trolley onto it. Unfortunately there is one person on the right-hand track. Edward can turn the trolley, killing the one; or he can refrain from turning the trolley, killing the five. (Thomson 1976, 206).

Theorizing is supposed to help resolve those tensions in a principled way. Theory construction attempts to provide guidance in how to resolve such tensions and how to understand them.

1.2 Contrasts Between Morality and Other Normative Domains

1.2.1 Morality and Ethics

Ethics is generally understood to be the study of “living well as a human being”. This is the topic of works such as Aristotle’s Nicomachean Ethics , in which the aim of human beings is to exemplify human excellence of character. The sense in which we understand it here is that ethics is broader than morality, and includes considerations of personal development of oneself and loved ones. This personal development is important to a life well lived, intuitively, since our very identities are centered on projects that we find important. Bernard Williams and others refer to these projects as “ground projects”. These are the sources of many of our reasons for acting. For Williams, if an agent seeks to adopt moral considerations, or be guided by them, then important ethical considerations are neglected, such as personal integrity and authenticity (Williams 1977; Wolf 1982). However, Williams has a very narrow view of what he famously termed “the morality system” (Williams 1985).

Williams lists a variety of objectionable features of the morality system, including the inescapability of moral obligations, the overridingness of moral obligation, impartiality, and the fact that in the morality system there is a push towards generalization.

There has been considerable discussion of each of these features of the morality system, and since Williams, a great deal of work on the part of standard moral theorists on how each theory addresses the considerations he raised. Williams’ critique of the morality system was part of a general criticism of moral theory in the 1980s on the grounds of its uselessness, harmfulness, and even its impossibility (Clarke 1987). This anti-theory trend was prompted by the same dissatisfaction with consequentialism and deontology that led to the resurgence of Virtue Ethics.

A major criticism of this line of attack is that it rests on a very narrow view of what counts as a moral theory. Thus, some of these approaches simply rejected some features of Williams’ characterization of the morality system, such as impartiality. Others, however, Williams himself included, attacked the very project of moral theory. This is the ‘anti-theory’ attack on moral theorizing. For example, Annette Baier argued that morality cannot be captured in a system of rules, and this was a very popular theme amongst early virtue ethicists. On this view, moral theory which systematizes and states the moral principles that ought to guide actions is simply impossible: “Norms in the form of virtues may be essentially imprecise in some crucial ways, may be mutually referential, but not hierarchically orderable, may be essentially self-referential” (Baier 220).

Robert Louden even argued that the best construal of virtue ethics is not as an ethical theory, but as anti-theory that should not be evaluated as attempting to theorize morality at all (Louden 1990). According to Louden, moral theories are formulated for a variety of reasons, including to provide solutions to problems, formulas for action, universal principles, etc. Louden notes that this characterization is very narrow and many would object to it, but he views anti-theory not so much as a position against any kind of moral theorizing, but simply against the kind that he viewed as predominant prior to the advent of Virtue Ethics. This is a much less severe version of anti-theory as it, for example, doesn’t seem to regard the weightiness or importance of moral reasons as a problem.

Some of the problems that Williams and other anti-theorists have posed for morality, based on the above characteristics, are:

1. Morality is too demanding and pervasive: that is, the view that moral reasons are weighty indicates that we should be giving them priority over other sorts of reasons. Further, they leach into all aspects of our lives, leaving very little morally neutral.

2. Morality is alienating. There are a variety of ways in which morality can be alienating. As Adrian Piper notes, morality might alienate the agent from herself or might alienate the agent from others – impartiality and universality might lead to this, for example (Piper 1987; Stocker 1976). Another way we can understand alienation is that the agent is alienated from the true justifications of her own actions – this is one way to hold that theories which opt for indirection can lead to alienation (see section 4 below).

3. Morality, because it is impartial, makes no room for special obligations. That is, if the right action is the one that is impartial between persons, then it does not favor the near and dear. On this picture it is difficult to account for the moral requirements that parents have towards their own children, and friends have towards each other. These requirements are, by their nature, not impartial.

4. Morality is committed to providing guides for action that can be captured in a set of rules or general principles. That is, morality is codifiable and the rules of morality are general.

Morality requires too much. The basic worry is that the morality system is voracious and is creeping into all aspects of our lives, to the detriment of other important values.

The worry expressed by 4 takes a variety of forms. For example, some take issue with a presupposition of 4, arguing that there are no moral principles at all if we think of these principles as guiding action. Some argue that there are no moral principles that are complete, because morality is not something that is codifiable. And, even if morality were codifiable, the ‘principles’ would be extremely specific, and not qualify as principles at all.

Since Williams’ work, philosophers have tried to respond to the alienation worry by, for example, providing accounts of the ways in which a person’s reasons can guide without forming an explicit part of practical deliberation. Peter Railton, for example, argues in favor of a form of objective consequentialism, Sophisticated Consequentialism , in which the rightness of an action is a function of its actual consequences (Railton 1984). On Railton’s view, one can be a good consequentialist without being alienated from loved ones. Though not attempting to defend moral theory per se , other writers have also provided accounts of how agents can act on the basis of reasons – and thus perform morally worthy actions, even though these reasons are not explicitly articulated in their practical deliberations (Arpaly 2002; Markovits 2014). Deontologists have argued that autonomous action needn’t involve explicit invocation of, for example, the Categorical Imperative (Herman 1985). Generally, what characterizes these moves is the idea that the justifying reasons are present in some form in the agent’s psychology – they are recoverable from the agent’s psychology – but need not be explicitly articulated or invoked by the agent in acting rightly.

One way to elaborate on this strategy is to argue that the morally good agent is one who responds to the right sorts of reasons, even though the agent can’t articulate the nature of the response (Arpaly 2002). This strategy makes no appeal to codifiable principles, and is compatible with a wide variety of approaches to developing a moral theory. It relies heavily on the concept, of course, of “reason” and “moral reason,” which many writers on moral issues take to be fundamental or basic in any case.

There has also been debate concerning the proper scope of morality, and how moral theories can address problems relating to impartiality. Kant and the classical utilitarians believed that moral reasons are impartial, what others have termed agent-neutral. Indeed, this is one point of criticism that virtue ethics has made of these two theories. One might argue that moral reasons are impartial, but that there are other reasons that successfully compete with them – reasons relating to the near and dear, for example, or one’s own ground projects. Or, one could hold that morality includes special reasons, arising from special obligations, that also morally justify our actions.

The first strategy has been pursued by Bernard Williams and other “anti-theorists”. Again, Williams argues that morality is a special system that we would be better off without (Williams 1985). In the morality system we see a special sense of “obligation” – moral obligation – which possesses certain features. For example, moral obligation is inescapable according to the morality system. A theory such as Kant’s, for example, holds that we must act in accordance with the Categorical Imperative. It is not optional. This is because morality is represented as having authority over us in ways that even demand sacrifice of our personal projects, of the very things that make our lives go well for us. This seems especially clear for Utilitarianism, which holds that we must maximize the good, and falling short of maximization is wrong . A Kantian will try to avoid this problem by appealing to obligations that are less demanding, the imperfect ones. But, as Williams points out, these are still obligations , and as such can only be overridden by other obligations. Thus, the theories also tend to present morality as pervasive in that morality creeps into every aspect of our lives, making no room for neutral decisions. For example, even decisions about what shoes to wear to work becomes a moral one:

Once the journey into more general obligations has started, we may begin to get into trouble – not just philosophical trouble, but the conscience trouble – with finding room for morally indifferent actions. I have already mentioned the possible moral conclusion that one may take some particular course of action. That means that there is nothing else I am obliged to do. But if we have accepted general and indeterminate obligations to further various moral objectives…they will be waiting to provide work for idle hands… (Williams 1985, 181)

He goes on to write that in order to get out of this problem, “…I shall need one of those fraudulent items, a duty to myself” (Williams 1985, 182). Kantian Ethics does supply this. Many find this counterintuitive, since the self/other asymmetry seems to capture the prudence/morality distinction, but Kantians such as Tom Hill, Jr., have made strong cases for at least some moral duties to the self. In any case, for writers such as Williams, so much the worse for morality.

Other writers, also concerned about the problems that Williams has raised, argue instead that morality does make room for our partial concerns and projects, such as the norms governing our relationships and our meaningful projects. Virtue ethicists, for example, are often comfortable pointing out that morality is not thoroughly impartial because there are virtues of partiality. Being a good mother involves having a preference for the well-being of one’s own children. The mother who really is impartial would be a very bad mother, lacking in the appropriate virtues.

Another option is to hold that there are partial norms, but those partial norms are themselves justified on impartial grounds. This can be spelled out in a variety of different ways. Consider Marcia Baron’s defense of impartiality, where she notes that critics of impartiality are mistaken because they confuse levels of justification: “Critics suppose that impartialists insisting on impartiality at the level of rules or principles are committed to insisting on impartiality at the level of deciding what to do in one’s day-to-day activities” (Baron 1991). This is a mistake because impartialists can justify partial norms by appealing to impartial rules or principles. She is correct about this. Even Jeremy Bentham believed, for example, that the principle of utility ought not be applied in every case, though he mainly appealed to the efficiency costs of using the principle all the time. But one can appeal to other considerations. Frank Jackson uses an analogy with predators to argue that partial norms are strategies for maximizing the good: they offer the best chance of actually doing so, given our limitations (Jackson 1991). Similarly, a Kantian such as Tom Hill, Jr., as Baron notes, can argue that impartiality is part of an ideal, and ought not govern our day-to-day lives (Hill 1987). Does this alienate people from others? The typical mother shows the right amount of preference for her child, let’s say, but doesn’t herself think that this is justified on the basis of promoting the good, for example. A friend visits another in the hospital and also does not view the partiality as justified by any further principles. But this is no more alienating than someone being able to make good arguments and criticize bad ones without a knowledge of inference rules. Maybe it is better to have an awareness of the underlying justification, but for some theories even that is debatable. For an objective theorist (see below) it may be that knowing the underlying justification can interfere with doing the right thing, in which case it is better not to know. For some theorists, however, such as neo-Aristotelian virtue ethicists, a person is not truly virtuous without such knowledge and understanding, though Rosalind Hursthouse (1999) does not make this a requirement of right action.

Recently, consequentialists have been approaching this issue through the theory of value itself, arguing that there are agent-relative forms of value. This approach is able to explain the intuitions that support partial moral norms while retaining the general structure of consequentialism (Sen 2000). Douglas Portmore, for example, argues for a form of consequentialism that he terms “commonsense consequentialism,” as it is able to accommodate many of our everyday moral intuitions (Portmore 2011). He does so by arguing that (1) the deontic status of an act, whether it is right or wrong, is determined by what reasons the agent has for performing it – if an agent has a decisive reason to perform the act in question, then it is morally required. Combined with (2) a teleological view of practical reasons, in which our reasons for performing an action are a function of what we have reason to prefer or desire, this leads to a form of act-consequentialism, but one which is open to accepting that we have reason to prefer or desire the well-being of the near and dear over others.

Though much of this is controversial, there is general agreement that moral reasons are weighty, are not egoistic – that is, they are to be contrasted with prudential reasons – and are concerned with issues of value [duty, fittingness].

1.2.2 Morality and Aesthetics

Moral modes of evaluation are distinct from the aesthetic in terms of their content, but also in terms of their authority. So, for example, works of art are evaluated as “beautiful” or “ugly”, and those evaluations are not generally considered as universal or as objective as moral evaluations. These distinctions between moral evaluation and aesthetic evaluation have been challenged, and are the subject of some interesting debates in metaethics on the nature of both moral and aesthetic norms and the truth-conditions of moral and aesthetic claims. But, considered intuitively, aesthetics seems at least less objective than morality.

A number of writers have noted that we need to be cognizant of the distinction between moral norms and the norms specific to other normative areas in order to avoid fallacies of evaluation, and much discussion has centered on a problem in aesthetics termed the “Moralistic Fallacy” (D’Arms and Jacobson 2000).

One challenge that the anti-theorists have raised for morality is that in a person’s life there will be certain norm clashes – including clashes between types of norms such as the moral and the aesthetic. To judge a person’s life as going well only relative to the fulfillment of, or respect for, moral norms is to give the moral too much prominence. Can’t a human life go well, even when that life sacrifices morality for aesthetics?

This sort of debate has a long history in moral theory. For example, it arose as a form of criticism of G. E. Moore’s Ideal Utilitarianism, which treated beauty as an intrinsic good, thereby rendering trade-offs between behaving well towards others and creating beauty at least in principle morally justified (Moore 1903). But the anti-theorists do not pursue this method of accommodating the aesthetic, instead arguing that it is a separate normative realm which has its own weight and significance in human flourishing.

2. Theory and Theoretical Virtues

There is agreement that theories play some kind of systematizing role, and that one function is to examine important concepts relevant to morality and moral practice and the connections, if any, between them. For example, one very common view in the middle of the 20th century, attributed to John Rawls, held that moral theory is primarily interested in understanding the ‘right’ and the ‘good’ and the connections between the two (Rawls). Priority claims are often a central feature in the systematizing role of moral theory. Related to this is the issue of explanatory, or theoretical, depth: that is, the deeper the explanation goes, the better.

Theories also strive for simplicity, coherence, and accuracy. The fewer epicycles the theory has to postulate, the better; the parts of the theory should fit well together. For example, the theory should not contain inconsistent principles, or have inconsistent implications. The theory should cover the phenomena in question. In the case of moral theories, the phenomena in question are thought to be our considered moral intuitions or judgments. Another coherence condition involves the theory cohering with a person’s set of considered judgments, as well.

One last feature that needs stressing, particularly for moral theories, is applicability. One criticism of some normative ethical theories is that they are not applicable. For example, Virtue Ethics has been criticized for not providing an account of what our moral obligations are – appealing to what the virtuous person would do in the circumstances seems to set a very high bar, or fails to answer the relevant question about how we should structure laws guiding people on what their social obligations are. Similarly, objective consequentialists, who understand “right action” in terms of actual consequences, have been criticized for rendering what counts as a right action in a given circumstance unknowable, and thus useless as a guide to action. Both approaches provide responses to this worry, but this supports the claim that a desideratum of a moral theory is that it be applicable.

One task (though this is somewhat controversial) of a moral theory is to give an account of right actions. Often, this will involve an explication of what counts as good – some theories then get spelled out in terms of how they approach the good, by maximizing it, producing enough of it, honoring it, etc. In addition, some theories explicate the right in terms of acting in accordance with one’s duties, or acting as a virtuous person would act. In these cases the notions of ‘duty’ and ‘virtue’ become important to the overall analysis, and one function of moral theory is to explore the systematic connections between duty or virtue and the right and the good.

Moral theories also have both substantive and formal aims. Moral theories try to provide criteria for judging actions. It might be that the criterion is simple, such as right actions maximize the good, or it may be complex, such as the right action is the one that gives adequate weight to each competing duty. Sometimes, in recognition that there is not always “the” right action, the theory simply provides an account of wrongness, or permissibility and impermissibility, which allows that a range of actions might count as “right”.

In addition to simply providing criteria for right or virtuous action, or for being a virtuous person, a given moral theory, for example, will attempt to explain why something, like an action or character trait, has a particular moral quality, such as rightness or virtuousness. Some theories view rightness as grounded in or explained by value . Some view rightness as a matter of reasons that are prior to value. In each case, to provide an explanation of the property of ‘rightness’ or ‘virtuousness’ will be to provide an account of what the grounding value is, or an account of reasons for action.

In addition, moral theories may also provide decision-procedures to employ in determining how to act rightly or virtuously, conditions on being good or virtuous, or conditions on morally appropriate practical deliberation. Thus, the theory provides substance to evaluation and reasons. However, moral theories, in virtue of providing an explanatory framework, help us see connections between criteria and decision-procedures, as well as provide other forms of systemization. Thus, moral theories will be themselves evaluated according to their theoretical virtues: simplicity, explanatory power, elegance, etc. To evaluate moral theories as theories , each needs to be evaluated in terms of how well it succeeds in achieving these theoretical goals.

There are many more specialized elements to moral theories as well. For example, a moral theory often concerns itself with features of moral psychology relevant to action and character, such as motives, intentions, emotions, and reasons responsiveness. A moral theory that incorporates consideration of consequences into the determination of moral quality, will also be concerned with issues surrounding the proper aggregation of those consequences, and the scope of the consequences to be considered.

There’s been a long history of comparing moral theories to other sorts of theories, such as scientific ones. For example, in meta-ethics one issue has to do with the nature of moral “evidence” on analogy with scientific evidence. On what Ronald Dworkin terms the “natural model” the truths of morality are discovered, just as the truths of science are (Dworkin 1977, 160). It is our considered intuitions that provide the clues to discover these moral truths, just as what is observable to us provides the evidence to discover scientific truths. He compared this model with the “constructive model” in which the intuitions themselves are features of the theory being constructed and are not analogous to observations of the external world.

Yet, even if we decide that morality lacks the same type of phenomena to be accounted for as science, morality clearly figures into our normative judgments and reactions. One might view these – our intuitions about moral cases, for example – to provide the basic data that needs to be accounted for by a theory on either model.

One way to “account for” our considered intuitions would be to debunk them. There is a long tradition of this in moral philosophy as well. When scholars provided genealogies of morality that explained our considered intuitions in terms of social or evolutionary forces that are not sensitive to the truth, for example, they were debunking morality by undercutting the authority of our intuitions to provide insight into it (Nietzsche 1887 [1998], Joyce 2001, Street 2006). In this entry, however, we consider the ways in which moral theorists have constructed their accounts by taking the intuitions seriously as something to be systematized, explained, and as something that can be applied to generate the correct moral decisions or outcomes.

Along these lines, one method used in theory construction would involve the use of reflective equilibrium and inference to the best explanation. For example, one might notice an apparent inconsistency in moral judgements regarding two structurally similar cases and then try to figure out what principle or set of principles would achieve consistency between them. In this case, the theorist is trying to figure out what best explains both of those intuitions. But one also might, after thinking about principles one already accepts, or finds plausible, reject one of those intuitions on the basis of it not cohering with the rest of one’s considered views. Full theory construction, however, will go beyond this because of the full range of theoretical virtues discussed earlier. We want a systematic account that coheres well not only with itself, but with other things that we believe on the basis of good evidence.

Consider the following:

Malory has promised to take Chris grocery shopping. Unfortunately, as Malory is leaving the apartment, Sam calls with an urgent request: please come over to my house right now, my pipes have broken and I need help! Torn, Malory decides to help Sam, and thus breaks a promise to Chris.

Has Malory done the right thing? The virtuous thing? Malory has broken a promise, which is pro tanto wrong, but Sam is in an emergency and needs help right away. Even if it is clear that what Malory did was right in the circumstances, it is an interesting question as to why it is right. What can we appeal to in making these sorts of judgments? This brings to light the issue of how one morally justifies one’s actions. This is the task of understanding what the justifying reasons are for our actions. What makes an action the thing to do in the circumstances? This is the criterion of rightness (or wrongness). We will focus on the criterion of rightness, though the criterion issue comes up with other modes of moral evaluation, such as judging an action to be virtuous, or judging it to be good in some respect, even if not right. Indeed, some writers have argued that ‘morally right’ should be jettisoned from modern secular ethics, as it presupposes a conceptual framework left over from religiously based accounts which assume there is a God (Anscombe 1958). We will leave these worries aside for now, however, and focus on standard accounts of criteria.

The following are some toy examples that exhibit differing structural features for moral theories and set out different criteria:

  • Consequentialism. The right action is the action that produces the most good amongst the options open to the agent at the time of action (Singer). The most well-known version of this theory is Classical Utilitarianism, which holds that the right action promotes pleasure (Mill).
  • Kantian Deontology. The morally worthy action is in accordance with the Categorical Imperative, which requires that an agent refrain from acting in a way that fails to respect the rational nature of other persons (Kant).
  • Rossian Deontology. The right action is the action that best accords with the fulfillment and/or non-violation of one’s prima facie duties (Ross).
  • Contractualism. An action is morally wrong if it is an act that would be forbidden by principles that rational persons could not reasonably reject (Scanlon).
  • Virtue Ethics. The right action is the action that a virtuous person would characteristically perform in the circumstances (Hursthouse 1999).

These principles set out the criterion or standard for evaluation of actions. They do not necessarily tell us how to perform right actions, and are not, in themselves, decision-procedures, though they can easily be turned into decision procedures, such as: you ought to try to perform the action that maximizes the good amongst the options available to you at the time of action. This might not be, and in ordinary circumstances probably isn’t, a very good decision-procedure, and would itself need to be evaluated according to the criterion set out by the theory.
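To make the contrast concrete, here is a minimal sketch using the promise-keeping example from above. The function names and the numbers standing in for “good produced” are invented and are not drawn from any of the cited authors; the point is only that the criterion judges an option by the good it actually produces, while a derived decision procedure has the agent rank options by their estimated good.

```python
# Toy sketch of the criterion-of-rightness vs. decision-procedure distinction
# for a simple maximizing consequentialism. All names and numbers are invented.

options = ["keep_promise_to_Chris", "help_Sam_with_burst_pipes"]

# The agent's estimates at the time of action vs. how things in fact turn out.
estimated_good = {"keep_promise_to_Chris": 5, "help_Sam_with_burst_pipes": 9}
actual_good    = {"keep_promise_to_Chris": 5, "help_Sam_with_burst_pipes": 12}

def decision_procedure(opts, estimates):
    """Decision procedure: choose the option with the highest estimated good."""
    return max(opts, key=lambda o: estimates[o])

def meets_criterion(option, opts, actual):
    """Criterion of rightness: did this option in fact produce the most good?"""
    return actual[option] == max(actual[o] for o in opts)

chosen = decision_procedure(options, estimated_good)
print(chosen)                                          # help_Sam_with_burst_pipes
print(meets_criterion(chosen, options, actual_good))   # True: here the two coincide
```

When the estimates are off, the procedure and the criterion can come apart; this is the gap that the objective/subjective distinction discussed below turns on.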

These theories can be divided, roughly, into the deontological, consequentialist, and virtue ethical categories. There has been a lively debate about how, exactly, to delineate these categories. Some have held that deontological theories were just those theories that were not consequentialist. A popular conception of consequentialist theories is that they are reductionist in a particular way – that is, in virtue of reducing deontic features of actions (e.g. rightness, obligatoriness) to facts about an agent’s options and the consequences of those options (Smith 2009). If that is the case, then it seems that deontological approaches are just the ones that are not reductive in this manner. However, this fails to capture the distinctive features of many forms of virtue ethics, which are neither consequentialist nor necessarily concerned with what we ought to do – our duties – as opposed to what sorts of persons we should be.

One way to distinguish consequentialist from deontological theories is in terms of how each approaches value. Philip Pettit has suggested that while consequentialist theories require the promotion of value, deontological theories recommend that value be honored or respected. On each of these views, value is an important component of the theory, and theories will be partially delineated according to their theory of value. A utilitarian such as Jeremy Bentham believes that hedonism is the correct theory of value, whereas someone such as G. E. Moore, a utilitarian but a pluralist regarding value, believes that hedonism is much too narrow an account. A Kantian, on the other hand, views value as grounded in rational nature, in a will conforming to the Categorical Imperative.

Because of the systematizing function of moral theory discussed earlier, the simplest account is to be preferred and thus there is a move away from endorsing value pluralism. Of course, as intuitive pressure is put on each of the simpler alternatives, a pluralistic account of criteria for rightness and wrongness has the advantage of according best with moral intuitions.

Reasons-first philosophers will delineate the theories somewhat differently. For example, one might understand goodness as a matter of what we have reason to desire, in which case what we have reason to desire is prior to goodness rather than the other way around. Value is still an important component of the theories, it is simply that the value is grounded in reasons.

Another distinction between normative theories is that between subjective and objective versions of a type of theory. This distinction cuts across other categories. For example, there are subjective forms of all the major moral theories, and objective versions of many. An objective standard of right holds that the standard must actually be met – and meeting the standard is something ‘objective’, not dependent on the agent’s psychological states – in order for an action to count as right or an agent as virtuous. Subjective standards come in two broad forms:

  • Psychology sensitive: are the justifying reasons part of the agent’s deliberative processes? Or, more weakly, are they “recoverable” from the agent’s psychology [perhaps, for example, the agent has a commitment to the values that provide the reasons]?
  • Evidence sensitive: the right action isn’t the one that actually meets the standard, but instead is the action that the agent could foresee would meet that standard. [There are many different ways to spell this out, depending on the degree of evidence that is relevant: in terms of what the agent actually foresees, what is foreseeable by the agent given what the agent knows, what is foreseeable by someone in possession of a reasonable amount of evidence, etc.]

Of course, these two can overlap. For theorists who are evaluational internalists, evidence-sensitivity doesn’t seem like a plausible way of spelling out the standard, except, perhaps, indirectly. The distinction frequently comes up in Consequentialism, where the Objective standard is taken to be something like “the right action is the action that actually promotes the good,” and the Subjective standard is something like “the right action is the action that promotes the good by the agent’s own lights” (psychology sensitive) or “the right action is the action that promotes the foreseeable good, given evidence available at the time of action” (evidence sensitive). It is certainly possible for other moral standards to be objective. For example, the right action is the action that the virtuous person would perform, even though the agent does not realize it is what the virtuous agent would do in the circumstances, and even if the person with the best available evidence couldn’t realize it is what the virtuous person would do in the circumstances.

We certainly utter locutions that support both subjective and objective uses of what we ‘ought’ to do, or what is ‘right’. Frank Jackson notes this when he writes:

…we have no alternative but to recognize a whole range of oughts – what she ought to do by the light of her beliefs at the time of action, …what she ought to do by the lights of one or another onlooker who has different information on the subject, and, what is more, what she ought to do by God’s lights…that is, by the lights of one who knows what will and would happen for each and every course of action. (Jackson 1991, 471).

For Jackson, the primary ought, the primary sense of ‘rightness’ for an action, is the one that is “most immediately relevant to action” since, otherwise, we have a problem of understanding how the action is the agent’s. Thus, the subjective ‘ought’ is primary in the sense that this is the one that ethical theory should be concerned with (Jackson 1991). Each type of theorist makes use of our ordinary language intuitions to make their case. But one desideratum of a theory is that it not simply reflect those intuitions, but also provide the tools to critically analyze them. Given that our language allows for both sorts of ‘ought,’ the interesting issue becomes which, if either, has primacy in terms of actually providing the standard by which other things are evaluated. Moral theory needn’t only be concerned with what the right action is from the agent’s point of view.

There are three possibilities:

  • neither has primacy
  • the subjective has primacy
  • the objective has primacy

First off, we need to understand what we mean by “primacy”. Again, for Frank Jackson, the primary sense of ‘right’ or ‘ought’ is subjective, since what we care about is the ‘right’ that refers to an inward story, the story of our agency, so to speak. On this view, the objective and subjective senses may have no relationship to each other at all, and which counts as primary simply depends upon our interests. However, the issue that concerns us here is whether or not one sense can be accounted for in terms of the other. Option 1 holds that there is no explanatory connection. That is not as theoretically satisfying. Option 2 holds that either there really is no meaningful objective sense, just the subjective sense, or the objective sense is understood in terms of the subjective.

Let’s look at the objective locution again: “He did the right thing, but he didn’t know it at the time (or he had no way of knowing it at the time).” Perhaps all this means is “He did what someone with all the facts and the correct set of values would have judged right by their own lights” – this would be extensionally the same as “He performed the action with the best actual consequences.” This is certainly a possible account of what objective rightness means which makes use of a subjective standard. But it violates the spirit of the subjective standard, since it ties rightness neither to the psychology of the agent nor to the evidence that is actually available to the agent. For that reason, it seems more natural to opt for option 3. An advantage of this option is that it gives us a nice, unified account of the connection between the objective and the subjective. Subjective standards, then, are standards of praise and blame, which are themselves evaluable according to the objective standard. Over time, people are in a position to tell whether or not a standard actually works in a given type of context. Or perhaps it turns out that there are several standards of blame that differ in terms of severity. For example, if someone acts negligently, a sensible case can be made that the person is blameworthy, but not as blameworthy as if they had acted intentionally.

As to the worry that the objective standard doesn’t provide action guidance, the objective theorist can hold that action guidance is provided by the subjective standards of praise/blameworthiness. Further, the standard itself can provide what we need for action guidance through normative review (Driver 2012). Normative review is a retrospective look at what does in fact meet the standard, and under what circumstances.

Now, consider a virtue-ethical example. The right action is the action that a virtuous person would characteristically perform in the circumstances, rather than the action that the agent believes the virtuous person would perform. We then evaluate an agent’s “v-rules” in terms of how closely they approach the virtuous ideal.

Another function of moral theory is to provide a decision procedure for people to follow so as to best ensure they perform right actions. Indeed, some writers, such as R. M. Hare, hold action guidance to be the function of the moral principles of the theory (Hare 1965). This raises the question of what considerations are relevant to the content of such principles – for example, should the principles be formulated taking into account the epistemic limitations of most human beings? The requirement that moral principles be action guiding is what Holly Smith terms the “Useability Demand”: “…an acceptable moral principle must be useable for guiding moral decisions…” (Smith 2018, 11). Smith enumerates the different forms satisfaction of this demand can take, and notes that how one spells out a principle in order to meet the demand will depend upon how the moral theorist views moral success: for example, whether success is achieved simply by making the right decision, or whether the agent must also follow through successfully on that decision.

There has been enormous debate on the issue of what is involved in following a rule or principle, and some skepticism that this is in fact what we are doing when we take ourselves to be following a rule (Kripke 1982). Some virtue theorists believe that it is moral perception that actually does the guiding, and that a virtuous person is able to perceive what is morally relevant and act accordingly (McDowell 1979).

As discussed earlier in the section on criteria, however, this is also controversial in that some theorists believe that decision procedures themselves are not of fundamental significance. Again, the objective consequentialist, who believes that the fundamental task of theory is to establish a criterion of rightness, argues that decision procedures will themselves be established and evaluated on the basis of how well they get us to actually achieving the right. Thus, the decision-procedures are derivative. Others, such as subjective consequentialists, will argue that the decision-procedures specify the criterion, in the sense that following the decision-procedure itself is sufficient for meeting the criterion. For example, an objective consequentialist will hold that the right action maximizes the good, whereas the subjective consequentialist might hold that the right action is to try to maximize the good, whether or not one actually achieves it (Mason 2003 and 2019). Following the decision-procedure itself, then, is the criterion.
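
The derivative status of decision-procedures can be pictured with a small toy simulation (again my own illustration, not taken from the text; the procedures, situations, and payoff numbers are all invented): candidate procedures are ranked by how much actual good tends to be produced when agents follow them, and the objective criterion itself never needs to appear in the agent’s deliberation.

```python
import random

random.seed(0)

# Two hypothetical decision procedures an agent might actually deliberate with.
def follow_common_sense_rules(situation):
    # Never appeals to the consequentialist criterion at all.
    return "keep_promise"

def calculate_expected_utility(situation):
    # Deliberates in explicitly consequentialist terms (toy, error-prone version).
    return "break_promise" if situation["stakes"] > 5 else "keep_promise"

def actual_good(situation, action):
    # The objective criterion: the value actually produced (invented numbers).
    return 8 if action == "keep_promise" else situation["stakes"] - 4

def evaluate_procedure(procedure, situations):
    """Score a procedure by the actual good produced when it is followed."""
    return sum(actual_good(s, procedure(s)) for s in situations)

situations = [{"stakes": random.randint(0, 10)} for _ in range(1000)]
for procedure in (follow_common_sense_rules, calculate_expected_utility):
    print(procedure.__name__, evaluate_procedure(procedure, situations))
```

On this picture, which procedure scores best is an empirical question, and the best-scoring procedure is the one the objective theory recommends agents actually use; ‘maximize the good’ figures in the evaluation of procedures, not in the deliberation of the agents who follow them.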

The distinction between criterion and decision-procedure has been acknowledged and discussed at least since Sidgwick, though it was also mentioned by earlier ethicists. This distinction allows ethical theories to avoid wildly implausible implications. For example, if the standard that the theory recommends is ‘promote the good’, it would be a mistake to think that ‘promote the good’ needs to be part of the agent’s deliberation. The consequentialist might say that, instead, it is an empirical issue as to what the theory is going to recommend as a decision-procedure, and that recommendation could vary from context to context. There will surely be circumstances in which it would be best to think in terms of meeting the standard itself, but again that is an empirical issue. Likewise, it is open to a Virtue Ethicist to hold that the right action is the one the virtuous agent would perform in the circumstances, but also hold that the agent’s deliberative processes need not make reference to the standard. Pretty much all theories will want to make some space between the standard and the decision-procedure in order to avoid a requirement that agents must think in terms of the correct standard in order to act rightly, or even to act with moral worth. There is a distinction to be made between doing the right thing and doing the right thing for the right reasons. Doing the right thing for the right reasons makes the action a morally worthy one, as it exhibits a good quality of the will. It is possible for a theory to hold that the ‘good will’ is one that understands the underlying justification of an action, but that seems overly demanding. If consequentialism is the correct theory, then demanding that people explicitly act with the intention of maximizing the good would result in fewer morally worthy actions than seems plausible. The ‘for the right reasons’ must be understood as allowing for no explicit invocation of the true justifying standard.

This has led to the development of theories that advocate indirection. First, we need to distinguish two ways that indirection figures into moral philosophy.

  • Indirection in evaluation of right action.
  • Indirection in that the theory does not necessarily advocate the necessity of aiming for the right action.

To use Utilitarianism as an example again: Rule Utilitarianism is an example of the first sort of indirection (Hooker 2000), while Sophisticated Consequentialism is an example of the second (Railton 1984). One might hold that some versions of Aristotelian Virtue Ethics, such as Rosalind Hursthouse’s, are also of the first type, since right action is understood in terms of virtue. One could imagine an indirect consequentialist view with a similar structure: the right action is the action that the virtuous person would perform, where virtue is understood as a trait conducive to the good rather than by appeal to an Aristotelian notion of human flourishing.

The second sort relies on the standard/decision-procedure distinction. Railton argues that personal relationships are good for people, and explicitly trying to maximize the good is not a part of our relationship norms, so it is likely good that we develop dispositions to focus on and pay special attention to our loved ones. The account is open to the possibility that people who don’t believe in consequentialism have another way of deciding how to act that is correlated with promotion of the good. If the criteria a theory sets out need not be met by the agent guiding herself by the reasons those criteria provide, then the theory is termed self-effacing. When a theory is self-effacing, it faces the problem of alienating a person from the justification of her own actions. A middle ground, which is closer to Railton’s view, holds that the correct justification is a kind of “touchstone” for the morally good person – consulted periodically for self-regulation, but not taken explicitly into consideration in our ordinary, day-to-day lives. In this way, the theory would not be utterly self-effacing, and the agent would still understand the moral basis for her own actions.

  • Alicke, Mark, David Rose and Dori Bloom, 2011, “Causation, Norm Violation, and Culpable Control,” Journal of Philosophy , 108(12): 670–696.
  • Annas, Julia, 2011, Intelligent Virtue , New York: Oxford University Press.
  • Arpaly, Nomy, 2002, Unprincipled Virtue , New York: Oxford University Press.
  • Baier, Annette, 1985, Postures of the Mind , Minneapolis, MN: University of Minnesota Press.
  • Baron, Marcia, 1991, “Impartiality and Friendship,” Ethics , 101(4): 836–857.
  • –––, 1995, Kantian Ethics Almost Without Apology , Ithaca, NY: Cornell University Press.
  • Clarke, Stanley G, 1987, “Anti-Theory in Ethics,” American Philosophical Quarterly , 24(3): 237–244.
  • D’Arms, Justin and Daniel Jacobson, 2000, “The Moralistic Fallacy: On the ‘Appropriateness’ of Emotions,” Philosophy and Phenomenological Research , 61(1): 65–90.
  • Darwall, Stephen, 2006, The Second-Person Standpoint , Cambridge, MA: Harvard University Press.
  • Dreier, Jamie, 1993, “Structures of Normative Theories,” The Monist , 76(1): 22–40.
  • Driver, Julia, 2012, “What the Objective Standard is Good For,” in Mark Timmons (ed.), Oxford Studies in Normative Ethics , New York: Oxford University Press, 28–44.
  • Dworkin, Ronald, 1977, Taking Rights Seriously , Cambridge, MA: Harvard University Press.
  • Foot, Philippa, 1967, “Abortion and the Doctrine of Double Effect,” Oxford Review , 5: 5–15.
  • Graham, Peter, 2010, “In Defense of Objectivism About Moral Obligation,” Ethics , 121(1): 88–115.
  • Hare, R. M., 1965, Freedom and Reason , New York: Oxford University Press.
  • Herman, Barbara, 1985, “The Practice of Moral Judgment,” Journal of Philosophy , 82(8): 414–436.
  • Hill, Thomas E., Jr., 1987, “The Importance of Autonomy,” in Eva Kittay and Diana Meyers (eds.), Women and Moral Theory , Totowa, NJ: Rowman & Allanheld, 129–138.
  • Hooker, Brad, 2000, Ideal Code, Real World , New York: Oxford University Press.
  • Hurka, Thomas, 2001, Virtue, Vice, and Value , New York: Oxford University Press.
  • Hursthouse, Rosalind, 1999, On Virtue Ethics , New York: Oxford University Press.
  • Jackson, Frank, 1991, “Decision-theoretic Consequentialism and the Nearest and Dearest Objection,” Ethics , 101(3): 461–482.
  • Jeske, Diane, 2008, Rationality and Moral Theory: How Intimacy Generates Reasons , New York: Routledge.
  • Joyce, Richard, 2001, The Myth of Morality , New York: Cambridge University Press.
  • Keas, Michael, 2018, “Systematizing the Theoretical Virtues,” Synthese , 195: 2761–2793.
  • Kagan, Shelley, 1989, The Limits of Morality , New York: Oxford University Press.
  • Kamm, Frances, 2007, Intricate Ethics: Rights, Responsibilities, and Permissible Harm , New York: Oxford University Press.
  • Kant, Immanuel, 1785 [2012], Groundwork of the Metaphysics of Morals , tr. by Mary Gregor and Jens Timmerman, New York: Cambridge University Press, 2012.
  • Knobe, Joshua, 2003, “Intentional Action in Folk Psychology: An Experimental Investigation,” Philosophical Psychology , 16(2): 309–325.
  • Kripke, Saul, 1982, Wittgenstein on Rules and Private Language , Cambridge, MA: Harvard University Press.
  • Louden, Robert, 1990, “Virtue Ethics and Anti-Theory,” Philosophia , 20(1–2): 93–114.
  • Markovits, Julia, 2014, Moral Reason , New York: Oxford University Press.
  • Mason, Elinor, 2003, “Consequentialism and the ‘Ought Implies Can’ Principle,” American Philosophical Quarterly , 40(4): 319–331.
  • –––, 2019, Ways to Be Blameworthy: Rightness, Wrongness, and Responsibility , New York: Oxford University Press.
  • McDowell, John, 1979, “Virtue and Reason,” The Monist , 62(3): 331–350.
  • Moody-Adams, Michelle, 2002, Fieldwork in Familiar Places: Morality, Culture, and Philosophy , Cambridge, MA: Harvard University Press.
  • Moore, G. E., 1903 [1993], Principia Ethica , ed. Thomas Baldwin, New York: Cambridge University Press, 1993.
  • Nagel, Thomas, 1979, “The Fragmentation of Value,” in Mortal Questions , New York: Cambridge University Press, 128–141.
  • Nietzsche, Friedrich, 1887 [1998], On the Genealogy of Morality , Maudemarie Clark and Alan J. Swensen (trans.), Indianapolis, IN: Hackett Publishing.
  • Norcross, Alastair, 2020, Morality By Degrees , New York: Oxford University Press.
  • Olson, Jonas, 2004, “Buck-Passing and the Wrong Kind of Reasons,” Philosophical Quarterly , 54(215): 295–300.
  • Parfit, Derek, 1984, Reasons and Persons , Oxford: Clarendon Press.
  • Pettit, Philip, 1997, “The Consequentialist Perspective,” in The Three Methods of Ethics , by Marcia Baron, Philip Pettit, and Michael Slote, Oxford: Blackwell, 92–174.
  • Pettit, Philip, and Michael Smith, 2000, “Global Consequentialism,” in Brad Hooker, et al. (eds.), Morality, Rules, and Consequences , Edinburgh: Edinburgh University Press, 121–133.
  • Phillips, David, 2019, Rossian Ethics , New York: Oxford University Press.
  • Piper, Adrian, 1987, “Moral Theory and Moral Alienation,” Journal of Philosophy , 84(2): 102–118.
  • Portmore, Douglas, 2011, Commonsense Consequentialism , New York: Oxford University Press.
  • Rabinowicz, Wlodek and Toni Rønnow-Rasmussen, 2004, “The Strike of the Demon: On Fitting Pro-Attitudes and Value,” Ethics , 114(3): 391–423.
  • Railton, Peter, 1984, “Alienation, Consequentialism, and the Demands of Morality,” Philosophy and Public Affairs , 13(2): 134–171.
  • Rawls, John, 1971, A Theory of Justice , Cambridge, MA: Belknap Press.
  • Scanlon, T. M., 1998, What We Owe to Each Other , Cambridge, MA: Belknap Press.
  • –––, 2008, Moral Dimensions , Cambridge, MA: Harvard University Press.
  • Scheffler, Samuel, 1982, The Rejection of Consequentialism , New York: Oxford University Press.
  • Schneewind, J. B., 1963, “First Principles and Common-sense Morality in Sidgwick’s Ethics,” Archiv für Geschichte der Philosophie , 45(2): 137–156.
  • –––, 1990, “The Misfortunes of Virtue,” Ethics , 101(1): 42–63.
  • Schofield, Paul, 2021, Duty to Self: Moral, Political, and Legal Self-Relation , New York: Oxford University Press.
  • Sen, Amartya, 2000, “Consequential Evaluation and Practical Reason,” The Journal of Philosophy , 97(9): 477–502.
  • Sidgwick, Henry, 1874 [1907], The Methods of Ethics , London: Macmillan. [The seventh edition was published in 1907.]
  • Singer, Marcus, 1986, “Ethics and Common Sense,” Revue Internationale de Philosophie , 40(158): 221–258.
  • Slote, Michael, 1985, Common-Sense Morality and Consequentialism , New York: Routledge & Kegan Paul.
  • –––, 2007, The Ethics of Care and Empathy , New York: Routledge.
  • Smith, Holly, 2018, Making Morality Work , New York: Oxford University Press.
  • Smith, Michael, 2009, “Two Kinds of Consequentialism,” Philosophical Issues , 19(1): 257–272.
  • Stark, Cynthia, 1997, “Decision Procedures, Standards of Rightness and Impartiality,” Noûs , 31(4): 478–495.
  • Stocker, Michael, 1976, “The Schizophrenia of Modern Ethical Theories,” Journal of Philosophy , 73(14): 453–466.
  • Strawson, Peter, 1961, “Social Morality and Individual Ideal,” Philosophy , 36(136): 1–17.
  • Street, Sharon, 2006, “A Darwinian Dilemma for Realist Theories of Value,” Philosophical Studies , 127(1): 109–166.
  • Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and the Trolley Problem,” The Monist , 59(2): 204–217.
  • Wiland, Eric J., “The Incoherence Objection in Moral Theory,” Acta Analytica , 25(3): 279–284.
  • Williams, Bernard, 1985, Ethics and the Limits of Philosophy , New York: Oxford University Press.
  • Wolf, Susan, 1982, “Moral Saints,” Journal of Philosophy , 79(8): 419–439.
  • –––, 2014, “Loving Attention: Lessons in Love from The Philadelphia Story ,” in Susan Wolf and Christopher Grau (eds.), Understanding Love: Philosophy, Film, and Fiction , Oxford: Oxford University Press, pp. 369–386.
  • Zagzebski, Linda Trinkaus, 2017, Exemplarist Moral Theory , New York: Oxford University Press.

Moral Development in Business Ethics: An Examination and Critique

  • Review Paper
  • Open access
  • Published: 18 November 2019
  • Volume 170, pages 429–448 (2021)

  • Kristen Bell DeTienne, Carol Frogley Ellertson, Marc-Charles Ingerson & William R. Dudley

The field of behavioral ethics has seen considerable growth over the last few decades. One of the most significant concerns facing this interdisciplinary field of research is the moral judgment-action gap. The moral judgment-action gap is the inconsistency people display when they know what is right but do what they know is wrong. Much of the research in the field of behavioral ethics is based on (or in response to) early work in moral psychology and American psychologist Lawrence Kohlberg’s foundational cognitive model of moral development. However, Kohlberg’s model of moral development lacks a compelling explanation for the judgment-action gap. Yet, it continues to influence theory, research, teaching, and practice in business ethics today. As such, this paper presents a critical review and analysis of the pertinent literature. This paper also reviews modern theories of ethical decision making in business ethics. Gaps in our current understanding and directions for future research in behavioral business ethics are presented. By providing this important theoretical background information, targeted critical analysis, and directions for future research, this paper assists management scholars as they begin to seek a more unified approach, develop newer models of ethical decision making, and conduct business ethics research that examines the moral judgment-action gap.

Scandals in business never seem to end. Even when one scandal seems finally to end, another company outdoes the prior disgraced company and dominates the public dialog on corporate ethics (cf. Chelliah and Swamy 2018; Merle 2018). So, what is happening here? One common issue shows up repeatedly in cases of unethical behavior: knowing what is right yet doing what is wrong. This failure is classically understood as the moral judgment-moral action gap.

A main goal of behavioral business ethics is to understand the primary drivers of good and bad ethical decision making (Treviño et al. 2014 ). The hope is that with a better understanding of these drivers, organizations can implement structures that lead to more frequent and consistent ethical behavior by employees. However, business scholars are still working to discover what actually spurs ethical behaviors that improve profit maximization and corporate social performance.

This focus on understanding ethical decision making in business in a way that bridges the moral judgment-moral action gap has experienced an explosion of interest in recent decades (Bazerman and Sezer 2016 ; Paik et al. 2017 ; Treviño et al. 2014 ). These types of studies constitute a branch of behavioral ethics research that incorporates moral philosophy, moral psychology, and business ethics. These same interdisciplinary scholars seek to address questions about the fundamental nature of morality—and whether the moral has any objective justification—as well as the nature of moral capacity or moral agency and how it develops (Stace 1937 ). These aims are similar to those of prior moral development researchers.

However, behavioral business ethicists sometimes approach these aims without the theoretical or philosophical background that can be helpful in grappling with problems like the judgment-action gap (Painter-Morland and Werhane 2008 ). Therefore, this article provides a useful reference for behavioral business ethics scholars on cognitive moral development and its indirect but important influence on research today.

The first goal of this paper is to examine the moral development theory in behavioral business ethics that comes first to mind for most laypersons and practitioners—the cognitive approach. At the forefront of the cognitive approach is Kohlberg (1969, 1971a, b, 1981, 1984), with his studies of and reflection on the development of moral reasoning. We also examine subsequent support for and critiques of the approach, as well as reactions to its significant influence on business ethics teaching, research, and practice. We also examine the affective approach by reviewing the work of Haidt (2001, 2007, 2009), Bargh (1989, 1990, 1996, 1997), and others.

We then consider research that moves away from this intense historical debate between cognitive and affective decision making and may be better for understanding moral development and helping to bridge the moral judgment-moral action gap. For example, some behavioral ethics researchers bracket thinking and feeling and have explored a deeper approach by examining the brain’s use of subconscious mental shortcuts (Gigerenzer 2008; Sunstein 2005). In addition, virtue ethics and moral identity scholars focus on how individuals in organizations develop certain qualities that become central to their identity and motivate their moral behavior, not by focusing on cognition or affect but by focusing on the practice of behavioral habits (Blasi 1993, 1995, 2009; Grant et al. 2018; Martin 2011). Each of these groups of behavioral ethics researchers has moved the discussion of moral development forward using theorizing that rests on different—and often competing—assumptions.

In this article, we seek to make these various theories of moral development explicit and to bring different theories face to face in ways that are rarely discussed. We show how some of the unrelated theories seem compatible and how some of the contrasting theories seem irreconcilable. The comparisons and conflicts will then be used to make recommendations for future research that we hope will lead to greater unity in theorizing within the larger field of business ethics.

In other words, the second goal of this paper is to provide a critical theoretical review of the most pertinent theories of Western moral development from moral psychology and to highlight similarities and differences among scholars with respect to their views on moral decision making. We hope this review and critique will be helpful in identifying what is best included in any future unified theory for moral decision making in behavioral ethics that will actually lead to the moral judgment-moral action gap being bridged in practice as well.

The third goal of our paper is to question common assumptions about the nature of morality by making them explicit and analyzing them (Martin and Parmar 2012 ). Whetten ( 1989 ) notes the importance of altering our thinking “in ways that challenge the underlying rationales supporting accepted theories” (p. 493). Regarding the field of business ethics specifically, O’Fallon and Butterfield ( 2005 ) found that a major weakness in the business ethics literature is a lack of theoretical grounding—and we believe this concern still requires attention. In addition, Craft ( 2013 ) notes that “perhaps theory building is weak because researchers are reluctant to move beyond the established theories into more innovative territory” (p. 254). As recommended by Whetten ( 1989 ), challenging our assumptions in the field of behavioral ethics will help us conduct stronger, more compelling research that will have a greater impact on the practice of business ethics.

For example, many business and management scholars are heavily influenced by long-held assumptions reflected in the work of Lawrence Kohlberg ( 1969 , 1971a , b ), one of the most prominent theorists of ethical decision making (Hannah et al. 2011 ; Treviño 1986 ; Treviño et al. 2006 ; Weber 2017 ; Zhong 2011 ). Like Sobral and Islam ( 2013 ), we call upon researchers to move beyond these assumptions. We will review a selection of research that explores alternate ideas and leaves past assumptions behind, leading to unique outcomes, which are of value to the field of management. Thus, in addition to making long-held assumptions clear, we will present critical analysis and alternative ways of thinking to further enhance the scientific literature on the topic.

To accomplish this third goal, we will discuss links between definition, theory, and empirical study. This method of analysis is demonstrated by Fig.  1 .

Figure 1: Model for analysis of moral theory

Our fourth and final goal is to note gaps in our current understanding of ethical decision making and to present directions for future research. We discuss these opportunities throughout the paper and more specifically in our summary.

To accomplish these four goals, we begin with a review of the moral judgment-action gap and Greek and Kantian philosophy. After laying this theoretical background as a foundation for our discussion, we move deeper into a critical analysis. To begin this critical analysis, we discuss Piaget and Kohlberg, and the implications of their approaches. We then consider the Neo-Kohlbergian, Moral Identity, and Moral Domain research. The final section analyzes Moral Automaticity, Moral Schemas, and Moral Heuristics Research, as outlined in Fig.  2 .

Figure 2: Visual summary of review

Moral Judgment-Action Gap

As mentioned above, behavioral ethics research indicates that the mere ability to reason accurately about moral issues predicts surprisingly little about how a person will actually behave ethically (Blasi 1980 ; Floyd et al. 2013 ; Frimer and Walker 2008 ; Jewe 2008 ; Walker and Hennig 1997 ). This ongoing failure is not for a lack of many thoughtful attempts on the part of researchers (Wang and Calvano 2015 ; Williams and Gantt 2012 ). The predictive failure has led to expressions of disappointment and frustration from scholars (Bergman 2002 ; Blasi 1980 ; Thoma 1994 ; Walker 2004 ).

The gap has led to a call for a more integrated and interdisciplinary approach to the problem in business ethics (De Los Reyes Jr et al. 2017). In agreement with that call for greater integration, we suggest that if business scholars and practitioners are going to move the work on the moral judgment-moral action gap forward, then it will be helpful to return to the historical embeddedness of this gap problem.

Philosophical Background: Greeks and Kant

The study of ethics is concerned with the question of “what is right?” Greek philosophers such as Socrates, Plato, and Aristotle examined issues such as right versus wrong and good versus bad. For these philosophers, morality found its meaning in the fact that it served to achieve personal needs and desires for happiness, avoid harm, and preserve goods required for the well-being of individuals and society. These goods include truth, safety, health, and harmony, and are maintained by moral, virtuous behavior. We call this a teleological approach because of its focus on results rather than on rules governing behavior (Lane 2017 ; Parry 2014 ).

One of the first of these moral philosophers was Socrates (470–399 B.C.). Socrates believed that through rational processes or reasoning we can discern truth, including universal moral truth. Thus, Socrates taught that a person’s essential moral function is to act rationally. He taught that “to know the good is to do the good” (Stumpf and Fieser 2003, p. 42), meaning that if an individual knows what is right, he or she will do it. On the other hand, Socrates acknowledged that humans frequently commit acts that they know to be wrong. The Greeks called this phenomenon—knowing what is right but failing to act on that knowledge—akrasia. Akrasia, from the ancient Greek perspective, is what leads to wrong or evil doing (Kraut 2018).

Another perspective that will be helpful later on in our examination of current literature is that of Aristotle. Regarding moral functioning, Aristotle focused on the development of and reliance on virtues: qualities, such as courage, that motivate a person’s actions (Kraut 2018 ). These virtues are developed through social influences and practice, and they become an essential part of who a person is. Thus, rather than learning to reason about actions and their results, as Socrates would emphasize as the core of moral functioning, Aristotle emphasizes virtues that a person possesses and that motivate ethical behavior (Kraut 2018 ).

Like Socrates, German philosopher Immanuel Kant ( 1785/1993 ) claimed that moral judgment is a result of reasoning. However, rather than taking a teleological approach to morality, he held to deontological views. For Kant, moral behavior is defined by an overarching obligation or duty to comply with strict universal principles, valid independent of any empirical observation. According to this deontological view, an action is right or wrong in and of itself, not as defined by end results or impact on well-being. Simply put, people are obligated out of duty to perform certain moral actions (Johnson 2008 ; Kant 1785/1993 ). In summary, for Socrates, Aristotle, and Kant, the emphasis is on knowledge and cognition.

Modern Influences: Piaget and Kohlberg

This reliance on knowledge and cognition continued on from Socrates to Kant and on to the American moral psychologist Kohlberg (1927–1987). Kohlberg advocated a theory that sought to describe how individuals mature in their abilities to make moral decisions.

Before discussing Kohlberg further, we note that his work has had an enormous impact on academic research as a whole. His research has been cited over 70,000 times. In the last 5 years alone, he has been cited between 2,000 and 3,500 times each year. Within business, his theory of cognitive moral development is widely discussed, commonly used as a basis for research, and frequently covered in the standard textbooks for business ethics courses. Thus, to say that the field has moved past him is to deny the reality of the literature and experience of business ethics as a whole. With that in mind, any careful examination of how to better bridge the moral judgment-moral action gap in behavioral ethics must address Kohlberg’s ideas.

Socrates’ belief that “to know the good is to do the good,” which reflects the importance in Greek thought of arriving at truths through reasoning, influenced Kohlberg’s emphasis on the chief role of rationality as the arbiter for discerning moral universals (Stumpf and Fieser 2003 , p. 42). Kohlberg also embraced Aristotle’s notion that social experiences promote development by stimulating cognitive processes. Moreover, his emphasis on justice morality reflects Aristotle’s claims that virtues function to attain justice, which is needed for well-being, inner harmony, and the moral life.

Kohlberg’s thinking was heavily influenced by Jean Piaget, who believed that children develop moral ideas in a progression of cognitive development. Piaget held that children develop judgments—through experience—about relationships, social institutions, codes of conduct, and authority. Social moral standards are transmitted by adults, and the children participate “in the elaborations of norms instead of receiving them ready-made,” thus creating their own conceptions of the world (Piaget 1977 , p. 315).

According to Piaget, children develop a moral perception of the world, including concepts of fairness and ideas about right and wrong. These ideas do not originate directly from teaching. Often, children persist in these beliefs even when adults disagree (Gallagher 1978 ). In his theory of morality, presented in The Moral Judgment of the Child, Piaget philosophically defined morality as universal and obligatory (Ash and Woodward 1987 ; Piaget 1977 ). He drew on Kantian theory, which emphasized generating universal moral maxims through logical, rational thought processes. Thus, he rejected equating cultural norms with moral norms. In other words, he rejected the moral relativity that pervaded most research in human development at the time (Frimer and Walker 2008 ).

In the tradition of Piaget’s four stages of cognitive development, Kohlberg launched contemporary moral psychology with his doctoral dissertation in 1958. His structural development model holds that the stages of moral development emerge from a person’s own thoughts concerning moral issues. Kohlberg believed that social experiences play a part in moral development by stimulating our mental processes. Thus, moral behavior is rooted in moral and ethical cognitive deliberation (Kohlberg 1969; Levine et al. 1985).

Kohlberg investigated how people justify their decisions in the face of moral dilemmas. Their responses to these dilemmas established how far within stages of moral development a person had progressed. He outlined six discrete stages of moral reasoning within three overarching levels of moral development (Kohlberg 1971a ), outlined in Table  1 below. These stages were centered in cognitive reasoning (or rationality).

Kohlberg claimed that the moral is manifest within the formulation of moral judgments that progress through stages of development and could be demonstrated empirically (Kohlberg 1971a , b ). In this way, Kohlberg shifted the paradigm for moral philosophy and moral psychology. Up to this point, from the modern, Western perspective, most empirical studies of morality were descriptive (Lapsley and Hill 2009 ). Most research chronicled how various groups of peoples lived their moral lives and what the moral life consisted of, not what universal moral principles should constitute moral life. Kohlberg made the bold claim that individuals should aspire to certain moral universal principles of moral reasoning, and furthermore, that these principles could be laid bare through rigorous scientific investigation.

According to Kohlberg, an individual’s moral reasoning begins at stage one and develops progressively to stage two, then stage three, and so on, in order. Movement from one level to the next entails re-organization of a form of thought into a new form. Not everyone can progress through all six stages. According to Kohlberg, it is quite rare to find people who have progressed to stage five or six; he emphasized that his idea of moral development stages was not synonymous with maturation (Kohlberg 1971a). That is, the stages do not simply arise based on a genetic blueprint. Neither do they develop directly from socialization. In other words, new thinking strategies do not come from direct instruction, but from active thinking about moral issues. The role of social experiences is to prompt cognitive activity. Our views are challenged as we discuss or contend with others. This process motivates us to invent more comprehensive opinions, which reflect more advanced stages of moral development (cf. Kohlberg 1969).

Reflecting Piaget and thus Kantian ethics, Kohlberg claimed that his stages of moral development are universal. His sixth stage of moral development (the post-conventional, universal principles level) occurs when reasoning includes abstract ethical thinking based on universal principles.

For Kohlberg, moral development consisted of transformations in a person’s thinking–not as an increased knowledge of cultural values that leads to ethical relativity, but as maturing knowledge of existing structures of moral judgment found universally in development sequences across cultures (Kohlberg and Hersh 1977 ). In other words, Kohlberg sought to eliminate moral relativism by advocating for the universal application of moral principles. According to him, the norms of society should be judged against these universal standards. Thus, Kohlberg sought to demonstrate empirically that specific forms of moral thought are better than others (Frimer and Walker 2008 ; Kohlberg 1971a , b ).

Lapsley and Hill (2009) discuss the far-reaching ramifications of how Kohlberg moralized child psychology: “He committed the ‘cognitive developmental approach to socialization’ to an anti-relativism project where the unwelcome specter of ethical relativism was to yield to the empirical findings of moral stage theory” (p. 1). For Kohlberg, a particular behavior qualified as moral only when motivated by a deliberate moral judgment (Kohlberg et al. 1983, p. 8). His ‘universal’ moral principles, then, were not so universal after all. Lapsley and Hill (2009) note that this principle of phenomenalism “was used as a cudgel against behaviourism (which rejected both cognitivism and ordinary moral language)” (p. 2).

Implications of Kohlberg for Today

This section of the article examines Kohlberg’s underlying assumptions and limitations. Although Kohlberg’s work is historically important and currently influential, this article proposes that business ethicists should avoid mis-application of and over-reliance on his framework.

To begin, Kohlberg assumes that the essence of morality is found in cognitive reasoning, mirroring Greek and Kantian thought. While such an assumption fit his purposes, we must move beyond this to understand ethical decision making more holistically (Sobral and Islam 2013). We know that the ability to reason does not always lead humans to act morally. Morality is more central to human existence, and reasoning is only one of multiple human activities that achieve the ends of morality (cf. Ellertson et al. 2016). If we were to use Kohlberg’s assumption, we would assume that as long as someone is capable of advanced moral reasoning (as with Kohlberg’s use of hypothetical situations), we need not worry about that person’s actions. However, empirical studies by Hannah et al. (2018) indicate that although a person might demonstrate advanced moral reasoning in one role, the same person might show moral deviance in another role. Thus, recent research suggests that moral identity is multi-dimensional and ethical decision making is quite complex. Future work should consider the true yet limited role of rationality in moral behavior and moral decision making (see Table 2).

Kohlberg also assumes that all humans proceed universally through moral development and that when fully developed—for those who do reach the highest level of reasoning—everyone will exhibit the same moral reasoning. If we are to build on this assumption, many questions are left unanswered about the easily observable differences both within and between individuals. For example, recent research by Sanders et al. (2018) suggests that in leaders who have high levels of moral identity, those who are authentically proud (versus leaders who are hubristically proud) are more likely to engage in ethical behavior. We call on researchers to study differences and limitations in moral processing that come from individual differences including past experiences, upbringing, age, personality, and culture. With such research, we will be able to better understand and reconcile differences regarding ethical issues and behavior.

Continuing to follow Kohlberg’s emphasis on universalism may limit our consideration of the real impact of social norms. We call on management scholars to investigate the importance of social, organizational, and individual norms rather than unwittingly assuming that universal principles should govern all organizational affairs. Certainly, some actions in business are universally unethical, but an assumption of absolute universal norms may limit organizational development, creative decision making, and the innovative power that comes from diversity of an individual’s social and cultural background. For example, empirical research by Reynolds et al. ( 2010 ) suggests that humans are moral agents and that their automatic decision-making practices interact with the situation to influence their moral behavior. Also, research by Kilduff et al. ( 2016 ) demonstrates how rivalry can increase unethical behavior. Future research on how situations and social norms affect behavior may help scholars to better predict, understand, and prevent moral judgment-action gaps and ethical conflicts between different individuals. Moral Domain Theory, which will be discussed later, provides one example of how to handle this question.

Kohlberg’s work does not directly address the moral judgment-action gap. For Kohlberg, until a person functions at the sixth stage of moral development, any immoral behavior stems from an inability to reason based on universal principles. However, his theory does not adequately explain the behavior of individuals who clearly understand what is moral yet fail to act on that understanding (cf. Hannah et al. 2018). This is yet another reason why, as scholars, we must question the claim that cognitive reasoning is central to the nature of morality. We call on business ethics scholars to design and test theoretically rigorous models of moral processing that bridge gaps between judgment and action.

Moving forward, we do not disagree with Kohlberg’s notion that social interactions are important to moral reasoning, and we invite researchers and practitioners to consider what social experiences in the workplace could promote ethical development. Are some experiences, reflective practices, exercises, ethics training programs, or cultures more effective at promoting ethical behavior? For example, empirical research by Gaspar et al. ( 2015 ) suggests that how an individual reflects on past misdeeds can impact that person’s future immoral behavior. Future research could examine which experiences are most impactful, as well as when, why, and how these experiences affect change. Thus far we have reviewed the early work in moral development, including Socrates, Aristotle, Kant, Piaget, and Kohlberg. The remainder of the article discusses more recent theories.

Variations: Neo-Kohlbergians, Moral Identity, Moral Domain, and Moral Automaticity

The remainder of this paper will review how some researchers have built on Kohlberg’s assumptions and how others have successfully challenged them. In reviewing the theories of these researchers, remaining gaps in understanding will be discussed and future possible directions will be offered. Four areas of moral psychology research will be reviewed as follows: (1) Neo-Kohlbergian research, which builds upon Kohlberg’s original “rational moral judgments” approach, (2) Moral Identity research, which examines how moral identity is a component of how individuals define themselves and is a source for social identification, (3) Moral Domain research, which sees no moral judgment-action gap and assumes that social behavior stems from various domains of judgment, such as moral universals, cultural norms, and personal choice, and (4) Moral Automaticity research, which emphasizes the fast and automatic intuitive approach in explanations of moral behavior.

Neo-Kohlbergian Research

Rest ( 1979 , 1984 , 1999 ) extended Kohlberg’s work methodologically and theoretically with his formulation of the Defining Issues Test (DIT), which began as a simple, multiple-choice substitute for Kohlberg’s time-consuming interview procedure. The DIT is a means of activating moral schemas (general knowledge structures that organize information) (Narvaez and Bock 2002 ). It is based on a component model that builds on Kohlberg’s stages of moral development—an approach he called ‘Neo-Kohlbergian.’ Rest ( 1983 ) maintained that a person must develop four key psychological qualities to become moral: moral sensitivity, moral judgment, moral motivation, and moral character. Without these, a person would have many gaps between his or her judgment and behavior. With 25 years of DIT research, Rest and others (Rest et al. 2000 ; Thoma et al. 2009 ) have found some support for the DIT and the model.

Although Rest built on Kohlberg’s work by emphasizing the role of cognitive moral judgments, he moved beyond the idea that the essence of morality is found in reasoning. Under the Neo-Kohlbergian approach, dealing with the moral became a more multifaceted endeavor, and many intricate theories of moral functioning—including moral motivation—have followed.

The work of Rest and his colleagues, along with Kohlberg’s foundation, has become a ‘gold standard’ in the minds of some management scholars (Hannah et al. 2011 ). Rest’s work has proven promising in its ability to explain the gap between moral cognition and behavior. However, his four-component model has also been criticized for assigning a single level of moral development to each respondent. Curzer ( 2014 ) points out that people develop at different rates and across different spheres of life, and that Rest’s Defining Issues Test (DIT) is not specific enough in its assessment of moral development. Future research could explore this criticism and analyze other methods for identifying, measuring, and improving moral development.

Moral Identity and Virtue Ethics Research

Blasi (1995) subscribed to a Neo-Kohlbergian point of view as he expanded on Kohlberg’s Cognitive Developmental Theory by focusing on motivation, an area of exploration not within the purview of Kohlberg’s main research. Though Kohlberg did become more interested in the concept of motivation toward the end of his career (Kohlberg and Candee 1984), his empirical findings illuminate an individual’s understanding of moral principles without shedding much light on the motivation to act on those principles. According to Kohlberg, proficient moral reasoning informs moral action but does not necessarily explain it completely (Aquino and Reed 2002). Kohlberg’s own findings showed that moral reasoning does not necessarily predict moral behavior.

Though his research builds on Kohlberg’s by emphasizing the role of cognitive development, Blasi’s focus on motivation represents a philosophical shift that provides a basis for moral identity research. Researchers in moral identity, though they agree with Kohlberg on some aspects of moral behavior, find the meaning of morality in characteristics or values that motivate a person to act. Because these components of identity are defined by society and deal with outcomes that a decision maker seeks, the philosophy of moral identity is more teleological than deontological. The philosophical definition of morality held by moral identity theorists influenced the way they studied moral behavior and the judgment-action gap.

Blasi introduced the concept of ‘the self’ as a sort of mediator between moral reasoning and action. Could it be that ‘the self’ was the source for moral motivation? Up until then, most of Kohlberg’s empirical findings involved responses to hypothetical moral dilemmas which might not seem relevant to the self or in which an individual might not be particularly engaged (Giammarco 2016 ; Walker 2004 ). Blasi’s model of the self was one of the first influential theories that endeavored to connect moral cognition (reasoning) to moral action, explaining the moral judgment-action gap. He proposed that moral judgments or moral reasoning could more reliably connect with moral behavior by taking into account other judgments about personal responsibility based upon moral identity (Blasi 1995 ).

Blasi is considered a pioneer for his theory of moral identity. His examination has laid a foundation upon which other moral identity scholars have built using social cognition research and theory. These other scholars have focused on concepts such as values, goals, actions, and roles that make up the content of identity. The content of identity can take a moral quality (e.g., values such as honesty and kindness, goals of helping, serving, or caring for others) and, to one degree or another, become central and important in a person’s life (Blasi 1983 ; Hardy and Carlo 2005 , 2011 ). Research by Walker et al. ( 1995 ) shows that some individuals see themselves exhibiting the moral on a regular basis, while others do not consider moral standards and values particularly relevant to their daily activities.

Blasi’s original Self Model (1983) posited that three factors combine to bridge the moral judgment-action gap. The first is the moral self, or what is sometimes referred to as ‘moral centrality,’ which constitutes the extent to which moral values define a person’s self-identity. Second, personal responsibility is the component whereby, after making a moral judgment, a person feels responsible to act upon the judgment. This is a connection that Kohlberg’s model lacked. Third, self-consistency leads to a reliable, constant uniformity between judgment and action (Walker 2004).

Blasi ( 1983 , 1984 , 1993 , 1995 , 2004 , 2009 ) and Colby and Damon ( 1992 , 1993 ) posit that people with a moral personality have personal goals that are synonymous with moral values. Blasi’s model claims if one acts consistently according to his or her core beliefs, moral values, goals, and actions, then he or she possesses a moral identity or personality. When morality is a critical element of a person’s identity, that person generally feels responsible to act in harmony with his or her moral beliefs (Hardy and Carlo 2005 ).

Since Blasi introduced his Self Model, he has elaborated in more detail on the structure of the self’s identity. He has classified two distinct elements of identity: first, the objective content of identity such as moral ideals, and second, the modes in which identity is experienced, or the subjective experience of identity. As moral identity matures, the basis for self-perception transitions from external content to internal content. A mature identity is based on moral ideals and aspirations rather than relationships and actions. Maturity also brings increased organization of the self and a refined sense of agency (Blasi 1993 ; Hardy and Carlo 2005 ).

Blasi believes that moral identity produces moral motivation. Thus, moral identity is the source for understanding or bridging the moral judgment-action gap. However, some researchers (Frimer and Walker 2008; Hardy and Carlo 2005; Lapsley and Hill 2009) have noted that Blasi’s ideas are quite abstract and somewhat inaccessible. Furthermore, empirical research supporting his notions is limited. Moreover, Blasi’s endorsement of the first-person perspective on the moral has made it difficult to devise empirical studies. Empirical research on Blasi’s model often involves self-report methods, calling into question the validity of self-perceived attributes. In addition, the survey instruments that rate character traits often exhibit arbitrariness and variability across lists of virtues, hearkening back to the ‘bag of virtues’ approach that Kohlberg sought to move beyond (Frimer and Walker 2008).

On the other hand, some researchers have investigated the concept of ‘moral exemplars,’ presumably under the assumption that they possess moral identities. Colby and Damon’s ( 1992 , 1993 ) research on individuals known for their moral exemplarity found that these individuals experienced “a unity between self and morality” and that “their own interests were synonymous with their moral goals” (Colby and Damon 1992 , p. 362). Hart and Fegley ( 1995 ) compared teenage moral exemplars to other teens and found that moral exemplars are more likely than other teens to describe themselves using moral concepts such as being honest and helpful. Additional research using self-descriptions found similar results (Reimer and Wade-Stein 2004 ). This implies that to maintain ethical character in the workplace managers may want to hire candidates who describe themselves using moral characteristics.

Other identity research includes Hart’s ( 2005 ) model, which strives to identify a moral identity in terms of five factors that give rise to moral behavior (personality, social influence, moral cognition, self and opportunity). Aquino and Reed ( 2002 ) propose that definitions of self can be rooted in moral identity. This concept of self is organized around moral characteristics. Their self-report questionnaire measures the extent to which moral traits are integrated into an individual’s self-concept. Cervone and Tripathi ( 2009 ) stress the need for moral identity researchers to step outside the field of moral psychology, shift the focus away from the moral and engage general personality theorists. This allows moral psychologists to access broader studies in personality and cognitive science and to break out of what they see as the compartmentalized discourse within moral psychology.

In summary, the main concern of moral identity theory is how unified or disunified a person is, or the level of integrity an individual possesses. For moral psychologists, an individual with integrity is unified and consistent across all contexts. Because of this unification and consistency, that person experiences fewer lapses (or gaps) in his or her moral judgments and moral actions (Frimer and Walker 2008 ).

Moral identity theory represents a philosophical belief that morality is at the core of personhood. Rather than focusing simply on the processes or functioning of moral development and ethical decision making, moral identity scholars look more deeply at what motivates moral behavior, and they make room for the concept of agency. Similarly, Ellertson et al. ( 2016 ) draw on Levinas to explain that morality is more central to human existence than simply the processes it includes.

The philosophy of virtue ethics arose from Aristotle’s views of the development of virtues (Grant et al. 2018 ). Virtue ethics theorizes that any individual can obtain real happiness by pursuing meaning, concern for the common good, and the trait of virtue itself, and that by doing so such an individual will develop virtuous qualities that further increase his or her capacity to obtain real happiness through worthwhile pursuits (Martin 2011 ).

Virtue ethics also posits that individuals with enough situational awareness and knowledge can correctly evaluate their own virtue, underlying motivations, and ethical options in a given situation (Martin 2011 ). Grant et al. ( 2018 ) explain that researchers of virtue ethics explore virtue as being context specific, relative to the individual, and developing over a lifetime. Therefore, virtue ethics considers moral decision making to be both personal and contextual, and defines ethical decisions as leading to actions that impact the common good and contribute to an individual’s real happiness and self-perceived virtue.

Although empirical research has found evidence of the constancy of individuals’ virtue characteristics under different situations, research suggests virtues are not necessarily predictive of actual ethical behavior (Jayawickreme et al. 2014 ). Empirical evidence of the application of the theory of virtue ethics at the individual level is lacking; a recent review of thirty highly cited virtue ethics papers found only two studies that collected primary empirical data at the individual level (Grant et al. 2018 ). Thus, we call on ethics scholars to investigate the development and situational or universal influence of virtue states, traits, and characteristics, as well as their impact on happiness and other outcomes.

We invite management scholars to utilize the findings summarized in this section as they research how to effectively identify, socialize, and leverage candidates possessing virtuous characteristics and moral integrity. Future research can explore the feasibility of hiring metrics centered on ethical integrity. We note the difficulty scholars have had in designing a tool for accurately assessing ethical integrity and in separating the concept from ethical sensitivity (Craig and Gustafson 1998 ). We also note the opportunity for more research to discover and improve instruments and measures to assess ethical integrity and subsequent development of high moral character.

Moral Domain Research

As with most moral psychology research, ‘domain theory’ also stems from Kohlberg’s foundational research because it emphasizes the role of cognition in moral functioning. However, the work of theorists in this branch of psychology differs philosophically from Kohlberg’s. Domain theory incorporates moral relativity to an extent that Kohlberg would likely have been uncomfortable with. For domain theorists, the study of moral behavior is less about determining how humans ought to behave and more about observing how humans do behave. This ‘descriptive’ approach to morality is reflected in most of the theories discussed in the remainder of this section.

Elliot Turiel and Larry Nucci are prominent domain theorists; they distinguish judgments of social right and wrong into different types or categories. For Nucci ( 1997 ), morality is distinct from other domains of knowledge, including our understanding of social norms. For domain theorists, social behavior can be motivated by moral universals, cultural norms, social norms, or even personal choice (Turiel 1983 ). Thus, social judgments are organized within domains of knowledge. Whether an individual behaves morally depends upon the judgments that person makes about which domain takes precedence in a particular context.

Nucci ( 1997 ) asserts that certain types of social behavior are governed by moral universals that are independent from social beliefs. This category includes violence, theft, slander, and other behaviors that threaten or harm others. Accordingly, research suggests that notions of morality are derived from underlying perceptions about justice and welfare (Turiel 1983 ). Theories of this sort define morality as beliefs and behavior related to treating others fairly and respecting their rights and welfare. In this sense, morality is distinct from social conventions such as standards of fashion and communication. These social norms define what is correct based on social systems and cultural traditions. This category of rules has no prescriptive force and is valuable primarily as a way to coordinate social interaction (Turiel 1983 ).

Turiel ( 1983 , 1998 ) elaborates on the differences between the moral and social domain in his Social Domain Theory. In contrast to Blasi, he proposes that morality is not a domain in which judgments are central for some and peripheral for others, but that morality stands alongside other important social and personal judgments. To understand the connection between judgment and action, Turiel believes it is necessary to consider how an individual applies his or her judgments in each domain—moral, social, and personal (Turiel 2003 ).

Turiel’s social-interactionist model places behaviors that harm, cause injustice, or violate rights in the ‘moral domain.’ He claims that the definition of moral action is derived in part from criteria given in the philosophy of Aristotle where concepts of welfare, justice, and rights are not considered to be determined by consensus or existing social arrangements, but are universally valid. In contrast, actions that involve matters of social or personal convention have no intrinsic interpersonal consequences, thus they fall outside the moral domain. Individuals form concepts about social norms through involvement in social groups.

Turiel and Nucci’s work does not accept the premise that a moral judgment-action gap exists (Nucci 1997; Turiel 1983, 1998). They explain inconsistencies between judgment and behavior as the result of individuals accessing different domains of behavior. Thus, a judgment about which domain of judgments to prioritize precedes action. While an action may be inconsistent with a person’s moral judgment, it may not be inconsistent with that person’s overarching judgments that have higher priority. In other words, a person can know something is right yet choose to do something else because, in balancing moral, personal, and social concerns, another consideration seemed more important. This aspect of Turiel’s model could be compared to Blasi’s personal responsibility component, in which, after a moral judgment is made, the person decides whether he or she has a responsibility in the particular moment or situation to act upon the judgment. Kohlberg’s research did not sufficiently address this element of responsibility to act.

Even though Turiel and Nucci recognize the prescriptive nature of behavior in the moral domain, they assert that the individual must make a judgment about whether it merits acting upon, or whether another sphere of action takes precedence. In other words, Turiel and Nucci may deem a particular moral action to be more important than action in the social or personal conventional sphere. However, unless the individual deems it so, there is no moral failure. The individual decides which sphere takes priority at any given time. The notions of integrity, personal responsibility, and identity as the origin of moral motivation (Blasi 1995 ; Hardy and Carlo 2005 ; Lapsley and Narvaez 2004 ) do not apply within Turiel’s social-interactionist model.

Dan Ariely, Francesca Gino, and others have discovered some interesting findings about activating the moral domain through triggers such as recall of the Ten Commandments or an honor code (Ariely 2012 ; Gino et al. 2013 ; Hoffman 2016 ; Mazar et al. 2008 ). However, research in this area is still in its infancy, and other scholars have not always been able to replicate the results (c.f., Verschuere et al. 2018 ). Future research could examine factors that determine why a certain sphere of action takes precedence over other spheres in motivating specific behaviors. For example, which factors impact an individual’s decision to act within the moral domain or within the social sphere? How can the moral domain be triggered? Why does or doesn’t one’s training or knowledge (such as the ability to recall culturally accepted moral principles such as the Ten Commandments) predict one’s ethical behavior?
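
To make the replication issue concrete, the sketch below shows the kind of simple two-condition comparison studies in this area typically report: an honesty-task score with versus without a moral reminder. All of the numbers are fabricated purely to illustrate the analysis, and the Welch t-test used here is just one reasonable analytic choice among several.

```python
import numpy as np
from scipy import stats

# Hypothetical counts of self-reported "solved" puzzles in an honesty task,
# with and without a moral reminder (e.g., recalling an honor code).
# All values are invented for illustration only.
reminder = np.array([4, 5, 3, 6, 4, 5, 4, 3, 5, 4])
control = np.array([6, 7, 5, 8, 6, 7, 6, 5, 7, 6])

# Welch's t-test (does not assume equal variances across conditions).
t_stat, p_value = stats.ttest_ind(reminder, control, equal_var=False)
diff = control.mean() - reminder.mean()

print(f"Mean difference (control - reminder): {diff:.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```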

In a similar vein to Turiel and Nucci, Bergman’s (2002) model accepts that an individual can be moral even if that individual does not act upon his or her moral understanding. He finds the moral in the relationships among the components of reasoning, motivation, action, and identity. With this model he seeks to answer the question raised by Turiel’s model: ‘If it is just a matter of prioritizing domains of behavior, why be moral?’ He asserts that his model preserves the centrality of moral reasoning in the moral domain, while also taking seriously personal convention and motivation, without succumbing to a purely subjectivist perspective (c.f., Bergman 2002, p. 36).

Bergman strives to articulate the motivational potential of moral understanding as truly moral even when it has not been acted upon. He does not assume that moral understanding must have an inevitable expression in action as did Kohlberg. Thus, Bergman provides another context for thinking about the problem of the judgment-action gap. He focuses on our inner moral intentions. He believes that when people behave morally, they do so simply because they define themselves as moral; acting otherwise would be inconsistent with their identity (Bergman 2002 ).

The assumptions underlying domain theory present several dangers to organizations. Moral Domain Theory assumes, with Kohlberg, that the essence of morality is in the human capability to reason, and that there is no moral issue at hand unless it is recognized cognitively. This creates the possibility of excusing individuals from responsibility for the outcomes of their actions. Even though Kohlberg believed in universal moral rules, the fact that he based such a belief on reasoning and empirical evidence allows those who build on his theory to create a morally acceptable place for behaviors that one deems reasonable even when such behaviors negatively impact the well-being of self or others. The question for management scholars is whether we are willing to accept the consequences of such assumptions. We call on scholars to challenge these assumptions, for example by investigating more deeply where morality comes from and what it implies for decision making in organizations.

On the other hand, Moral Domain Theory addresses the influence of social norms, which is an important moral issue that Kohlberg’s research did not address. For example, empirical research by Desai and Kouchaki ( 2017 ) suggests that subordinates can use moral symbols to discourage unethical behavior by superiors. As we suggested earlier, future research should examine the influence of organizational, cultural, and social norms, symbols, and prompts. Even where universal norms do not prohibit an action, a person may be acting immorally according to expectations established within organizations or relationships. We call on scholars to consider if and when certain norms specific to a situation, organization, community, relationship, or other context may or may not (and should or should not) override universal principles. Research of this nature will help clarify what is ethically acceptable.

Moral Automaticity Research

The philosophies of the researchers we will describe in this section begin to move away from Kohlberg’s assumption that morality is found in deliberate cognitive reasoning and the assumption that universal moral standards exist. For scholars in the moral automaticity realm, morality is based on automatic mental processes developed through evolution to benefit our individual and collective social survival. However, while they discuss moral judgments in terms of automatic rather than deliberate judgments, they still hold that the meaning of morality is found in the judgments that humans make.

Additionally, accounts of morality focused on automatic, neurological processes conflict with ideas of free will and personal responsibility. These accounts rely on the concept of determinism, or the belief that all actions and events are the predetermined, inevitable consequences of various environmental and biological processes (Ellertson et al. 2016 ). If these processes are really the basis of morality, some critics argue we are reduced to creatures without individuality. There is clearly a balance between automatic and deliberative processes in human moral behavior that allows for individual differences and preserves the idea of agency. We propose that while automatic processes certainly play a role in moral decision making, that role is to assist in a more fundamental purpose of our existence as humans (Ellertson et al. 2016 ). With this in mind, we summarize some of the most prominent research based on moral automaticity, summarize the research that argues for the existence of moral schemas and moral heuristics, then suggest directions for future research.

Narvaez and Lapsley ( 2005 ) have argued that John Bargh’s research provides persuasive empirical evidence that automatic, preconscious cognition governs a large part of our daily activities (e.g., Bargh 1989 , 1990 , 1996 , 1997 ; Uleman and Bargh 1989 ). Narvaez and Lapsley ( 2005 ) assert that this literature seems to thoroughly undermine Kohlberg’s assumptions. Bargh and Ferguson ( 2000 ) note, for example, that “higher mental processes that have traditionally served as quintessential examples of choice and free will—such as goal pursuit, judgment, and interpersonal behavior—have been shown recently to occur in the absence of conscious choice or guidance” (p. 926). Bargh concludes that human behavior is not very often motivated by conscious, deliberate thought. He further states that “if moral conduct hinges on conscious, explicit deliberation, then much of human behavior simply does not qualify” (c.f., Narvaez and Lapsley 2005 , p. 142).

Haidt’s ( 2001 ) views on the moral take the field in the intuitive direction. He focuses on emotional sentiments, some of which have been seen in the previous arguments of Eisenberg ( 1986 ) and Hoffman ( 1970 , 1981 , 1982 ) as well as the original thinking of Hume ( 1739/2001 , 1777/1960 ), who concerned himself with human ‘sentiments’ as sources of moral action. Haidt claims that “the river of fMRI studies on neuroeconomics and decision making” gives empirical evidence that “the mind is driven by constant flashes of affect in response to everything we see and hear” (Haidt 2009 , p. 281). Hoffman ( 1981 , 1982 ) provides an example of these affective responses that Haidt refers to. He gives evidence that humans reliably experience feelings of empathy in response to others’ misfortunes, resulting in altruistic behavior. In Hoffman’s foundational work, we see that altruism and other pro-social behaviors fit in with empirical findings from modern psychological and biological research.

Haidt’s Social Intuitionist Model (SIM) has brought a resurgence of interest in the importance of emotion and intuition in determining the moral. He asserts that the moral is found in judgments about social processes, not in private acts of cognition. These judgments manifest automatically as innate intuitions. He defines moral intuition as “the sudden appearance in consciousness, or at the fringe of consciousness, of an evaluative feeling (like-dislike, good-bad) about the character or actions of a person, without any conscious awareness of having gone through steps of search, weighing evidence, or inferring a conclusion” (Haidt 2001, p. 818).


Haidt asserts that “studies of everyday reasoning show that we usually use reasoning to search for evidence to support our initial judgment, which was made in milliseconds” ( 2009 , p. 281). He believes that only rarely does reasoning override our automatic judgments. He does not like to contrast the terms emotion and cognition because he sees it all as cognition, just different kinds: (1) intuitions that are fast and affectively laden and (2) reasoning that is slow and less motivating.

Haidt focuses on innate intuitions that are linked to the social construction of the ethics of survival. He sees action as moral when it benefits survival (Haidt 2007 ). He argues that humans “come equipped with an intuitive ethics , an innate preparedness to feel flashes of approval or disapproval toward certain patterns of events involving other human beings” (Haidt and Joseph 2004 , p. 56). Haidt proposes two main questions that he believes are answered by his Social Intuitionist Model: (1) Where do moral beliefs and motivations come from? and (2) How does moral judgment work?

His answer to the first question is that moral views and motivation come from automatic and immediate emotional evaluations of right and wrong that humans are naturally programmed to make. He cites Hume who believed that the basis for morality comes from an “immediate feeling and finer internal sense” (Hume 1777/1960 , p. 2).

To answer the second question (‘How does moral judgment work?’), Haidt explains that brains “integrate information from the external and internal environments to answer one fundamental question: approach or avoid?” (Haidt and Bjorklund 2007, p. 6). Approach is labeled good; avoid is labeled bad. The human mind is constantly evaluating and reacting along a good-bad dimension regarding survival.

The Social Intuitionist Model presents six psychological connections that describe the relationships among intuitions, conscious judgments, and reasoning. Haidt’s main proposition is that intuition trumps reasoning in moral processing (Haidt and Bjorklund 2007 ). Moral judgment-action gaps, then, appear between an action motivated by intuition and judgments that come afterwards. Applied to Kohlberg’s empirical study, this would imply that the reasoning he observed served not to motivate decisions but to justify them after the fact.

This approach suggests that ethical behavior is driven by naturally programmed emotional responses. Recent research by Wright et al. ( 2017 ) suggests that moral emotions can influence professional behavior. Other work conducted by Peck et al. ( 1960 ) shows that social influences, especially in family settings, stimulate character development over time. They also dismiss the importance of the debate between automatic and cognitive judgments by showing that people who have developed the highest level of moral character judge their actions “either consciously or unconsciously” and that “the issue is not the consciousness, but the quality of the judgment” (Peck et al. 1960 , p. 8).

Monin et al. (2007) also strive to move beyond the debate that pits emotion or intuition against reason, each vying for primacy as the source of the moral. They assert that the various models that seek to bridge the judgment-action gap are considering two very different prototypical situations. First, those who examine how people deal with complex moral issues find that moral judgments are made through elaborate reasoning. Second, those who study reactions to alarming moral misconduct conclude that moral judgments are quick and intuitive. Benoit Monin and his colleagues propose that researchers should not arbitrarily choose between one model or the other but embrace both types of models and determine which has the greater applicability in any given setting (Monin et al. 2007).

Narvaez ( 2008a ) contends that Haidt’s analysis limits moral judgment to the evaluation of another person’s behavior or character. In other words, his narrow definition of moral reasoning is limited to processing information about others. She wonders about moral decision making involving personal goals and future planning (Narvaez 2008a ).

Narvaez ( 2008a ) also believes that Haidt over-credits flashes of affect and intuition and undervalues reasoning. In her view, flash affect is just one of many processes we use to make decisions. Numerous other factors affect moral decisions along with gut feelings, including goals, mood, preferences, environmental influences, context, social pressure, and consistency with self-perception (Narvaez 2008a ). We call on scholars to investigate whether, when, how, and with which level of complexity people wrestle with moral decisions. We also suggest researchers consider investigating whether there is anything that organizations can do to move people away from fast and automatic decisions (and toward slow and thoughtful decisions), and whether doing so motivates more ethical choices.

Moral Schemas Research

Haidt and Narvaez both believe that morality exists primarily in evolved brain structures that maximize social survival, both collectively and individually (Narvaez 2008a , b ). Narvaez asserts that Haidt’s Social Intuitionist Model includes biological and social elements but lacks a psychological perspective. Narvaez ( 2008a ) finds the moral ultimately in “psychobehavioral potentials that are genetically ingrained in brain development” as “evolutionary operants” (p. 2). To explicate these evolutionary operants, she refers to her own model of psychological schemas that humans access to make decisions. She notes that Haidt’s idea of modules in the human brain is accepted by many evolutionary psychologists but that such assertions lack solid empirical evidence in neuroscience ( 2008a ).

In contrast, Narvaez’s schemas are brain structures that organize knowledge based on an individual’s experience (Narvaez et al. 2006 ). In general, Schema Theory describes abstract cognitive formations that organize intricate networks of knowledge as the basis for learning about the world (Frimer and Walker 2008 ).

Schemas facilitate the process of appraising one’s social landscape, forming moral identity or moral character. Narvaez terms this “moral chronicity” and claims that it explains the automaticity by which many moral decisions are made. Individuals “just know” what is required of them without engaging in an elaborate decision-making process. Neither the intuition nor the activation of the schemas is a conscious, deliberative process. Schema activation, though mostly shaped by experience (thus the social aspect), is ultimately rooted in what Narvaez ( 2008b ) refers to as “evolved unconscious emotional systems” that predispose responses to particular events (p. 95).

Narvaez’s ‘Triune Ethics Theory’ (2008b) explains her idea of unconscious emotional systems. This research proposes that these emotional systems are fundamentally derived from three evolved formations in the human brain. Her theory is modeled after MacLean’s (1990) Triune Brain Theory, which posited that these formations reflect successive stages of animal evolution. Each of the three areas has a “biological propensity to produce an ethical motive” (Narvaez 2008b, p. 2). With these formations, animals and humans have been able to adapt their behavior to the challenges of life (Narvaez 2008b). Emotional systems, because of their central location, can interact with other cognitive formations. Thus, a thought accompanies every emotion, and most thoughts also stimulate emotion. Narvaez’s model is a complex system in which moral behavior (though influenced by social events) is determined almost completely by the structures of the brain.

Some researchers (Bargh and Chartrand 1999 ; Gigerenzer 2008 ; Lapsley and Narvaez 2008 ; Sunstein 2005 ) assert that intuition and its consequent behavior are constructed almost completely through environmental stimuli. Bargh and Chartrand ( 1999 ) assert that “most of a person’s everyday life is determined not by their conscious intentions and deliberate choices but by mental processes that are put into motion by features of the environment and that operate outside of conscious awareness and guidance” (p. 462). Our brains automatically perceive our environment, including the behavior of other people. These perceptions stimulate thoughts that lead to actions and eventually to patterns of behavior. This sequence is automatic; conscious choice plays no role in it (see, e.g., Bargh and Chartrand 1999 , p. 466).

Lapsley and Hill ( 2008 ) address Frimer and Walker’s original question of whether moral judgment is more deliberate or more automatic. They include Bargh and Chartrand ( 1999 ) in their list of intuitive models of moral behavior which they label ‘System 1’ models because they describe processing which is “associative, implicit, intuitive, experiential, automatic and tacit” as opposed to ‘System 2’ models where the mental processing is “rule based, explicit, analytical, ‘rational’, conscious and controlled” (p. 4). They categorize Haidt’s and Narvaez’s models as System 1 models because they are intuitive, experiential, and automatic.
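
To make the System 1/System 2 distinction concrete, the toy sketch below routes a judgment through a fast intuitive response when that response is strong, and falls back to slower deliberation otherwise. This is our own illustrative caricature of the dual-process idea, not a model proposed by any of the authors cited; the threshold, the scoring scale, and the example situations are arbitrary placeholders.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    description: str
    intuitive_reaction: float   # signed gut response: negative = aversion, positive = approval
    reasoned_evaluation: float  # slower, cost-benefit style judgment on the same scale

INTUITION_THRESHOLD = 0.6  # arbitrary cutoff for when the gut response settles the matter

def dual_process_judgment(s: Situation) -> str:
    """Toy caricature of a dual-process account: a strong intuition (System 1) settles
    the judgment immediately; a weak intuition defers to deliberation (System 2)."""
    if abs(s.intuitive_reaction) >= INTUITION_THRESHOLD:
        verdict, route = s.intuitive_reaction, "System 1 (intuition)"
    else:
        verdict, route = s.reasoned_evaluation, "System 2 (deliberation)"
    label = "acceptable" if verdict > 0 else "unacceptable"
    return f"{s.description}: {label} via {route}"

examples = [
    Situation("Padding an expense report", intuitive_reaction=-0.9, reasoned_evaluation=-0.4),
    Situation("Accepting a small client gift", intuitive_reaction=0.2, reasoned_evaluation=-0.3),
]
for s in examples:
    print(dual_process_judgment(s))
```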

Moral Heuristics Research

Gigerenzer (2008) believes that intuitions come from moral heuristics. Moral heuristics are rules developed through experience that help us make simple moral decisions and are transferable across settings. They are shortcuts that are easier and quicker to process than deliberative, conscious reasoning, and thus they present automatically. They are ‘fast and frugal’: fast in that they enable quick decision making, and frugal in that they require only a minimal search for information.

Heuristics are deeply context sensitive. The science of heuristics investigates which intuitive rules are readily available to people (Gigerenzer and Selten 2001 ). Gerd Gigerenzer is interested in the success or failure of these rules in different contexts. He rejects the notion of moral functioning as either rational or intuitive. Reasoning can be the source of heuristics, but the distinction that matters most is between unconscious and conscious reasoning. Unconscious reasoning causes intuition, and—as with Haidt’s theories mentioned earlier—conscious reasoning justifies moral judgments after they are made (Lapsley and Hill 2008 ). In general, Gigerenzer asserts that moral heuristics are accurate in negotiating everyday moral behavior.
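
As an illustration of what ‘fast and frugal’ can mean computationally, the sketch below implements a simple fast-and-frugal decision tree in the spirit of Gigerenzer’s heuristics program: cues are checked in a fixed order of assumed validity, and the first cue that fires decides, so the search for information stops early. The cue set, its ordering, and the example case are invented for this illustration.

```python
# Cues checked in descending (assumed) validity; the first cue that fires decides.
# Both the cues and their ordering are hypothetical, chosen only to illustrate the
# "fast and frugal" logic: quick decisions from a minimal information search.
CUES = [
    ("violates_law", "decline"),
    ("harms_customer", "decline"),
    ("violates_company_policy", "decline"),
    ("feels_like_common_practice", "accept"),
]

def fast_frugal_decision(case: dict) -> str:
    """Return the decision of the first cue present in the case, then stop searching."""
    for cue, decision in CUES:
        if case.get(cue):          # lexicographic stopping rule: first positive cue wins
            return decision
    return "deliberate"            # no cue fired: fall back to slower reasoning

example_case = {"harms_customer": False, "feels_like_common_practice": True}
print(fast_frugal_decision(example_case))  # -> "accept": only the last cue fires
```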

Sunstein’s ( 2005 ) model also claims that intuitions are generated by ‘moral heuristics.’ However, in contrast to Gigerenzer, he notes that heuristics can lead to moral errors or gaps between good judgment and appropriate behavior when these rules of thumb are undisciplined or decontextualized. This happens when we use heuristics as if they were universal truths or when we apply heuristics to situations that would be handled more appropriately with slower rational deliberation. Sunstein ( 2005 ) supports the view that evolution and social interaction cause the development of moral heuristics. Also, recent research by Lee et al. ( 2017 ) suggests an evolutionary account for male immorality, providing some support for the existence of an evolutionary origin and for the use of moral automaticity. To investigate the disagreement between Sunstein and Gigerenzer, we call on researchers to further examine the frequency, depth, and accuracy with which humans use moral heuristics.

Lapsley and Hill ( 2008 ) categorize the theories of Sunstein ( 2005 ) and Gigerenzer ( 2008 ) as System 1 models because the behavior they describe appears to be implicit, automatic, and intuitive. These models emphasize the automaticity of moral judgments that come from social situations. A person with a moral personality detects the moral implications of a situation and automatically makes a moral judgment (Lapsley and Hill 2009 ). For this kind of person, morality is deeply engrained into social habits.

Though Lapsley and Hill categorize heuristics models the same as Haidt’s, we observe that ‘intuition’ in the sense of heuristics means something very different to them than what it means to Haidt. In Haidt’s Social Intuitionist Model, learning structures developed through evolution are the source of automatic judgments. On the other hand, Sunstein’s ( 2005 ) intuitions come from ‘moral heuristics,’ which are quick moral rules of thumb that pop into our heads and can even be developed through reasoning. As researchers examine the roles of reasoning and intuition in moral decision making, they may consider breaking intuition into categories, such as intuitions that represent heuristics and intuitions that come from biological predispositions.

The models of moral functioning just described fall into the ‘intuitive’ category, though they are competing descriptions of how to meaningfully connect judgment and action. Frimer and Walker ( 2008 ) observe that on one hand, models based on deliberative reasoning may be the most explanatory in that they require individuals to engage in and be aware of their own moral processing; “The intuitive account, in contrast, requires a modicum of moral cognition but grants it permission to fly below the radar” (p. 339). In a way, it separates moral functioning from consciousness or the ‘self.’

Future Research Directions

The specialties of moral automaticity, moral schemas, and moral heuristics are interesting and promising areas for those interested in future research in ethical decision making. One reason is that these specialties are highly multidisciplinary: philosophers, psychologists, sociologists, anthropologists, neuroscientists, and others, in addition to business scholars, are working in this area. A second reason is that some of the most interesting future research questions in this multidisciplinary field are interdisciplinary. Many of the unanswered questions are complex and must be addressed from many different angles and with a variety of tools.

Consider just two research questions: (1) How does individual meaning-making actually take place if biological evolution is the primary driver and architect of both our personal moral choices and subsequent ethical interpretations? (2) What type of real accountability is possible if brains are programmed to make moral and/or immoral choices? These types of questions lie at the heart of what it means to be a human being, and these are just a few of the theoretical questions in moral automaticity research.

Future research directions in the empirical examination of moral automaticity are just as fascinating. For example, (1) Where, when, and why does the brain light up when ethical decisions are made and reflected upon? (2) Which areas of the brain fire first when confronted with a difficult ethical situation? (3) What is the sequence that the brain fires in when experiencing moral disengagement? (4) How plastic is the brain as it relates to rewiring and strengthening neural pathways that will lead to more prosocial behavior? (5) What are the predominant dispositional and situational factors that lead the brain to heal from moral injury? (6) How do various situations, social settings, and personality differences interact to activate automatic and deliberative processes?

In summary, we call for future research in moral automaticity, moral schemas, and moral heuristics to shed light on the roots of moral action. Given the research supporting the role of automaticity in moral processing, we caution against relying too heavily on models that emphasize the preeminence of rational thinking until research further examines this phenomenon. We also call for research examining the same subjects both in situations that require deliberative processing and in situations that are inherently intuitive. We suggest the use of fMRI studies to observe the activity of different sections of the brain—those associated with rational, cognitive processes and those associated with intuitive judgments—during unique situations.
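
As a sketch of the analysis such a design might involve, the code below compares a fabricated region-of-interest signal for the same participants under a deliberative and an intuitive condition using a paired t-test. Real fMRI pipelines are far more involved (preprocessing, whole-brain modeling, multiple-comparison correction), so this only conveys the within-subject contrast we are suggesting; the region and all values are hypothetical.

```python
import numpy as np
from scipy import stats

# Fabricated mean activation values (arbitrary units) for one region of interest,
# measured for the same ten participants in two conditions: a dilemma requiring
# slow deliberation and a scenario designed to evoke an immediate intuitive response.
deliberative = np.array([0.42, 0.38, 0.51, 0.47, 0.40, 0.44, 0.49, 0.36, 0.45, 0.41])
intuitive = np.array([0.30, 0.29, 0.35, 0.41, 0.28, 0.33, 0.37, 0.27, 0.34, 0.31])

# Paired t-test: each participant serves as his or her own control.
t_stat, p_value = stats.ttest_rel(deliberative, intuitive)

print(f"Mean activation difference: {np.mean(deliberative - intuitive):.3f}")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.4f}")
```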

Even in the earliest stages of moral philosophy, Socrates, Plato, and Aristotle noted that people do not always act on the rational understanding they possess (Kraut 2018 ). They used the term “akrasia” to describe the phenomenon in which a person knows what is right but fails to act on that knowledge. This is commonly called the moral judgment-action gap.

Lawrence Kohlberg’s work (Kohlberg 1969, 1971a, b) is widespread not only in research but also in business education today. His influential theory of cognitive moral development rests on the assumption that the ability to morally reason at a certain level is the primary core and driver of a person’s morality. Kohlberg’s work proposes that stages of moral development, which are defined at a universal level, are what is most fundamental. Although his ideas are important, recent research demonstrates that his theorizing is insufficient for understanding and predicting the moral judgment-action gap (Hannah et al. 2018; Sanders et al. 2018). This article has provided various examples of other research that has successfully moved beyond Kohlberg’s assumptions (Aquino and Reed 2002; Grant et al. 2018). For this reason, we encourage ethics scholars to reconsider an overreliance on rationality in their research in behavioral business ethics. In Fig. 2, we show the major theories and the relationships between them.

Many scholars have presented research that specifically addresses the judgment-action gap. For example, moral identity theory and virtue ethics explore how a person’s self-perception motivates moral behavior (Blasi 1983 ; Hardy and Carlo 2005 , 2011 ; Walker et al. 1995 ). However, more empirical evidence and better theories and models are needed that show how a person develops moral identity and moral character. Future studies can examine the ways in which moral identity leads to ethical decision making.

Moral domain theory suggests that the judgment-action gap is not a genuine gap but can instead be understood through additional domains of reasoning (e.g., self-interest, social interest) used to evaluate the moral implications of a given situation (Bergman 2002; Nucci 1997; Turiel 2003). What we do not fully understand is what causes a person to recognize moral implications in the first place and how individuals apply different domains in decision making. Given the conflicting research findings (e.g., Mazar et al. 2008; Verschuere et al. 2018), we call for more research that shows what stimuli can trigger a person to view a decision as a moral decision, as opposed to a decision in which social influences or personal preferences take precedence.

Some scholars oppose the idea that conscious reasoning governs most moral behavior. For example, Bargh (1989, 1990, 1996, 1997, 2005; Bargh and Ferguson 2000; Uleman and Bargh 1989) and Haidt (2001, 2007, 2009) have provided evidence that people make ethical decisions based on automatic intuitions. As Narvaez (2008b) has pointed out, however, we would be wrong to assume that all decisions are based solely on flashes of intuition. What we do not know is how factors such as situation, personality, and cultural background influence the relative and complementary roles of conscious reasoning and intuition in moral behavior. We call for research that investigates the influence of these factors on moral processing.

Even Haidt ( 2009 ) recognizes the existence of moral reasoning, though he claims that it occurs only to rationalize an intuitive decision after it has been made. Scholars who discuss the development and use of heuristics (Gigerenzer 2008 ; Sunstein 2005 ) show how past reasoning about moral situations—perhaps the kind of reasoning that Haidt refers to—can influence the development of behavioral rules of thumb. These rules, or “heuristics,” appear to function automatically after they have been developed through cognition over the course of a person’s experiences. What we do not understand is the extent to which heuristics are consistent with an individual’s conscious moral understanding. We call for research that explores the formation of heuristics and their reliability in making real-life ethical decisions that are consistent with a person’s moral understanding.

This article shows that different theories point us in different directions within the fields of moral psychology and ethical decision making. Thus, it is very difficult to form a holistic understanding of moral development and processing. With this in mind, our most urgent call is for scholars to develop a holistic framework of moral character development and a comprehensive theory of ethical decision making. These types of models and theories would serve as powerful tools to fuel future empirical research to help us understand why people do not always act on their moral understanding. More robust research is critical to understanding how to prevent devastating ethical failures and how to foster ethical courage.

For simplicity throughout this article, we also use “judgment-action gap.”

Akrasia relates to the moral judgment-moral action gap discussed throughout this article.

The individual considers laws valid and worthy of obedience insofar as they are grounded in justice.

“This principle asserts that moral reasoning is a conscious process of individual moral judgment using ordinary moral language (Kohlberg et al. 1983 ). The moral quality of behavior hinges on agent phenomenology; it depends solely on the subjective perspective, judgment and intention of the agent.” (Lapsley and Hill 2009 , p. 1)

Aquino, K., & Reed, A. I. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83 (6), 1423–1440.

Ariely, D. (2012). The (Honest) truth about dishonesty: How we lie to everyone-especially ourselves . London: HarperCollins.

Ash, M. G., & Woodward, W. R. (1987). Psychology in twentieth-century thought and society . New York: Cambridge University Press.

Bargh, J. A. (1989). Conditional automaticity: Varieties of automatic influence in social perception and cognition. In J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 3–51). New York: Guilford Press.

Bargh, J. A. (1990). Auto-motives: Preconscious determinants of thought and behavior. In E. T. Higgins & R. M. Sorrentino (Eds.), Handbook of motivation and cognition (Vol. 2, pp. 93–130). New York: Guilford Press.

Bargh, J. A. (1996). Principles of automaticity. In E. T. Higgins & A. Kruglanski (Eds.), Social psychology: Handbook of basic principles (pp. 169–183). New York: Guilford Press.

Bargh, J. A. (1997). The automaticity of everyday life. In R. S. Wyer Jr. (Ed.), The automaticity of everyday life, advances in social cognition (Vol. 10, pp. 1–61). Mahwah, NJ: Lawrence Erlbaum Associates.

Bargh, J. A. (2005). Bypassing the will: Toward demystifying the nonconscious control of social behavior. In R. R. Hassin, J. S. Uleman, & J. A. Bargh (Eds.), The new unconscious (pp. 37–60). Oxford: Oxford University Press.

Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54, 462–479.

Bargh, J. A., & Ferguson, M. J. (2000). Beyond behaviorism: On the automaticity of higher mental processes. Psychological Bulletin, 126, 925–945.

Bazerman, M. H., & Sezer, O. (2016). Bounded awareness: Implications for ethical decision making. Organizational Behavior and Human Decision Processes, 136, 95–105.

Bergman, R. (2002). Why be moral? A conceptual model from developmental psychology. Human Development, 45, 104–124.

Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the literature. Psychological Bulletin, 88 (1), 1–45.

Blasi, A. (1983). Moral cognition and moral action: A theoretical perspective. Developmental Review, 3 (2), 178–210.

Blasi, A. (1984). Moral identity: Its role in moral functioning. In W. M. Kurtines & J. L. Gewirtz (Eds.), Morality, moral behavior, and moral development (pp. 129–139). New York: Wiley.

Blasi, A. (1993). The development of identity: Some implications for moral functioning. In G. G. Noam, T. E. Wren, G. Nunner-Winkler, & W. Edelstein (Eds.), Studies in contemporary German social thought (pp. 99–122). Cambridge, MA: The MIT Press.

Blasi, A. (1995). Moral understanding and the moral personality: The process of moral integration. In W. M. Kurtines & J. L. Gewirtz (Eds.), Moral development (pp. 229–253). Boston, MA: Allyn.

Blasi, A. (2004). Moral functioning: Moral understanding and personality. In A. Blasi, D. K. Lapsley, & D. Narváez (Eds.), Moral development, self, and identity (pp. 335–348). Mahwah, NJ: Lawrence Erlbaum Associates.

Blasi, A. (2009). The moral functioning of mature adults and the possibility of fair moral reasoning. In D. Narvaez & D. K. Lapsley (Eds.), Personality, identity, and character (pp. 396–440). New York: Cambridge University Press.

Cervone, D., & Tripathi, R. (2009). The moral functioning of the person as a whole: On moral psychology and personality science. In D. Narvaez & D. K. Lapsley (Eds.), Personality, identity and character, explorations in moral psychology (pp. 30–51). New York: Cambridge University Press.

Chelliah, J., & Swamy, Y. (2018). Deception and lies in business strategy. Journal of Business Strategy, 39 (6), 36–42.

Colby, A., & Damon, W. (1992). Some do care: Contemporary lives of moral commitment . New York: Free Press.

Colby, A., & Damon, W. (1993). The uniting of self and morality in the development of extraordinary moral commitment. In G. G. Noam & T. E. Wren (Eds.), The moral self (pp. 149–174). Cambridge, MA: The MIT Press.

Craft, J. L. (2013). A review of the empirical ethical decision-making literature: 2004-2011. Journal of Business Ethics, 117 (2), 221–259.

Craig, S. B., & Gustafson, S. B. (1998). Perceived leader integrity scale: An instrument for assessing employee perceptions of leader integrity. Leadership Quarterly, 9 (2), 127–145.

Curzer, H. J. (2014). Tweaking the four-component model. Journal of Moral Education, 43 (1), 104–123.

De Los Reyes Jr, G., Kim, T. W., & Weaver, G. R. (2017). Teaching ethics in business schools: A conversation on disciplinary differences, academic provincialism, and the case for integrated pedagogy. Academy of Management Learning and Education, 16 (2), 314–336.

Desai, S. D., & Kouchaki, M. (2017). Moral symbols: A necklace of garlic against unethical requests. Academy of Management Journal, 60 (1), 7–28.

Eisenberg, N. (1986). Altruistic emotion, cognition, and behavior . Hillsdale, NJ: Lawrence Erlbaum Associates.

Ellertson, C. F., Ingerson, M., & Williams, R. N. (2016). Behavioral ethics: A critique and a proposal. Journal of Business Ethics, 138 (1), 145–159.

Floyd, L. A., Xu, F., Atkins, R., & Caldwell, C. (2013). Ethical outcomes and business ethics: Toward improving business ethics education. Journal of Business Ethics, 117 (4), 753–776.

Frimer, J. A., & Walker, L. J. (2008). Towards a new paradigm of moral personhood. Journal of Moral Education, 37 (3), 333–356.

Gallagher, J. M. (1978). Knowledge and development Piaget and education . New York: Plenum Publishing Corporation.

Gaspar, J. P., Seabright, M. A., Reynolds, S. J., & Yam, K. C. (2015). Counterfactual and factual reflection: The influence of past misdeeds on future immoral behavior. The Journal of Social Psychology, 155 (4), 370–380.

Giammarco, E. A. (2016). The measurement of individual differences in morality. Personality and Individual Differences, 88, 26–34.

Gigerenzer, G. (2008). Moral intuitions = fast and frugal heuristics? In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2: The cognitive science of morality: Intuition and diversity (pp. 1–26). Cambridge, MA: The MIT Press.

Gigerenzer, G., & Selten, R. (Eds.). (2001). Bounded rationality: The adaptive toolbox . Cambridge, MA: The MIT Press.

Gino, F., Krupka, E. L., & Weber, R. A. (2013). License to cheat: Voluntary regulation and ethical behavior. Management Science, 59 (10), 2187–2203.

Grant, P., Arjoon, S., & McGhee, P. (2018). In pursuit of eudaimonia: How virtue ethics captures the self-understandings and roles of corporate directors. Journal of Business Ethics, 153 (2), 389–406.

Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834.

Haidt, J. (2007). The new synthesis in moral psychology. Science, 316 (5827), 998–1002.

Haidt, J. (2009). Moral Psychology and the Misunderstanding of Religion. In J. Schloss & M. Murray (Eds.), The believing primate: Scientific, philosophical, and theological reflections on the origin of religion (pp. 278–291). New York: Oxford University Press.

Haidt, J., & Bjorklund, F. (2007). Social intuitionists answer six questions about morality. In W. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2: The cognitive science of morality: Intuition and diversity (pp. 181–217). Cambridge, MA: The MIT Press.

Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133 (44), 55–66.

Hannah, S. T., Avolio, B. J., & May, D. R. (2011). Moral maturation and moral conation: A capacity approach to explaining moral thought and action. Academy of Management Review, 36 (4), 663–685.

Hannah, S. T., Thompson, R. L., & Herbst, K. C. (2018). Moral identity complexity: Situated morality within and across work and social roles. Journal of Management . https://doi.org/10.1177/0149206318814166 .

Hardy, S. A., & Carlo, G. (2005). Identity as a source of moral motivation. Human Development, 48, 232–256.

Hardy, S. A., & Carlo, G. (2011). Moral identity: Where identity formation and moral development converge. In S. J. Schwartz, K. Luyckx, & V. L. Vignoles (Eds.), Handbook of identity theory and research (pp. 495–513). New York: Springer.

Hart, D. (2005). The development of moral identity. In G. Carlo & C. P. Edwards (Eds.), Nebraska Symposium on Motivation, Vol. 51: Moral motivation through the life span (pp. 165–196). Lincoln, NE: University of Nebraska Press.

Hart, D., & Fegley, S. (1995). Prosocial behavior and caring in adolescence: Relations to self-understanding and social judgment. Child Development, 66 (5), 1346–1359.

Hoffman, M. L. (1970). Moral Development. In P. Mussen (Ed.), Handbook of child psychology (pp. 261–361). New York: Wiley.

Hoffman, M. L. (1981). Is altruism part of human nature? Journal of Personality and Social Psychology, 40 (1), 121.

Hoffman, M. L. (1982). Development of prosocial motivation: Empathy and guilt. In N. Eisenberg (Ed.), The development of prosocial behavior (pp. 281–313). New York: Academic Press.

Hoffman, T. (2016). Contemporary neuropsychological and behavioural insights into cheating: Lessons for the workplace and application to consulting. Organisational and Social Dynamics, 16 (1), 39–54,175.

Hume, D. (1739/2001). A treatise of human nature (D. F. Norton & M. J. Norton, Eds.). Oxford: Oxford University Press.

Hume, D. (1777/1960). An enquiry concerning the principles of morals. La Salle, IL: Open Court.

Jayawickreme, E., Meindl, P., Helzer, E. G., Furr, R. M., & Fleeson, W. (2014). Virtuous states and virtuous traits: How the empirical evidence regarding the existence of broad traits saves virtue ethics from the situationist critique. School Field, 12 (3), 283–308.

Jewe, R. D. (2008). Do business ethics courses work? The effectiveness of business ethics education: An empirical study. Journal of Global Business Issues, 2, 1–6.

Johnson, R. (2008). Kant’s moral philosophy. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy . Retrieved from http://plato.stanford.edu/archives/fall2008/entries/kant-moral/ .

Kant, I. (1785/1993). Grounding for the metaphysics of morals (3rd edn.) (J. W. Ellington, Trans.). Indianapolis, IN: Hackett Publishing.

Kilduff, G. J., Galinsky, A. D., Gallo, E., & Reade, J. J. (2016). Whatever it takes to win: Rivalry increases unethical behavior. Academy of Management Journal, 59 (5), 1508–1534.

Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347–480). Chicago: Rand McNally.

Kohlberg, L. (1971a). Stages of moral development as a basis for moral education. In C. M. Beck, B. S. Crittenden, & E. V. Sullivan (Eds.), Moral education: Interdisciplinary approaches (pp. 23–92). Toronto: University of Toronto Press.

Kohlberg, L. (1971b). From is to ought: How to commit the naturalistic fallacy and get away with it in the study of moral development. In T. Mischel (Ed.), Psychology and genetic epistemology (pp. 151–235). New York: Academic Press.

Kohlberg, L. (1981). Essays on moral development, Vol. 1: The philosophy of moral development. San Francisco: Harper and Row.

Kohlberg, L. (1984). Essays on moral development, Vol 2: The psychology of moral development . San Francisco, CA: Harper and Row.

Kohlberg, L., & Candee, D. (1984). The relationship of moral judgment to moral action. In L. Kohlberg (Ed.), Essays on moral development, Vol. 2: The psychology of moral development (pp. 498–581). New York: Harper and Row.

Kohlberg, L., & Hersh, R. (1977). Moral development: A review of the theory. Theory into Practice, 16 (2), 53–59.

Kohlberg, L., Levine, C., & Hewer, A. (1983). Moral stages: A current formulation and a response to critics. Basel: Karger.

Kraut, R. (2018). Aristotle’s ethics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from https://plato.stanford.edu/cgi-bin/encyclopedia/archinfo.cgi?entry=aristotle-ethics

Lane, M. (2017). Ancient political philosophy. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy . Retrieved from https://plato.stanford.edu/archives/sum2017/entries/ancient-political/ .

Lapsley, D. K., & Hill, P. L. (2008). On dual processing and heuristic approaches to moral cognition. Journal of Moral Education, 37 (3), 313–332.

Lapsley, D. K., & Hill, P. L. (2009). The development of the moral personality. In D. Narvaez & D. K. Lapsley (Eds.), Moral personality, identity and character: An integrative future (pp. 185–213). New York: Cambridge University Press.

Lapsley, D. K., & Narvaez, D. (2004). A social-cognitive approach to the moral personality. In D. K. Lapsley & D. Narvaez (Eds.), Moral development, self and identity (pp. 189–212). Mahwah, NJ: Erlbaum.

Lapsley, D. K., & Narvaez, D. (2008). Psychologized morality and ethical theory, or, do good fences make good neighbors? In F. Oser & W. Veugelers (Eds.), Getting involved: Global citizenship development and sources of moral values (pp. 279–291). Rotterdam: Sense Publishers.

Lee, M., Pitesa, M., Pillutla, M. M., & Thau, S. (2017). Male immorality: An evolutionary account of sex differences in unethical negotiation behavior. Academy of Management Journal, 60 (5), 2014–2044.

Levine, C., Kohlberg, L., & Hewer, A. (1985). The current formulation of Kohlberg’s theory and a response to critics. Human Development, 28 (2), 94–100.

MacLean, P. D. (1990). The triune brain in evolution: Role in paleocerebral functions . New York: Plenum Press.

Martin, F. (2011). Human development and the pursuit of the common good: Social psychology or aristotelian virtue ethics? Journal of Business Ethics, 100 (1), 89–98.

Martin, K., & Parmar, B. (2012). Assumptions in decision making scholarship: Implications for business ethics research. Journal of Business Ethics, 105 (3), 289–306.

Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45 (6), 633–644.

Merle, R. (2018, April 19). U.S. to fine Wells Fargo $1 billion—The most aggressive bank penalty of the Trump era. The Washington Post . Retrieved from https://www.washingtonpost.com

Monin, B., Pizarro, D., & Beer, J. (2007). Deciding vs. reacting: Conceptions of moral judgment and the reason-affect debate. Review of General Psychology, 11, 99–111.

Narvaez, D. (2008a). The social-intuitionist model: Some counter-intuitions. In W. A. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2: The cognitive science of morality: Intuition and diversity (pp. 233–240). Cambridge, MA: The MIT Press.

Narvaez, D. (2008b). Triune ethics: The neurobiological roots of our multiple moralities. New Ideas in Psychology, 26, 95–119.

Narvaez, D., & Bock, T. (2002). Moral schemas and tacit judgment or how the defining issues test is supported by cognitive science. Journal of Moral Education, 31 (3), 297–314.

Narvaez, D., & Lapsley, D. K. (2005). The psychological foundations of everyday morality and moral expertise. In D. Lapsley & C. Power (Eds.), Character psychology and character education (pp. 140–165). Notre Dame, IN: University of Notre Dame Press.

Narvaez, D., Lapsley, D. K., Hagele, S., & Lasky, B. (2006). Moral chronicity and social information processing: Tests of a social cognitive approach to the moral personality. Journal of Research in Personality, 40, 966–985.

Nucci, L. (1997). Moral development and character education. In H. J. Walberg & G. D. Haertel (Eds.), Psychology and educational practice (pp. 127–157). Berkeley, CA: MacCarchan.

O’Fallon, M. J., & Butterfield, K. D. (2005). A review of the empirical ethical decision-making literature: 1996-2003. Journal of Business Ethics, 59 (4), 375–413.

Paik, Y., Lee, J. M., & Pak, Y. S. (2017). Convergence in international business ethics? A comparative study of ethical philosophies, thinking style, and ethical decision-making between US and Korean managers. Journal of Business Ethics 1–17.

Painter-Morland, M., & Werhane, P. (2008). Cutting-edge issues in business ethics: Continental challenges to tradition and practice . Englewood Cliffs, NJ: Springer.

Parry, R. (2014). Ancient ethical theory. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Retrieved from https://plato.stanford.edu/archives/fall2014/entries/ethics-ancient/

Peck, R. F., Havighurst, R. J., Cooper, R., Lilienthal, J., & More, D. (1960). The psychology of character development . New York: Wiley.

Piaget, J. (1977). The moral judgment of the child (M. Gabain, Trans.). Harmondsworth: Penguin. (Original work published in 1932)

Reimer, K., & Wade-Stein, D. (2004). Moral identity in adolescence: Self and other in semantic space. Identity, 4, 229–249.

Rest, J. R. (1979). Development in judging moral issues . Minneapolis, MN: University of Minnesota Press.

Rest, J. R. (1983). Morality. In P. Mussen, J. Flavell, & E. Markman (Eds.), Handbook of child psychology: Cognitive development (Vol. 3, pp. 556–629). New York: Wiley.

Rest, J. R. (1984). The major components of morality. In W. M. Kurtines & J. L. Gewirtz (Eds.), Morality, moral behavior, and moral development (pp. 24–38). New York: John Wiley and Sons.

Rest, J. R. (1999). Postconventional moral thinking: A Neo-Kohlbergian approach . Mahwah, New Jersey: Lawrence Erlbaum Associates.

Rest, J. R., Narvaez, D., Thoma, S. J., & Bebeau, M. J. (2000). A Neo-Kohlbergian approach to morality research. Journal of Moral Education, 29 (4), 381–395.

Reynolds, S. J., Leavitt, K., & DeCelles, K. A. (2010). Automatic ethics: The effects of implicit assumptions and contextual cues on moral behavior. Journal of Applied Psychology, 95 (4), 752–760.

Sanders, S., Wisse, B., Van Yperen, N. W., & Rus, D. (2018). On ethically solvent leaders: The roles of pride and moral identity in predicting leader ethical behavior. Journal of Business Ethics, 150 (3), 631–645.

Sobral, F., & Islam, G. (2013). Ethically questionable negotiating: The interactive effects of trust, competitiveness, and situation favorability on ethical decision making. Journal of Business Ethics, 117 (2), 281–296.

Stace, W. T. (1937). The concept of morals . New York: The MacMillan Company.

Stumpf, S. E., & Fieser, J. (2003). Socrates to Sartre and beyond: A history of philosophy (7th ed.). New York: McGraw-Hill.

Sunstein, C. R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28 (4), 531–573.

Thoma, S. (1994). Moral judgments and moral action. In J. R. Rest & D. Narvaez (Eds.), Moral development in the professions: Psychology and applied ethics (pp. 199–212). Mahwah, NJ: Lawrence Erlbaum Associates.

Thoma, S. J., Derryberry, P., & Narvaez, D. (2009). The distinction between moral judgment development and verbal ability: Some relevant data using socio-political outcome variables. High Ability Studies, 20 (2), 173–185.

Treviño, L. K. (1986). Ethical decision making in organizations: A person-situation interactionist model. Academy of Management Review, 11 (3), 601–617.

Treviño, L. K., den Nieuwenboer, N. A., & Kish-Gephart, J. (2014). (Un)ethical behavior in organizations. Annual Review of Psychology, 65, 635.

Treviño, L. K., Weaver, G. R., & Reynolds, S. J. (2006). Behavioral ethics in organizations: A review. Journal of Management, 32 (6), 951–990.

Turiel, E. (1983). The development of social knowledge: Morality and convention . Cambridge: Cambridge University Press.

Turiel, E. (1998). The development of morality. In N. Eisenberg (Ed.), Handbook of child psychology, Vol. 3: Social, emotional and personality development (pp. 863–932). New York: Wiley.

Turiel, E. (2003). Resistance and subversion in everyday life. Journal of Moral Education, 32 (2), 115–130.

Uleman, J. S., & Bargh, J. A. (Eds.). (1989). Unintended thought . New York: Guilford Press.

Verschuere, B., Meijer, E. H., Jim, A., Hoogesteyn, K., Orthey, R., McCarthy, R. J., et al. (2018). Registered replication report on Mazar, Amir, and Ariely (2008). Advances in Methods and Practices in Psychological Science, 1 (3), 299–317.

Walker, L. J. (2004). Gus in the gap: Bridging the judgment-action gap in moral functioning. In D. K. Lapsley & D. Narvaez (Eds.), Moral development, self, and identity (pp. 1–20). Mahwah: Lawrence Erlbaum Associates.

Walker, L. J., & Hennig, K. H. (1997). Moral development in the broader context of personality. In S. Hala (Ed.), The development of social cognition (pp. 297–327). East Sussex: Psychology Press.

Walker, L. J., Pitts, R. C., Hennig, K. H., & Matsuba, M. K. (1995). Reasoning about morality and real-life moral problems. In M. Killen & D. Hart (Eds.), Morality in everyday life: Developmental perspectives (pp. 371–407). New York: Cambridge University Press.

Wang, L. C., & Calvano, L. (2015). Is business ethics education effective? An analysis of gender, personal ethical perspectives, and moral judgment. Journal of Business Ethics, 126 (4), 591–602.

Weber, J. (2017). Understanding the millennials’ integrated ethical decision-making process: Assessing the relationship between personal values and cognitive moral reasoning. Business and Society. Retrieved from http://journals.sagepub.com/doi/10.1177/0007650317726985

Whetten, D. A. (1989). What constitutes a theoretical contribution? Academy of Management Review, 14 (4), 490–495.

Williams, R. N., & Gantt, E. E. (2012). Felt moral obligation and the moral judgement–moral action gap: Toward a phenomenology of moral life. Journal of Moral Education, 41 (4), 417–435.

Wright, A. L., Zammuto, R. F., & Liesch, P. W. (2017). Maintaining the values of a profession: Institutional work and moral emotions in the emergency department. Academy of Management Journal, 60 (1), 200–237.

Zhong, C. (2011). The ethical dangers of deliberative decision-making. Administrative Science Quarterly, 56 (1), 1–25.


Acknowledgements

We are grateful to Richard N. Williams, Terrance D. Olson, Edwin E. Gantt, Sam A. Hardy, Daniel K. Judd, John Bingham, Sara Louise Muhr, and three anonymous reviewers for their detailed comments on earlier drafts of this paper.

This study did not have any funding associated with it.

Author information

Authors and Affiliations

Marriott School of Business, Brigham Young University, 590 TNRB, Provo, UT, 84602, USA

Kristen Bell DeTienne & William R. Dudley

Romney Institute of Public Management, Brigham Young University, 760 TNRB, Provo, UT, 84602, USA

Carol Frogley Ellertson

The Wheatley Institution, Brigham Young University, 392 Hinckley Center, Provo, UT, 84602, USA

Marc-Charles Ingerson


Corresponding author

Correspondence to Kristen Bell DeTienne.

Ethics declarations

Conflict of Interest

All authors declare that they have no conflict of interest.

Ethical Approval

This paper does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

DeTienne, K. B., Ellertson, C. F., Ingerson, M.-C., et al. Moral Development in Business Ethics: An Examination and Critique. J Bus Ethics 170, 429–448 (2021). https://doi.org/10.1007/s10551-019-04351-0


Received: 12 November 2018

Accepted: 04 November 2019

Published: 18 November 2019

Issue Date: May 2021

DOI: https://doi.org/10.1007/s10551-019-04351-0


Keywords

  • Behavioral ethics
  • Moral judgment-moral action gap
  • Cognitive moral development

Moral Development

Influences on Moral Development

As with most aspects of development, the factors that influence moral development are multifaceted. Moral development is strongly shaped by interpersonal factors such as family, peers, and culture, and by intrapersonal factors such as cognitive changes, emotions, and even neurodevelopment.

Interpersonal Influences

Children’s interactions with caregivers and peers have been shown to influence their development of moral understanding and behavior. Researchers have addressed the influence of interpersonal interactions on children’s moral development from two primary perspectives: socialization/internalization (Grusec & Goodnow, 1994; Kochanska & Aksan, 1995; Kochanska, Aksan, & Koenig, 1995) and social domain theory (Turiel, 1983; Smetana, 2006). Research from the social domain theory perspective focuses on how children actively distinguish moral from conventional behavior based in part on the responses of parents, teachers, and peers (Smetana, 1997). Adults tend to respond to children’s moral transgressions (e.g., hitting or stealing) by drawing the child’s attention to the effect of his or her action on others, and they do so consistently across various contexts.

In contrast, adults are more likely to respond to children’s conventional misdeeds (e.g., wearing a hat in the classroom, eating spaghetti with one’s fingers) by reminding children about specific rules, and they do so only in certain contexts (e.g., at school but not at home) (Smetana, 1984, 1985). Peers respond mainly to moral rather than conventional transgressions and show emotional distress (e.g., crying or yelling) when they are the victims of moral, but not conventional, transgressions (Smetana, 1984). Children then use these different cues to help determine whether behaviors are morally or conventionally wrong.

Research from a socialization/internalization perspective focuses on how adults pass down standards of behavior to children through parenting techniques and why children do or do not internalize those values (Grusec & Goodnow, 1994; Kochanska & Aksan, 1995). From this perspective, moral development involves children’s increasing compliance with and internalization of adult rules, requests, and standards of behavior. Using these definitions, researchers find that parenting behaviors vary in the extent to which they encourage children’s internalization of values and that these effects depend partially on child attributes, such as age and temperament (Grusec & Goodnow, 1994). For instance, Kochanska (1997) showed that gentle parental discipline best promotes conscience development in temperamentally fearful children, whereas parental responsiveness and a mutually responsive parent-child orientation best promote conscience development in temperamentally fearless children. These parental influences exert their effects through multiple pathways, including increasing children’s experience of moral emotions (e.g., guilt, empathy) and their self-identification as moral individuals (Kochanska, 2010).

Moral Development in the Family

In the formation of children’s morals, no outside influence is greater than that of the family. Through punishment, reinforcement, and both direct and indirect teaching, families instill morals in children and help them to develop beliefs that reflect the values of their culture. Although families’ contributions to children’s moral development are broad, there are particular ways in which morals are most effectively conveyed and learned.

Families establish rules for right and wrong behavior, which are maintained through positive reinforcement and punishment. Positive reinforcement is the reward for good behavior and helps children learn that certain actions are encouraged above others. Punishment, by contrast, helps to deter children from engaging in bad behaviors, and from an early age helps children to understand that actions have consequences. This system additionally helps children to make decisions about how to act, as they begin to consider the outcomes of their behavior.

The notion of what is fair is one of the central moral lessons that children learn in the family context. Families set boundaries on the distribution of resources, such as food and living spaces, and allow members different privileges based on age, gender, and employment. The way in which a family determines what is fair affects children’s development of ideas about rights and entitlements, and also influences their notions of sharing, reciprocity, and respect.

Personal Balance

Through understanding principles of fairness, justice, and social responsibilities, children learn to find a balance between their own needs and wants and the interests of the greater social environment. By placing limits on their desires, children benefit from a greater sense of love, security, and shared identity. At the same time, this connectedness helps children to refine their own moral system by providing them with a reference for understanding right and wrong.

Social Roles

In the family environment, children come to consider their actions not only in terms of justice but also in terms of emotional needs. Children learn the value of social support from their families and develop motivations based on kindness, generosity, and empathy, rather than on only personal needs and desires. By learning to care for the interests and well-being of their family, children develop concern for society as a whole.

Morality and Culture

The role of culture in moral development is an important topic that raises fundamental questions about what is universal and what is culturally specific in morality and moral development. Many research traditions have examined this question. Social-cognitive and structural-developmental positions, drawing on moral philosophy, theorize that morality has a universal component: if morality exists, it concerns values that generalize across groups and cultures. Relativistic cultural positions, by contrast, have been put forth mostly by socialization theories, which focus on how cultures transmit values rather than on which values apply across groups and individuals.

As an example of some of the debates, Shweder, Mahapatra, and Miller (1987) argued for moral relativism, the notion that different cultures define the boundaries of morality differently. In contrast, Turiel and Perkins (2004) argued for the universality of morality, focusing largely on evidence throughout history of resistance movements that fight for justice through the affirmation of individual self-determination rights. In an update on the debate between moral relativism and moral universality, Miller (2006) provides a thoughtful review of the cultural variability of moral priorities. Miller argues that rather than variability in what individuals consider moral (fairness, justice, rights), there is cultural variability in the priority given to moral considerations (e.g., the importance of prosocial helping). Wainryb (2006), in contrast, reviews extensive literature demonstrating that children in cultures as diverse as the U.S., India, China, Turkey, and Brazil share a pervasive view about upholding fairness and the wrongfulness of inflicting harm on others. Cultures vary in their conventions and customs, but not in principles of fairness, which appear to emerge very early in development, before socialization influences. Wainryb (1991, 1993) shows that many apparent cultural differences in moral judgments are actually due to different informational assumptions, or beliefs about the way the world works. When people hold different beliefs about the effects of actions or the status of different groups of people, their judgments about the harmfulness or fairness of behaviors often differ, even when they are applying the same moral principles.

Another powerful socializing mechanism by which values are transmitted is religion, which for many is inextricably linked to cultural identity. Nucci and Turiel (1993) assessed individuals’ reactions to dictates from God, distinguishing reactions to God’s moral dictates (e.g., prohibitions on stealing) from reactions to God’s conventional dictates (e.g., the day of worship). One explicit manner in which societies can socialize individuals is through moral education. Solomon and colleagues (1988) present evidence from a study that integrated direct instruction with guided reflection approaches to moral development, finding increases in spontaneous prosocial behavior as a result. Finally, studies of moral development and cultural issues cover many subtopics. For instance, a recent review of studies examining social exclusion identifies cultural similarities in the evaluation of exclusion across a range of societies and cultures (Hitti, Mulvey, & Killen, 2011).

Intrapersonal Influences

Moral questions tend to be emotionally charged issues that evoke strong affective responses. Consequently, emotions likely play an important role in moral development. However, there is currently little consensus among theorists on how emotions influence moral development. Psychoanalytic theory, founded by Freud, emphasizes the role of guilt in repressing primal drives. Research on prosocial behavior has focused on how emotions motivate individuals to engage in moral or altruistic acts. Social-cognitive development theories have recently begun to examine how emotions influence moral judgments. Intuitionist theorists assert that moral judgments can be reduced to immediate, instinctive emotional responses elicited by moral dilemmas.

Research on socioemotional development and prosocial development has identified several “moral emotions” that are believed to motivate moral behavior and influence moral development (see Eisenberg, 2000, for a review). The emotions most consistently linked with moral development are guilt, shame, empathy, and sympathy. Guilt has been defined as “an agitation-based emotion or painful feeling of regret that is aroused when the actor causes, anticipates causing, or is associated with an aversive event” (Ferguson & Stegge, 1998). Shame is often used synonymously with guilt but implies a more passive and dejected response to a perceived wrong. Guilt and shame are considered “self-conscious” emotions because they are of primary importance to an individual’s self-evaluation.

In contrast to guilt and shame, empathy and sympathy are considered other-oriented moral emotions. Empathy is commonly defined as an affective response produced by the apprehension or comprehension of another’s emotional state, which mirrors the other’s affective state. Similarly, sympathy is defined as an emotional response produced by the apprehension or comprehension of another’s emotional state, which does not mirror the other’s affect but instead causes one to express concern or sorrow for the other (Eisenberg, 2000).

The relation between moral action and moral emotions has been extensively researched. Very young children have been found to express care and empathy toward others, showing concern for others’ well-being (Eisenberg, Spinrad, & Sadovsky, 2006). Research has consistently demonstrated that when empathy is induced in an individual, he or she is more likely to engage in subsequent prosocial behavior (Batson, 1998; see Eisenberg, 2000, for a review). Other research has examined how shame and guilt relate to children’s empathic and prosocial behavior (Zahn-Waxler & Robinson, 1995).

While emotions serve as information for children as they interpret the moral consequences of acts, the role of emotions in children’s moral judgments has only recently been investigated. Some approaches to studying emotions in moral judgments start from the perspective that emotions are automatic intuitions that define morality (Greene, 2001; Haidt, 2001). Other approaches emphasize the role of emotions as evaluative feedback that helps children interpret acts and consequences (Turiel & Killen, 2010). Research has shown that children attribute different emotional outcomes to actors involved in moral transgressions than to those involved in conventional transgressions (Arsenio, 1988; Arsenio & Fleiss, 1996). Emotions may help individuals prioritize among different pieces of information and possibilities, reducing information-processing demands and narrowing the scope of the reasoning process (Lemerise & Arsenio, 2000). In addition, Malti, Gummerum, Keller, and Buchmann (2009) found individual differences in how children attribute emotions to victims and victimizers.

Moral Psychology and Human Agency: Philosophical Essays on the Science of Ethics


This book examines the moral and philosophical implications of developments in the science of ethics, the growing movement that seeks to use recent empirical findings to answer long-standing ethical questions. Efforts to make moral psychology a thoroughly empirical discipline have divided philosophers along methodological fault lines, isolating discussions that will profit more from intellectual exchange. This volume takes an even-handed approach, including chapters from advocates of empirical ethics as well as those who are skeptical of some of its central claims. Some of these chapters make novel use of empirical findings to develop philosophical research programs regarding such crucial moral phenomena as desire, emotion, and memory. Others bring new critical scrutiny to bear on some of the most influential proposals of the empirical ethics movement, including the claim that evolution undermines moral realism, the effort to recruit a dual-process model of the mind to support consequentialism against other moral theories, and the claim that ordinary evaluative judgments are seldom if ever sensitive to reasons, because moral reasoning is merely the post hoc rationalization of unthinking emotional response.



