McCombs School of Business


Case Study

Apple Suppliers & Labor Practices

Is tech company Apple, Inc. ethically obligated to oversee the questionable working conditions of other companies further down their supply chain?


With its highly coveted line of consumer electronics, Apple has a cult following among loyal consumers. During the 2014 holiday season, 74.5 million iPhones were sold. Demand like this meant that Apple was on track to make over $52 billion in profits in 2015, the largest annual profit ever generated from a company’s operations. Despite its consistent financial performance year over year, Apple’s robust profit margin hides a more complicated set of business ethics. Like many companies selling products in the U.S., Apple does not manufacture most of its goods domestically. Most of the component sourcing and factory production is done overseas, in conditions that critics have argued are dangerous to workers and harmful to the environment.

For example, tin is a major component in Apple’s products and much of it is sourced in Indonesia. Although there are mines that source tin ethically, there are also many that do not. One study found workers—many of them children—working in unsafe conditions, digging tin out by hand in mines prone to landslides that could bury workers alive. About 70% of the tin used in electronic devices such as smartphones and tablets comes from these more dangerous, small-scale mines. An investigation by the BBC revealed how perilous these working conditions can be. In interviews with miners, a 12-year-old working at the bottom of a 70-foot cliff of sand said: “I worry about landslides. The earth slipping from up there to the bottom. It could happen.”

Apple defends its practices by saying it only has so much control over monitoring and regulating its component sources. The company justifies its sourcing practices by saying that it is a complex process, with tens of thousands of miners selling tin, many of them through middle-men. In a statement to the BBC, Apple said “the simplest course of action would be for Apple to unilaterally refuse any tin from Indonesian mines. That would be easy for us to do and would certainly shield us from criticism. But that would also be the lazy and cowardly path, since it would do nothing to improve the situation. We have chosen to stay engaged and attempt to drive changes on the ground.”

In an effort at greater transparency, Apple has released annual reports detailing its work with suppliers and their labor practices. While more recent investigations have shown some improvements in suppliers’ working conditions, Apple continues to face criticism as consumer demand for iPhones and other products grows.

Discussion Questions

1. Do you think Apple should be responsible for ethical lapses made by individuals further down its supply chain? Why or why not?

2. Should Apple continue to work with its suppliers in an effort to change their practices, or should it stop working with every supplier, even the conscientious ones, to make sure no “bad apples” get through? Explain your reasoning.

3. Do you think consumers should be expected to take into account the ethical track record of companies when making purchases? Why or why not?

4. Can you think of other products or brands that rely on ethically questionable business practices? Do you think consumers are turned off by their track record or are they largely indifferent to it? Explain.

5. Would knowing that a product was produced under ethically questionable conditions affect your decision to purchase it? Explain with examples.

6. If you were part of a third-party regulating body, how would you deal with ethically questionable business practices of multinational corporations like Apple? Would you feel obligated to do something, or do you think the solution rests with the companies themselves? Explain your reasoning.

Related Videos

Ethical Fading

Ethical fading occurs when we are so focused on other aspects of a decision that its ethical dimensions fade from view.

Bibliography

Apple ‘failing to protect Chinese factory workers’ http://www.bbc.com/news/business-30532463

How Apple could make a $53 billion profit this year http://money.cnn.com/2015/07/17/technology/apple-earnings-2015/

Global Apple iPhone sales from 3rd quarter 2007 to 2nd quarter 2016 (in million units) http://www.statista.com/statistics/263401/global-apple-iphone-sales-since-3rd-quarter-2007/

Despite successes, labor violations still haunt Apple http://www.theverge.com/2015/2/12/8024895/apple-slave-labor-working-conditions-2015

Reports – Supplier Responsibility – Apple https://www.apple.com/supplier-responsibility/progress-report/



  • 30 Apr 2024

When Managers Set Unrealistic Expectations, Employees Cut Ethical Corners

Corporate misconduct has grown in the past 30 years, with losses often totaling billions of dollars. What businesses may not realize is that misconduct often results from managers who set unrealistic expectations, leading decent people to take unethical shortcuts, says Lynn S. Paine.


  • 23 Apr 2024
  • Cold Call Podcast

Amazon in Seattle: The Role of Business in Causing and Solving a Housing Crisis

In 2020, Amazon partnered with a nonprofit called Mary’s Place and used some of its own resources to build a shelter for women and families experiencing homelessness on its campus in Seattle. Yet critics argued that Amazon’s apparent charity was misplaced and that the company was actually making the problem worse. Paul Healy and Debora Spar explore the role business plays in addressing unhoused communities in the case “Hitting Home: Amazon and Mary’s Place.”


  • 15 Apr 2024

Struggling With a Big Management Decision? Start by Asking What Really Matters

Leaders must face hard choices, from cutting a budget to adopting a strategy to grow. To make the right call, they should start by following their own “true moral compass,” says Joseph Badaracco.


  • 26 Mar 2024

How Do Great Leaders Overcome Adversity?

In the spring of 2021, Raymond Jefferson (MBA 2000) applied for a job in President Joseph Biden’s administration. Ten years earlier, false allegations were used to force him to resign from his prior US government position as assistant secretary of labor for veterans’ employment and training in the Department of Labor. Two employees had accused him of ethical violations in hiring and procurement decisions, including pressuring subordinates into extending contracts to his alleged personal associates. The Deputy Secretary of Labor gave Jefferson four hours to resign or be terminated. Jefferson filed a federal lawsuit against the US government to clear his name, which he pursued for eight years at the expense of his entire life savings. Why, after such a traumatic and debilitating experience, would Jefferson want to pursue a career in government again? Harvard Business School Senior Lecturer Anthony Mayo explores Jefferson’s personal and professional journey from upstate New York to West Point to the Obama administration, how he faced adversity at several junctures in his life, and how resilience and vulnerability shaped his leadership style in the case, "Raymond Jefferson: Trial by Fire."


  • 02 Jan 2024

Should Businesses Take a Stand on Societal Issues?

Should businesses take a stand for or against particular societal issues? And how should leaders determine when and how to engage on these sensitive matters? Harvard Business School Senior Lecturer Hubert Joly, who led the electronics retailer Best Buy for almost a decade, discusses examples of corporate leaders who had to determine whether and how to engage with humanitarian crises, geopolitical conflict, racial justice, climate change, and more in the case, “Deciding When to Engage on Societal Issues.”


  • 12 Dec 2023

Can Sustainability Drive Innovation at Ferrari?

When Ferrari, the Italian luxury sports car manufacturer, committed to achieving carbon neutrality and to electrifying a large part of its car fleet, investors and employees applauded the new strategy. But among the company’s suppliers, the reaction was mixed. Many were nervous about how this shift would affect their bottom lines. Professor Raffaella Sadun and Ferrari CEO Benedetto Vigna discuss how Ferrari collaborated with suppliers to work toward achieving the company’s goal. They also explore how sustainability can be a catalyst for innovation in the case, “Ferrari: Shifting to Carbon Neutrality.” This episode was recorded live December 4, 2023 in front of a remote studio audience in the Live Online Classroom at Harvard Business School.


  • 11 Dec 2023
  • Research & Ideas

Doing Well by Doing Good? One Industry’s Struggle to Balance Values and Profits

Few companies wrestle with their moral mission and financial goals like those in journalism. Research by Lakshmi Ramarajan explores how a disrupted industry upholds its values even as the bottom line is at stake.


  • 27 Nov 2023

Voting Democrat or Republican? The Critical Childhood Influence That's Tough to Shake

Candidates might fixate on red, blue, or swing states, but the neighborhoods where voters spend their teen years play a key role in shaping their political outlook, says research by Vincent Pons. What do the findings mean for the upcoming US elections?


  • 21 Nov 2023

The Beauty Industry: Products for a Healthy Glow or a Compact for Harm?

Many cosmetics and skincare companies present an image of social consciousness and transformative potential, while profiting from insecurity and excluding broad swaths of people. Geoffrey Jones examines the unsightly reality of the beauty industry.


  • 09 Nov 2023

What Will It Take to Confront the Invisible Mental Health Crisis in Business?

The pressure to do more, to be more, is fueling its own silent epidemic. Lauren Cohen discusses the common misperceptions that get in the way of supporting employees' well-being, drawing on case studies about people who have been deeply affected by mental illness.


  • 07 Nov 2023

How Should Meta Be Governed for the Good of Society?

Julie Owono is executive director of Internet Sans Frontières and a member of the Oversight Board, an outside entity with the authority to make binding decisions on tricky moderation questions for Meta’s companies, including Facebook and Instagram. Harvard Business School visiting professor Jesse Shapiro and Owono break down how the Board governs Meta’s social and political power to ensure that it’s used responsibly, and discuss the Board’s impact, as an alternative to government regulation, in the case, “Independent Governance of Meta’s Social Spaces: The Oversight Board.”


  • 24 Oct 2023

From P.T. Barnum to Mary Kay: Lessons From 5 Leaders Who Changed the World

What do Steve Jobs and Sarah Breedlove have in common? Through a series of case studies, Robert Simons explores the unique qualities of visionary leaders and what today's managers can learn from their journeys.


  • 03 Oct 2023
  • Research Event

Build the Life You Want: Arthur Brooks and Oprah Winfrey Share Happiness Tips

"Happiness is not a destination. It's a direction." In this video, Arthur C. Brooks and Oprah Winfrey reflect on mistakes, emotions, and contentment, sharing lessons from their new book.


  • 12 Sep 2023

Successful, But Still Feel Empty? A Happiness Scholar and Oprah Have Advice for You

So many executives spend decades reaching the pinnacles of their careers only to find themselves unfulfilled at the top. In the book Build the Life You Want, Arthur Brooks and Oprah Winfrey offer high achievers a guide to becoming better leaders—of their lives.


  • 10 Jul 2023
  • In Practice

The Harvard Business School Faculty Summer Reader 2023

Need a book recommendation for your summer vacation? HBS faculty members share their reading lists, which include titles that explore spirituality, design, suspense, and more.


  • 01 Jun 2023

A Nike Executive Hid His Criminal Past to Turn His Life Around. What If He Didn't Have To?

Larry Miller committed murder as a teenager, but earned a college degree while serving time and set out to start a new life. Still, he had to conceal his record to get a job that would ultimately take him to the heights of sports marketing. A case study by Francesca Gino, Hise Gibson, and Frances Frei shows the barriers that formerly incarcerated Black men are up against and the potential talent they could bring to business.


  • 04 Apr 2023

Two Centuries of Business Leaders Who Took a Stand on Social Issues

Executives going back to George Cadbury and J. N. Tata have been trying to improve life for their workers and communities, according to the book Deeply Responsible Business: A Global History of Values-Driven Leadership by Geoffrey Jones. He highlights three practices that deeply responsible companies share.


  • 14 Mar 2023

Can AI and Machine Learning Help Park Rangers Prevent Poaching?

Globally there are too few park rangers to prevent the illegal trade of wildlife across borders, or poaching. In response, Spatial Monitoring and Reporting Tool (SMART) was created by a coalition of conservation organizations to take historical data and create geospatial mapping tools that enable more efficient deployment of rangers. SMART had demonstrated significant improvements in patrol coverage, with some observed reductions in poaching. Then a new predictive analytic tool, the Protection Assistant for Wildlife Security (PAWS), was created to use artificial intelligence (AI) and machine learning (ML) to try to predict where poachers would be likely to strike. Jonathan Palmer, Executive Director of Conservation Technology for the Wildlife Conservation Society, already had a good data analytics tool to help park rangers manage their patrols. Would adding an AI- and ML-based tool improve outcomes or introduce new problems? Harvard Business School senior lecturer Brian Trelstad discusses the importance of focusing on the use case when determining the value of adding a complex technology solution in his case, “SMART: AI and Machine Learning for Wildlife Conservation.”


  • 14 Feb 2023

Does It Pay to Be a Whistleblower?

In 2013, soon after the US Securities and Exchange Commission (SEC) had started a massive whistleblowing program with the potential for large monetary rewards, two employees of a US bank’s asset management business debated whether to blow the whistle on their employer after completing an internal review that revealed undisclosed conflicts of interest. The bank’s asset management business disproportionately invested clients’ money in its own mutual funds over funds managed by other banks, letting it collect additional fees—and the bank had not disclosed this conflict of interest to clients. Both employees agreed that failing to disclose the conflict was a problem, but beyond that, they saw the situation very differently. One employee, Neel, perceived the internal review as a good-faith effort by senior management to identify and address the problem. The other, Akash, thought that the entire business model was problematic, even with a disclosure, and believed that the bank may have even broken the law. Should they escalate the issue internally or report their findings to the SEC? Harvard Business School associate professor Jonas Heese discusses the potential risks and rewards of whistleblowing in his case, “Conflicts of Interest at Uptown Bank.”


  • 17 Jan 2023

Good Companies Commit Crimes, But Great Leaders Can Prevent Them

It's time for leaders to go beyond "check the box" compliance programs. Through corporate cases involving Walmart, Wells Fargo, and others, Eugene Soltes explores the thorny legal issues executives today must navigate in his book Corporate Criminal Investigations and Prosecutions.


Ethical Business Practices: Case Studies and Lessons Learned

Introduction

Ethical business practices are a cornerstone of any successful company, influencing not only the public perception of a brand but also its long-term profitability. However, understanding what constitutes ethical behavior and how to implement it can be a complex process. This article explores some case studies that shine a light on ethical business practices, offering valuable lessons for businesses in any industry.

Case Study 1: Patagonia’s Commitment to Environmental Ethics

Patagonia, the outdoor clothing and gear company, has long set a standard for environmental responsibility. The company uses eco-friendly materials, promotes recycling of its products, and actively engages in various environmental causes.

Lessons Learned

  • Transparency: Patagonia is vocal about its ethical practices and even provides information on the environmental impact of individual products.
  • Consistency: Ethics are not an “add-on” for Patagonia; they are integrated into the very fabric of the company’s operations, from sourcing to production to marketing.
  • Engagement: The company doesn’t just focus on its practices; it encourages consumers to get involved in the causes it supports.

Case Study 2: Salesforce and Equal Pay

Salesforce, the cloud-based software company, took a stand on the gender pay gap. It conducted an internal audit and found a significant wage disparity between male and female employees in similar roles. To address this, Salesforce spent over $6 million to balance the scales.

Lessons Learned

  • Self-Audit: It’s crucial for companies to actively review their practices. What you don’t know can indeed hurt you, and ignorance is not an excuse.
  • Taking Responsibility: Rather than sweeping the issue under the rug, Salesforce openly acknowledged the problem and took immediate corrective action.
  • Long-Term Benefits: Fair treatment boosts employee morale and productivity, leading to long-term profitability.

Case Study 3: Starbucks and Racial Sensitivity Training

In 2018, Starbucks faced a public relations crisis when two Black men were wrongfully arrested at one of its Philadelphia stores. Rather than issue only a public apology, Starbucks closed 8,000 of its stores for an afternoon to conduct racial sensitivity training.

Lessons Learned

  • Immediate Action: Swift and meaningful action is critical in showing commitment to ethical behavior.
  • Education: Sometimes, the problem is a lack of awareness. Investing in employee education can avoid repeated instances of unethical behavior.
  • Public Accountability: Starbucks made their training materials available to the public, showing a level of transparency and accountability that helped regain public trust.

Why Ethics Matter

Ethical business practices are not just morally correct; they have a direct impact on a company’s bottom line. Customers today are more informed and more sensitive to ethical considerations. They often make purchasing decisions based on a company’s ethical standing, and word-of-mouth (or the digital equivalent) travels fast.

The case studies above show that ethical business practices should be a top priority for companies of all sizes and industries. These are not isolated examples but are representative of a broader trend in consumer expectations and regulatory frameworks. The lessons gleaned from these cases—transparency, consistency, engagement, self-audit, taking responsibility, and education—are universally applicable and offer a robust roadmap for any business seeking to bolster its ethical standing.

By implementing ethical business practices sincerely and not as a marketing gimmick, companies not only stand to improve their public image but also set themselves up for long-term success, characterized by a loyal customer base and a motivated, satisfied workforce.




The Psychology Behind Unethical Behavior

  • Merete Wedell-Wedellsborg


Understanding it can help keep your worst impulses in check.

Leaders are often faced with ethical conundrums. So how can they determine when they’re inching toward dangerous territory? There are three main psychological dynamics that lead to crossing moral lines. First, there’s omnipotence: when someone feels so aggrandized and entitled that they believe the rules of decent behavior don’t apply to them. Second, consider cultural numbness: when others play along and gradually begin to accept and embody deviant norms. Finally, when people don’t speak up because they are thinking of more immediate rewards, we see justified neglect. There are several strategies leaders can use to counter these dynamics, including relying on a group of trusted peers to keep you in check, keeping a list of things you will never do for profit, and looking out for ways you explain away borderline actions.

On a warm evening after a strategy off-site, a team of executives arrives at a well-known local restaurant. The group is looking forward to having dinner together, but the CEO is not happy about the table and demands a change. “This isn’t the one that my assistant usually reserves for me,” he says. A young waiter quickly finds the manager, who explains that there are no other tables available.


  • Merete Wedell-Wedellsborg is an adjunct professor of leadership at IMD Business School and the author of numerous HBR articles as well as Battle Mind: How to Navigate in Chaos and Perform Under Pressure (Sage, 2015).




What Are Business Ethics & Why Are They Important?


  • 27 Jul 2023

From artificial intelligence to facial recognition technology, organizations face an increasing number of ethical dilemmas. While innovation can aid business growth, it can also create opportunities for potential abuse.

“The long-term impacts of a new technology—both positive and negative—may not become apparent until years after it’s introduced,” says Harvard Business School Professor Nien-hê Hsieh in the online course Leadership, Ethics, and Corporate Accountability. “For example, the impact of social media on children and teenagers didn’t become evident until we watched it play out over time.”

If you’re a current or prospective leader concerned about navigating difficult situations, here's an overview of business ethics, why they're important, and how to ensure ethical behavior in your organization.


What Are Business Ethics?

Business ethics are principles that guide decision-making. As a leader, you’ll face many challenges in the workplace because of different interpretations of what’s ethical. Situations often require navigating the “gray area,” where it’s unclear what’s right and wrong.

When making decisions, your experiences, opinions, and perspectives can influence what you believe to be ethical, making it vital to:

  • Be transparent.
  • Invite feedback.
  • Consider impacts on employees, stakeholders, and society.
  • Reflect on past experiences to learn what you could have done better.

“The way to think about ethics, in my view, is: What are the externalities that your business creates, both positive and negative?” says Harvard Business School Professor Vikram Gandhi in Leadership, Ethics, and Corporate Accountability. “And, therefore, how do you actually increase the positive element of externalities? And how do you decrease the negative?”


Ethical Responsibilities to Society

Promoting ethical conduct can benefit both your company and society long term.

“I'm a strong believer that a long-term focus is what creates long-term value,” Gandhi says in Leadership, Ethics, and Corporate Accountability. “So you should get shareholders in your company that have that same perspective.”

Prioritizing the triple bottom line is an effective way for your business to fulfill its environmental responsibilities and create long-term value. It focuses on three factors:

  • Profit: The financial return your company generates for shareholders
  • People: How your company affects customers, employees, and stakeholders
  • Planet: Your company’s impact on the planet and environment


Ethical and corporate social responsibility (CSR) considerations can go a long way toward creating value, especially since an increasing number of customers, employees, and investors expect organizations to prioritize CSR. According to the Conscious Consumer Spending Index, 67 percent of customers prefer buying from socially responsible companies.

To prevent costly employee turnover and satisfy customers, strive to fulfill your ethical responsibilities to society.

Ethical Responsibilities to Customers

As a leader, you must ensure you don’t mislead your customers. Doing so can backfire, negatively impacting your organization’s credibility and profits.

Actions to avoid include:

  • Greenwashing: Taking advantage of customers’ CSR preferences by claiming your business practices are sustainable when they aren’t.
  • False advertising: Making unverified or untrue claims in advertisements or promotional material.
  • Making false promises: Lying to make a sale.

These unethical practices can result in multi-million dollar lawsuits, as well as highly dissatisfied customers.

Ethical Responsibilities to Employees

You also have ethical responsibilities to your employees—from the beginning to the end of their employment.

One area of business ethics that receives a lot of attention is employee termination. According to Leadership, Ethics, and Corporate Accountability, letting an employee go requires an individualized approach that ensures fairness.

Not only can wrongful termination cost your company upwards of $100,000 in legal expenses, but it can also negatively impact other employees’ morale and how they perceive your leadership.

Ethical business practices have additional benefits, such as attracting and retaining talented employees willing to take a pay cut to work for a socially responsible company. Approximately 40 percent of millennials say they would switch jobs to work for a company that emphasizes sustainability.

Ultimately, it's critical to do your best to treat employees fairly.

“Fairness is not only an ethical response to power asymmetries in the work environment,” Hsieh says in the course. “Fairness—and having a successful organizational culture—can benefit the organization economically and legally.”


Why Are Business Ethics Important?

Failure to understand and apply business ethics can result in moral disengagement.

“Moral disengagement refers to ways in which we convince ourselves that what we’re doing is not wrong,” Hsieh says in Leadership, Ethics, and Corporate Accountability. “It can upset the balance of judgment—causing us to prioritize our personal commitments over shared beliefs, rules, and principles—or it can skew our logic to make unethical behaviors appear less harmful or not wrong.”

Moral disengagement can also lead to questionable decisions, such as insider trading.

“In the U.S., insider trading is defined in common, federal, and state laws regulating the opportunity for insiders to benefit from material, non-public information, or MNPI,” Hsieh explains.

This type of unethical behavior can carry severe legal consequences and negatively impact your company's bottom line.

“If you create a certain amount of harm to a society, your customers, or employees over a period of time, that’s going to have a negative impact on your economic value,” Gandhi says in the course.

This is reflected in the fact that more than half of the 10 largest bankruptcies between 1980 and 2013 resulted from unethical behavior. As a business leader, strive to make ethical decisions and fulfill your responsibilities to stakeholders.

How to Implement Business Ethics

To become a more ethical leader, it's crucial to have a balanced, long-term focus.

“It's very important to balance the fact that, even if you're focused on the long term, you have to perform in the short term as well and have a very clear, articulated strategy around that,” Gandhi says in Leadership, Ethics, and Corporate Accountability .

Making ethical decisions requires reflective leadership.

“Reflecting on complex, gray-area decisions is a key part of what it means to be human, as well as an effective leader,” Hsieh says. “You have agency. You must choose how to act. And with that agency comes responsibility.”


Hsieh advises asking the following questions:

  • Are you using the “greater good” to justify unethical behavior?
  • Are you downplaying your actions to feel better?

“Asking these and similar questions at regular intervals can help you notice when you or others may be approaching the line between making a tough but ethical call and justifying problematic actions,” Hsieh says.


Become a More Ethical Leader

Learning from past successes and mistakes can enable you to improve your ethical decision-making.

“As a leader, when trying to determine what to do, it can be helpful to start by simply asking in any given situation, ‘What can we do?’ and ‘What would be wrong to do?’” Hsieh says.

Many times, the answers come from experience.

Gain insights from others’ ethical decisions, too. One way to do so is by taking an online course, such as Leadership, Ethics, and Corporate Accountability, which includes case studies that immerse you in real-world business situations, as well as a reflective leadership model to inform your decision-making.

Ready to become a better leader? Enroll in Leadership, Ethics, and Corporate Accountability—one of our online leadership and management courses—and download our free e-book on how to be a more effective leader.



Business and the Ethical Implications of Technology: Introduction to the Symposium

  • Published: 13 June 2019
  • Volume 160, pages 307–317 (2019)



  • Kirsten Martin
  • Katie Shilton
  • Jeffery Smith


While the ethics of technology is analyzed across disciplines from science and technology studies (STS), engineering, computer science, critical management studies, and law, less attention is paid to the role that firms and managers play in the design, development, and dissemination of technology across communities and within their firm. Although firms play an important role in the development of technology, and make associated value judgments around its use, it remains open how we should understand the contours of what firms owe society as the rate of technological development accelerates. We focus here on digital technologies: devices that rely on rapidly accelerating digital sensing, storage, and transmission capabilities to intervene in human processes. This symposium focuses on how firms should engage ethical choices in developing and deploying these technologies. In this introduction, we, first, identify themes the symposium articles share and discuss how the set of articles illuminates diverse facets of the intersection of technology and business ethics. Second, we use these themes to explore what business ethics offers to the study of technology and, third, what technology studies offers to the field of business ethics. Each field brings expertise that, together, improves our understanding of the ethical implications of technology. Finally, we introduce each of the five papers, suggest future research directions, and interpret their implications for business ethics.


Mobile phones track us as we shop at stores and can infer where and when we vote. Algorithms based on commercial data allow firms to sell us products they assume we can afford and avoid showing us products they assume we cannot. Drones watch our neighbors and deliver beverages to fishermen in the middle of a frozen lake. Autonomous vehicles will someday communicate with one another to minimize traffic congestion and thereby energy consumption. Technology has consequences, tests norms, changes what we do or are able to do, acts for us, and makes biased decisions (Friedman and Nissenbaum 1996). The use of technology can also have adverse effects on people. Technology can threaten individual autonomy, violate privacy rights (Laczniak and Murphy 2006), and directly harm individuals financially and physically. Technologies can also be morally contentious by “forcing deep reflection on personal values and societal norms” (Cole and Banerjee 2013, p. 555). Technologies have embedded values or politics, as they make some actions easier or more difficult (Winner 1980), or even work differently for different groups of people (Shcherbina et al. 2017). Technologies also have political consequences by structuring roles and responsibilities in society (Latour 1992) and within organizations (Orlikowski and Barley 2001), many times with contradictory consequences (Markus and Robey 1988).

While the ethics of technology is analyzed across disciplines from science and technology studies (STS), engineering, computer science, critical management studies, and law, less attention is paid to the role that firms and managers play in the design, development, and dissemination of technology across communities and within their firm. As emphasized in a recent Journal of Business Ethics article, Johnson (2015) notes the possibility of a responsibility gap: the abdication of responsibility around decisions that are made as technology takes on roles and tasks previously afforded to humans. Although firms play an important role in the development of technology, and make associated value judgments around its use, it remains open how we should understand the contours of what firms owe society as the rate of technological development accelerates. We focus here on digital technologies: devices that rely on rapidly accelerating digital sensing, storage, and transmission capabilities to intervene in human processes. Within the symposium, digital technologies are conceptualized to include applications of machine learning, information and communications technologies (ICT), and autonomous agents such as drones. This symposium focuses on how firms should engage ethical choices in developing and deploying these technologies. How ought organizations recognize, negotiate, and govern the values, biases, and power uses of technology? How should the inevitable social costs of technology be shouldered by companies, if at all? And what responsibilities should organizations take for designing, implementing, and investing in technology?

This introduction is organized as follows. First, we identify themes the symposium articles share and discuss how the set of articles illuminates diverse facets of the intersection of technology and business ethics. Second, we use these themes to explore what business ethics offers to the study of technology and, third, what technology studies offers to the field of business ethics. Each field brings expertise that, together, improves our understanding of the ethical implications of technology. Finally, we introduce each of the five papers, suggest future research directions, and interpret their implications for business ethics.

Technology and the Scope of Business Ethics

For some it may seem self-evident that the use and application of digital technology is value-laden: how technology is commercialized conveys a range of commitments to values, from freedom and individual autonomy to transparency and fairness. Each of the contributions to this special issue discusses elements of this starting point. They also—implicitly and explicitly—encourage readers to explore the extent to which technology firms are the proper locus of scrutiny when we think about how technology can be developed in a more ethically grounded fashion.

Technology as Value-Laden

The articles in this special issue largely draw from a long tradition in computer ethics and critical technology studies that sees technology as ethically laden: technology is built from various assumptions that—either implicitly or explicitly—express certain value commitments (Johnson 2015; Moor 1985; Winner 1980). This literature argues that, through affordances—properties of technologies that make some actions easier than others—technological artifacts make abstract values material. Ethical assumptions in technology might take the form of particular biases or values accidentally or purposefully built into a product’s design assumptions, as well as unforeseen outcomes that occur during use (Shilton et al. 2013). These issues have taken on much greater concern recently as forms of machine learning and various autonomous digital systems drive an increasing share of decisions made in business and government. The articles in the symposium therefore consider ethical issues in technology design including sources of data, methods of computation, and assumptions in automated decision making, in addition to technology use and outcomes.

A strong example of value-laden technology is the machine learning (ML) algorithms that power autonomous systems. ML technology underlies much of the automation driving business decisions in marketing, operations, and financial management. The algorithms that make up ML systems “learn” by processing large corpora of data. The data upon which algorithms learn, and ultimately render decisions, is a source of ethical challenges. For example, biased data can lead to decisions that discriminate against individuals on the basis of morally arbitrary characteristics, such as race or gender (Danks and London 2017; Barocas and Selbst 2016). One response to this problem is for companies to think more deliberately about how the data driving automation are selected and assessed to understand discriminatory effects. However, the view that an algorithm or computer program can ever be ‘clean’ feeds into the (mistaken) idea that technology can be neutral. An alternative approach is to frame AI decisions—like all decisions—as biased and capable of making mistakes (Martin 2019). The biases can come from the design, the training data, or the application to human contexts.
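A minimal sketch (not from the symposium, using entirely hypothetical data) can make the training-data point concrete: a model "trained" on biased historical approval decisions reproduces the bias through a correlated proxy feature, even though the protected attribute is never an input.

```python
# Toy illustration: a model trained on biased historical decisions
# reproduces that bias via a proxy feature (here, a zip code that
# correlates with group membership), with no protected attribute used.

from collections import defaultdict

# Hypothetical historical loan decisions: (zip_code, approved).
# Past human decisions approved applicants from zip "200" far less often.
history = [("100", True)] * 80 + [("100", False)] * 20 \
        + [("200", True)] * 30 + [("200", False)] * 70

def train(records):
    """'Learn' the historical approval rate for each zip code."""
    counts = defaultdict(lambda: [0, 0])  # zip -> [approvals, total]
    for zip_code, approved in records:
        counts[zip_code][0] += approved
        counts[zip_code][1] += 1
    return {z: a / t for z, (a, t) in counts.items()}

def predict(model, zip_code, threshold=0.5):
    """Approve when the learned approval rate exceeds the threshold."""
    return model.get(zip_code, 0.0) >= threshold

model = train(history)
print(predict(model, "100"))  # True: the historically favored zip
print(predict(model, "200"))  # False: the historical bias is reproduced
```

Nothing in the code mentions race or gender, yet the output pattern mirrors whatever bias the historical decisions carried, which is why scrutinizing training data, and not just the algorithm, matters.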

Corporate Responsibility for the Ethical Challenges of Technology

It is becoming increasingly accepted that the firms that design and implement technology have moral obligations to proactively address problematic assumptions behind, and outcomes of, new digital technologies. There are two general reasons why this responsibility rests with the firms that develop and commercialize digital technologies. First, in a nascent regulatory environment, the social costs and ethical problems associated with new technologies are not addressed through other institutions. We do not yet have agencies of oversight, independent methods of assessment, or third parties that can examine how new digital technologies are designed and applied. This may change, but in the interim, the non-ideal case of responsible technological development is internal restraint, not external oversight. An obvious example of this is the numerous efforts put forth by large firms, such as Microsoft and Google, focused on developing principles or standards for the responsible use of artificial intelligence (AI). There are voices of skepticism that such industry efforts will genuinely focus on the public’s interest; however, it is safe to say that the rate of technological development carries an expectation that firms responsible for innovation are also responsible for showing restraint and judgment in how technology is developed and applied (cf. Smith and Shum 2018).

A second reason that new technologies demand greater corporate responsibility is that technologies require attention to ethics during design, and design choices are largely governed by corporations. Design is the projection of how a technology will work in use and includes assumptions as to which users and uses matter and which do not, and how the technology will be used. As STS scholar Akrich notes, “…A large part of the work of innovators is that of ‘inscribing’ this vision of (or prediction about) the world in the technical content of the new object” (Akrich 1992, p. 208). Engineers and operations directors need to be concerned about how certain values—like transparency, fairness, and economic opportunity—are translated into design decisions.

Because values are implicated during technology design, developers make value judgments as part of their corporate roles. Engineers and developers of technology inscribe visions or preferences of how the world works (Akrich 1992; Winner 1980). This inscription manifests in choices about how transparent, easy to understand and fix, or inscrutable a technology is (Martin 2019), as well as who can use it easily or how it might be misused (Friedman and Nissenbaum 1996). Ignoring the value-laden decisions in design does not make them disappear. Philosopher Richard Rudner addresses this in the realm of science: for Rudner, scientists as scientists make value judgments, and ignoring value-laden decisions means those decisions are made badly because they are made without much thought or consideration (Rudner 1953). In other words, if firms ignore the value implications of design, engineers still make moral decisions; they simply do so without an ethical analysis.

Returning to the example of bias-laden ML algorithms illustrates ways that organizations can work to acknowledge and address those biases through their business practices. For example, acknowledging bias aligns with calls for algorithms to be “explainable” or “interpretable”: capable of being deployed in ways that allow users and affected parties to more fully understand how an algorithm rendered its decisions, including potential biases (cf. Kim and Routledge 2018; Kim 2018; Selbst and Barocas 2018). Explainable and interpretable algorithms require design decisions that carry implications for corporate responsibility. If a design team creates an impenetrable AI decision, where users are unable to judge or address potential bias or mistakes, then the firm in which that team works can be seen to have responsibility for those decisions (Martin forthcoming).

It follows from these two observations—technology firms operate with nascent external oversight and designers are making value-laden decisions as part of their work in firms—that the most direct means of addressing ethical challenges in new technology is through management decisions within technology firms. The articles in this special issue point out many ways this management might take place. For example, in their paper “A Micro-Ethnographic Study of Big Data Innovation in the Financial Services Sector,” authors Richard Owen and Keren Naa Abeka Arthur give a descriptive account focusing on how an organization makes ethics a selling point of a new financial services platform. Ulrich Leicht-Deobald and his colleagues take a normative tack, writing in “The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity” that firms designing technologies to replace human decision making with algorithms should consider their impact on the personal integrity of humans. Tae Wan Kim and Allan Scheller-Wolf present a case for increased corporate responsibility for what they call technological unemployment: the job losses that will accompany an accelerated pace of automation in the workplace. Their discussion, “Technological Unemployment, Meaning in Life, Purpose of Business and the Future of Stakeholders,” asks not only what corporations owe to employees who directly lose their jobs to technology, but also what they owe to a future society when they pursue workerless production strategies.

The Interface of Business and Technology Ethics

One of the central insights discussed in the pages of this special issue is that technology-driven firms assume a role in society that demands a consideration of ethical imperatives beyond their financial bottom line. How does a given technology fit within a broader understanding of the firm’s purpose as value creation for itself and its stakeholders? The contributions to this special issue, directly or indirectly, affirm that neither the efficiencies produced by the use of digital technology nor enhanced financial returns to equity investors alone justify the development, use, or commercialization of a technology. These arguments will not surprise business ethicists, who routinely debate the purpose and responsibilities of for-profit firms. Still, the fact that for-profit firms use new technology and profit from the development of technology raises the question of how the profit motive impacts the ethics of new digital technology.

One way of addressing this question is to take a cue from other, non-digital technologies. For example, the research, development, and commercialization necessary for pharmaceutical products carry ethical considerations for associated entities, whether individual scientists, government agencies, non-governmental organizations, or for-profit companies. Ethical questions include: how are human test subjects treated? How is research data collected and analyzed? How are research efforts funded, and are there any conflicts of interest that could corrupt the scientific validity of that research? Do medical professionals fully understand the costs and benefits of a particular pharmaceutical product? How should new drugs be priced? The special set of ethical issues related to pharmaceutical technology financed through private capital markets includes those raised above, plus a consideration of how the profit motive, first, creates competing ethical considerations unrelated to pharmaceutical innovation itself and, second, produces social relationships within firms that may compromise the standing responsibilities that individuals and organizations have to develop pharmaceutical products that support the ideal of patient health.

A parallel story can be told for digital technology. There are some ethical issues that are closely connected to digital technology, such as trust, knowledge, privacy, and individual autonomy. These issues, however, take on heightened concern when the technologies in question are financed through the profit motive. We have to be attentive to the extent to which a firm’s inclination to show concern for customer privacy, for instance, can be marginalized when its business model relies on using predictive analytics for advertising purposes (Roose 2019). A human resource algorithm that possibly diminishes employee autonomy may be less scrutinized if its use cuts operational expenses in a large, competitive industry. The field of business ethics contributes to the discussion about the responsible use of new technology by illustrating how the interface of the market, the profit motive, and the values of technology can be brought into a more stable alignment. Taken together, the contributions in this special issue provide a blueprint for this task. They place technology firmly within the scope of business ethics: managers and firms can (and should) create and implement technology in a way that remains attentive to value creation for the firm and its stakeholders, including employees, users, customers, and communities.

At the same time, those studying the social aspects of technology need to remain mindful of the special nature—and benefits—of business. Business is a valuable social mechanism to finance large-scale innovation and economic progress. It is hard to imagine that some of the purported benefits of autonomous vehicles, for example, would be on our doorstep if it were not for the presence of nimble, fast-paced private markets in capital and decentralized transportation services. Business is important in the development of technology even if we are concerned about how well it upholds the values of responsible use and application of technology. The challenge taken up by the discussions herein is to explore how we want to configure the future and the role that business can play in that future. Are firms exercising sufficient concern for privacy in the use of technology? What are the human costs associated with relegating more and more decisions to machines, rather than ourselves? Is there an opportunity for further regulatory oversight? If so, in what technological domain? Business ethicists interested in technology need to pay attention to the issues raised by this symposium’s authors and those that study technology need to appreciate the special role that business can play in financing the realization of technology’s potential.

In addition, the articles in this symposium illustrate how the intersection of business ethics and technology ethics illuminates how our conceptions of work—and working—shape the ethics of new technology. The symposium contributions encourage us to think critically about how the employment relationship is altered by the use and application of technology. Again, Ulrich Leicht-Deobald and his co-authors prompt an examination of how the traditional HR function is altered by the assistance of machine-learning platforms. Kim and Scheller-Wolf force an examination of what firms using job-automation technologies owe to both displaced and prospective employees, which expands our conventional notions of employee responsibility beyond those who happen to be employed by a particular firm, in a particular industry. Although not exclusively focused on corporate responsibility within the domain of employment, Aurélie Leclercq-Vandelannoitte’s contribution “Is Technological ‘Ill-Being’ Missing from Corporate Responsibility?” encourages readers to think about the implications of “ubiquitous” uses of information technology for future individual well-being and social meaning. There are clear lines between her examination of how uses of technology can adversely impact freedom, privacy, and respect and how ethicists and policy makers might rethink firms’ social responsibilities to employees. And, even more pressing, these discussions provide a critical lens for thinking through more fundamental problems such as the rise of work outside the confines of the traditional employment relationship in the so-called “gig economy” (Kondo and Singer 2019).

How Business Ethics Informs Technology Ethics

Business ethics can place current technology challenges into perspective by considering the history of business and markets behaving outside the norms, and the corrections made over time. For example, the online content industry’s claim that changes to the digital marketing ecosystem will kill the industry echoes claims made by steel companies fighting environmental regulation in the 1970s (IAB 2017; Lomas 2019). Complaints that privacy regulation would curtail innovation echo the automobile industry’s complaints about safety regulation in the 1970s. Here we highlight two areas where business ethics’ understanding of the historical balance between industry desires and pro-social regulation can offer insights on the ethical analysis of technology.

Human Autonomy and Manipulation

There are a host of market actors impacted by the rise of digital technology. Consumers are an obvious case. What we buy and how our identities are created through marketing is, arguably, ground zero for many of the ethical issues discussed by the articles in this symposium. Recent work has begun to examine how technology can undermine the autonomy of consumers or users. For example, many games and online platforms are designed to encourage a dopamine response that makes users want to come back for more (“Technology Designed for Addiction” n.d.). Similar to the high produced by gambling (the machines for which have long been designed for maximum addiction; Schüll 2014), games and social media products encourage users to seek the interaction’s positive feedback to the point where their lives can be disrupted. Through addictive design patterns, technology firms create a vulnerable consumer (Brenkert 1998). Addictive design manipulates consumers and takes advantage of human proclivities to threaten their autonomy.

A second example of manipulation and threatened autonomy is the use of aggregated consumer data to target consumers. Data aggregators can frequently gather enough information about consumers to infer their concerns and desires, and use that information to narrowly and accurately target ads. By pooling diverse information on consumer behavior, such as location data harvested from a phone and Internet browsing behavior tracked by data brokers, consumers can be targeted in ways that undermine individuals’ ability to make a different decision (Susser et al. 2019). If marketers infer you are worried about depression based on what you look up or where you go, they can target you with herbal remedies. If marketers guess you are dieting or recently stopped gambling, they can target you with food or casino ads. Business ethics has a long history of examining the ways that marketing strategies target vulnerable populations in a manner that undermines autonomy. A newer, interesting twist on this problem is that these tactics have been extended beyond marketing products into politics and the public sphere. Increasingly, social media and digital marketing platforms are being used to inform and sway debate in the public sphere. The Cambridge Analytica scandal is a well-known example of the use of marketing tactics, including consumer profiling and targeting based on social media data, to influence voters. Such tactics have serious implications for autonomy, because individuals’ political choices can now be influenced as powerfully as their purchasing decisions.

More generally, the articles in this symposium help us understand how the creation and implementation of new technology fits alongside the other pressures experienced within businesses. The articles give us lenses on the relationship between an organization’s culture—its values, processes, commitments, and governance structures—and the challenge of developing and deploying technology in a responsible fashion. There has been some work on how individual developers might or might not make ethical decisions, but very little work on how pressures from organizations and management matter to those decisions. Recent work by Spiekermann et al., for example, set out to study developers, but discovered that corporate cultures around privacy had large impacts on privacy and security design decisions (Spiekermann et al. 2018). Studying corporate cultures of ethics, and the complex motivations that managers, in-house lawyers and strategy teams, and developers bring to ethical decision making, is an important area in business ethics, and one upon which the perspectives collected here shed light.

Much of the current discussion around AI, big data, algorithms, and online platforms centers on trust. How can individuals (or governments) trust AI decisions? How do online platforms reinforce or undermine the trust of their users? How is privacy related to trust in firms and trust online? Trust, defined as someone’s willingness to become vulnerable to someone else, is studied at three levels in business ethics: an individual’s general trust disposition, an individual’s trust in a specific firm, and an individual’s institutional trust in a market or community (Pirson et al. 2016). Each level is critical to understanding the ethical implications of technology. Trust disposition has been found to impact whether consumers are concerned about privacy: consumers who are generally trusting may have high privacy expectations but lower concerns about bad acts by firms (Turow et al. 2015).

Users’ trust in firms can be influenced by how technology is designed and deployed. In particular, design may inspire consumers to overly trust particular technologies. This problem arguably creates a fourth level of trust unique to businesses developing new digital technologies. More and more diagnostic health care decisions, for example, rely upon automated data analysis and algorithmic decision making. Trust is a particularly pressing topic for such applications. Similar concerns exist for autonomous systems in domains such as financial services and transportation. Trust in AI is not simply about whether a system or decision-making process will “do” what it purportedly states it will do; rather, trust is about having confidence that when the system does something that we do not fully understand, it will nevertheless be done in a manner that supports our interests. David Danks (2016) has argued that such a conception of trust moves beyond mere predictability—which artificial intelligence, by definition, makes difficult—and toward a deeper sense of confidence in the system itself (cf. LaRosa and Danks 2018). Finally, more work is needed to identify how technology—e.g., AI decisions, sharing and aggregating data, online platforms, hyper-targeted ads—impacts consumers’ institutional trust online. Do consumers see questionable market behavior and begin to distrust an overall market? For example, hearing about privacy violations—the use of a data aggregator—impacts individuals’ institutional trust online and makes consumers less likely to engage with market actors online (Martin 2019). The study of technology would benefit from the ongoing conversation about trust in business ethics.

Stakeholder Relations

Technology firms face difficult ethical choices in their supply chains and in how products should be developed and sold to customers. For example, technology firms such as Google and Microsoft are openly struggling with whether to create technology for immigration and law enforcement agencies and for U.S. and international militaries. Search engines and social networks must decide what type of relationship to have with foreign governments. Device companies must decide where gadgets will be manufactured, under what working conditions, and where components will be mined and recycled.

Business ethics offers a robust discussion about whether and how to prioritize the interests of various stakeholders. For example, oil companies debate whether and how to include the claims of environmental groups. Auto companies face claims from unions, suppliers, and shareholders and must navigate all three simultaneously. Clothing manufacturers decide with whom to partner for outsourcing. So when cybersecurity firms consider whether to take on foreign governments as clients, their analysis need not be completely new. An ethically attuned approach to cybersecurity will inevitably face the difficult choice of how, if at all, technology should be limited in development, scope, and sale. Similarly, firms developing facial recognition technologies have difficult questions to ask about the viability of those products if they take seriously the perspective of stakeholders who may find those products an affront to privacy. More research in the ethics of new digital technology should utilize existing work on the ethics of managing stakeholder interests to shed light on the manner in which technology firms should appropriately balance the interests of suppliers, financiers, employees, and customers.

How Technology Ethics Informs Business

Just as business ethics can inform the study of recent challenges in technology ethics, scholars who have studied technology, particularly scholars of sociotechnical systems, can add to the conversation in business ethics. Scholarship in values in design—how social and political values become design decisions—can inform discussions about ethics within firms that develop new technologies. And research in the ethical implications of technology—the social impacts of deployed technologies—can inform discussions of downstream consequences for consumers.

Values in Design

Values in design (ViD) is an umbrella term for research in technology studies, computer ethics, human–computer interaction, information studies, and media studies that focuses on how human and social values ranging from privacy to accessibility to fairness get built into, or excluded from, emerging technologies. Some values in design scholarship analyzes technologies themselves to understand the values that they do, or do not, support well (Brey 2000; Friedman and Nissenbaum 1996; Winner 1980). Other ViD scholars study the people developing technologies to understand their human and organizational motivations and the ways those relate to design decisions (Spiekermann et al. 2018; JafariNaimi et al. 2015; Manders-Huits and Zimmer 2009; Shilton 2018; Shilton and Greene 2019). A third stream of ViD scholarship builds new technologies that purposefully center particular human values or ethics (Friedman et al. 2017).

Particularly relevant to business ethics is the way this literature examines how both individually and organizationally held values become translated into design features. The values in design literature points out that the material outputs of technology design processes belong alongside policy and practice decisions as an ethical impact of organizations. In this respect, the values one sees in an organization’s culture and practices are reflected in its approach to the design of technology, either in how that technology is used or how it is created. Similarly, an organization’s approach to technology is a barometer of its implicit and explicit ethical commitments. Apple and Facebook make use of similar data-driven technologies in providing services to their customers; but how those technologies are put to use—within what particular domain and for what purpose—exposes fundamental differences in the ethical commitments to which each company subscribes. As Apple CEO Tim Cook has argued publicly, unlike Facebook, Apple’s business model does not “traffic in your personal life” and will not “monetize [its] customers” (Wong 2018 ). How Facebook and Apple managers understand the boundaries of individual privacy and acceptable infringements on privacy is conveyed in the manner in which their similar technologies are designed and commercialized.

Ethical Implications of Technology and Social Informatics

Technology studies has also developed a robust understanding of technological agency—how technology acts in the world—while also acknowledging the agency of technology users. Scholars who study the ethical implications of technology and social informatics focus on the ways that deployed technology reshapes power relationships, creates moral consequences, reinforces or undercuts ethical principles, and enables or diminishes stakeholder rights and dignity (Martin forthcoming; Kling 1996). Importantly, technology studies talks about the intersecting roles of material and non-material actors (Latour 1992; Law and Callon 1988). Technology, when working in concert with humans, impacts who does what. For example, algorithms influence the delegation of roles and responsibilities within a decision. Depending on how an algorithm is deployed in the world, humans working with its results may have access to the training data (or not), understand how the algorithm reached a conclusion (or not), and have an ability to see the decision relative to similar decisions (or not). Choices about the delegation of tasks between algorithms and individuals may have moral import, as humans with more insight into the components of an algorithmic decision may be better equipped to spot systemic unfairness. Technology studies offers a robust vocabulary for describing where ethics intersect with technology, ranging from design to deployment decisions. While business ethics includes an ongoing discussion about human autonomy, as noted above, technology studies adds a conversation about technological agency.

Navigating the Special Issue

The five papers that comprise this thematic symposium range in their concerns from AI and the future of work to big data to surveillance to online cooperative platforms. They explore ethics in the deployment of future technologies, ethics in the relationship between firms and their workers, ethics in the relationship between firms and other firms, and ethical governance of technology use within a firm. All five articles place the responsibility for navigating these difficult ethical issues directly on firms themselves.

Technology and the Future of Employment

Tae Wan Kim and Allan Scheller-Wolf raise a number of important issues related to technologically enabled job automation in their paper “Technological Unemployment, Meaning in Life, Purpose of Business, and the Future of Stakeholders.” They begin by emphasizing what they call an “axiological challenge” posed by job automation. The challenge, simply put, is that trends in job automation (including in manufacturing, the service sector and knowledge-based professions) will likely produce a “crisis in meaning” for individuals. Work—apart from the economic means that it provides—is a deep source of meaning in our lives and a future where work opportunities are increasingly unavailable means that individual citizens will be deprived of the activities that heretofore have defined their social interactions and given their life purpose. If such a future state is likely, as Kim and Scheller-Wolf speculate, what do we expect of corporations who are using the automation strategies that cause “technological unemployment”?

Their answer to this question is complicated, yet instructive. They argue that neither standard shareholder nor stakeholder conceptions of corporate responsibility provide the necessary resources to fully address the crisis in meaning tied to automation. Both approaches fall short because they conceive of corporate responsibility in terms of what is owed to the constituencies that make up the modern firm. But these approaches have little to say about whether there is any entitlement to employment opportunities or whether society is made better off with employment arrangements that provide meaning to individual employees. As such, Kim and Scheller-Wolf posit that there is a second, “teleological challenge” posed by job automation. The moral problem of a future without adequate life-defining employment is something that cannot straightforwardly be answered by existing conceptions of the purpose of the corporation.

Kim and Scheller-Wolf encourage us to think about the future of corporate responsibility with respect to “technological unemployment” by going back to the “Greek agora,” which they take to be in line with some of the premises of stakeholder theory. Displaced workers are neither “employees” nor “community” members in the standard senses of the terms. So, as in ancient Greece, the authors imagine a circumstance where meaningful social interactions are facilitated by corporations who offer “university-like” communities where would-be employees and citizens can participate and collectively deliberate about aspects of the common good, including, but not limited to, how corporations conduct business and how to craft better public policy. This would add a new level of “agency” into their lives and allow them to play an integral role in how business takes place. The restoration of this agency allows individuals to maintain another important sense of meaning in their lives, apart from the work that may have helped define their sense of purpose in prior times. This suggestion is prescriptive and, at times, seems idealistic. But, as with other proposals, such as the recent discussion of taxing job automation, it is part of an important set of conversations that need to be had to creatively imagine the future in light of technological advancement (Porter 2019).

The value in this discussion, which frames a distinctive implication for future research, is that it identifies how standard accounts of corporate responsibility are inadequate to justify responsibilities to future workers displaced by automation. It shifts the way scholars should understand meaningful work, from meaning at work to meaning in place of work, and sketches an alternative to help build a more comprehensive social response to the changing nature of employment that technology will steadily bring.

Technology and Human Well-Being

Aurelie Leclercq-Vandelannoitte’s “Is Employee Technological ‘Ill-Being’ Missing From Corporate Responsibility? The Foucauldian Ethics of Ubiquitous IT Uses in Organizations” explores the employment relationship more conceptually by introducing the concept of “technological ill-being” that accompanies the adoption of ubiquitous information technology in the workplace. Leclercq-Vandelannoitte defines technological ill-being as the tension or disconnect between an individual’s social attributes and aspirations when using modern information technology (IT) and the system of norms, rules, and values within the organization. Leclercq-Vandelannoitte asks a series of research questions as to how technological ill-being is framed in organizations, the extent to which managers are aware of the idea, and who is responsible for employees’ technological ill-being.

Leclercq-Vandelannoitte leverages Foucauldian theory and a case study to answer these questions. Foucault offers a rich narrative about the need to protect an individual’s ability to “free thought from what it silently thinks and so enable it to think differently” (Foucault 1983, p. 216). The Foucauldian perspective offers an ethical frame by which to analyze ubiquitous IT, where ethics “is a practice of the self in relation to others, through which the self endeavors to act as a moral subject.” Perhaps most importantly, the study, through the lens of Foucault, highlights the importance of self-reflection and engagement as necessary to using IT ethically. An international automotive company provides a theoretically important case: the deployment of ubiquitous IT contemporaneous with strong engagement with corporate social responsibility. The organization also offers a useful basis for comparison in that its geographically dispersed units adopted distinct organizational patterns and working arrangements.

The results illustrate that technological ill-being is not analyzed in broader CSR initiatives but rather as “localized, individual, or internal consequences for some employees.” Further, the blind spot toward employees’ ill-being constitutes an abdication of responsibility, which benefits the firm. The paper has important implications for the corporate responsibility of organizations with regard to the effects of ubiquitous IT on employee well-being—an underexamined area. The author brings to the foreground the value-ladenness of technology that is deployed within an organization and centers the conversation on employees in particular. Perhaps most importantly, ethical self-engagement becomes a goal for ethical IT implementation and a critical concept for understanding technological ill-being. Leclercq-Vandelannoitte frames claims of “unawareness” of the value-laden implications of ubiquitous IT as “the purposeful abdication of responsibility,” thereby placing the responsibility for technological ill-being squarely on the firm that deploys the IT. Future work could take the same critical lens to firms that sell (rather than internally deploy) ubiquitous IT and to their responsibility to their consumers.

Technology and Governance

Richard Owen and Keren Naa Abeka Arthur’s “A Micro-Ethnographic Study of Big Data-Based Innovation in the Financial Services Sector: Governance, Ethics and Organisational Practices” uses a case study of a financial services firm to illustrate how organizations might responsibly govern their uses of big data. This topic is timely, as firms in numerous industries struggle to self-regulate their use of sensitive data about their users. The focus on how a firm achieves ethics-oriented innovation is unusual in the literature and provides important evidence of the factors that influence a firm’s ability to innovate ethically.

The authors describe a company that governs its uses of big data on multiple levels, including through responses to legislation, industry standards, and internal controls. The authors illustrate the ways in which the company strives for ethical data policies that support mutual benefit for its stakeholders. Though the company actively uses customer data to develop new products, the company’s innovation processes explicitly incorporate both customer consent mechanisms and client and customer feedback. The company also utilizes derived, non-identifiable data for developing new insights and products, rather than using customers’ identifiable data for innovation. The authors describe how national regulation, while not directly applicable to the big data innovations studied, guided the company’s data governance by creating a culture of compliance with national data privacy protections. This has important consequences for both regulators and consumers. This finding implies that what the authors refer to as “contextual” legislation—law that governs other marginally related data operations within the firm—can positively influence new innovations as well. The authors write that contextual data protection legislation was internalized by the company and “progressively embedded” into future innovation.

The authors also found that company employees directly linked ethical values with the success of the company, highlighting consumer trust as critical to both individual job security and organizational success. This finding speaks to the importance of corporate culture in setting the values incorporated into technology design. Owen & Arthur use the company’s practices as a case study to begin to define ethical and responsible financial big data innovation. Their evidence supports frameworks for responsible innovation that emphasize stakeholder engagement, anticipatory ethics, reflexivity on design teams, and deliberative processes embedded in development practice.

Technology and Personal Integrity

Ulrich Leicht-Deobald and his colleagues unpack the responsibilities organizations have to their workers when adopting and implementing new data collection and behavior analysis tools in “The Challenges of Algorithm-based HR Decision-making for Personal Integrity.” The paper unites theory from business ethics and the growing field of critical algorithm and big data studies to examine the topical issue of algorithmic management of workers by human resource departments. The authors focus on tools for human resources decision making that monitor employees and use algorithms and machine learning to make assessments, such as algorithmic hiring and fraud monitoring tools. They argue that, in addition to well-documented problems with bias and fairness, such algorithmic tools have the potential to undermine employees’ personal integrity, which they define as consistency between convictions, words, and actions. The authors argue that algorithmic hiring technologies threaten a fundamental human value by shifting employees to a compliance mindset. Their paper demonstrates how algorithmic HR tools undermine employees’ personal integrity by encouraging blind trust in rules and discouraging moral imagination; the consequences of such undermining include increased information asymmetries between management and employees. The authors classify HR decision making as an issue of corporate responsibility and suggest that companies that wish to use predictive HR technologies must take mitigation measures. They suggest participatory design of algorithms, in which employees would be stakeholders in the design process, as one possible mitigative tactic. They also advocate for critical data literacy for managers and workers, and adherence to private regulatory regimes such as the Association for Computing Machinery’s (ACM) Code of Ethics and Professional Conduct and the Toronto Declaration on machine learning.

This paper makes an important contribution to the scoping of corporate responsibility for the algorithmic age. By arguing that companies using hiring algorithms have a moral duty to protect their workers’ personal integrity, it places the ethical dimensions of the design and deployment of algorithms alongside more traditional corporate duties such as responsibility for worker safety and wellness. And like Owen and Arthur, the authors believe that attention to ethics in design—here framed as expanding employees’ capacity for moral imagination—will open up spaces for reflection and ethical discourse within companies.

Technology and Trust

Livia Levine’s “Digital Trust and Cooperation with an Integrative Digital Social Contract” focuses on digital business communities and the role of their members in creating communities of trust. Levine notes that digital business communities, such as online markets or business social networking communities, have all the markers of a moral community as conceived by Donaldson and Dunfee in their Integrative Social Contract Theory (ISCT) (Donaldson and Dunfee 1999): individuals in the community form relationships which generate authentic ethical norms. Digital business communities differ from traditional communities, however, in that participants cannot always identify each other and do not always have the legal or social means to punish participant businesses who renege on the community’s norms.

After identifying the hypernorm of “the efficient pursuit of aggregate economic welfare,” which transcends communities and provides guidance for the development of micronorms in a community, Levine focuses on trust and cooperation micronorms. Levine shows that trust and cooperation are “an instantiation of the hypernorm of necessary social efficiency and that authentic microsocial norms developed for the ends of trust and cooperation are morally binding for members of the community.” Levine uses a few examples, such as Wikipedia, open-source software, online reviews, and Reddit, to illustrate micronorms at play. In addition, Levine illustrates how the ideas of community and moral free space should be applied in new arenas, including online.

The paper has important implications for both members of the social contract community and the platforms that host the community as they develop norms focused on trust and cooperation. First, the idea of community has traditionally been applied to people who know each other. However, Levine makes a compelling case as to why community can and should be applied to online groups of strangers—strangers in real life, but known online. Future research could explore the responsibilities of platforms that facilitate or hinder the development of authentic norms for communities on their service. For example, if a gaming platform is seen as a community of gamers, then what are the obligations of the gaming platform to enforce hypernorms and support the development of authentic micronorms within communities? Levine’s approach opens up many avenues to apply the ideas behind ISCT in new areas.

While each discussion in this symposium offers a specific, stand-alone contribution to the ongoing debate about the ethics of the digital economy, the larger themes addressed by the five articles—the future of employment, human well-being, personal integrity, governance, and trust—will likely continue to occupy scholars’ attention for the foreseeable future. More importantly, the diversity of theoretical perspectives and methods represented within this issue illustrates how the ethical challenges presented by new information technologies are likely best understood through continued cross-disciplinary conversations with engineers, legal theorists, philosophers, organizational behaviorists, and information scientists.

Akrich, M. (1992). The de-scription of technological objects. In W. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 205–224). Cambridge, MA: MIT Press.


Barocas, S. I., & Selbst, A. W. (2016). Big data’s disparate impact. California Law Review, 104, 671–733.

Brenkert, G. G. (1998). Marketing and the vulnerable. The Ruffin Series of the Society for Business Ethics , 1 , 7–20.

Brey, P. (2000). Method in computer ethics: Towards a multi-level interdisciplinary approach. Ethics and Information Technology , 2 (2), 125–129.


Cole, B. M., & Banerjee, P. M. (2013). Morally contentious technology-field intersections: The case of biotechnology in the United States. Journal of Business Ethics, 115 (3), 555–574.

Danks, D. (2016). Finding trust and understanding in autonomous systems. The Conversation . Retrieved from https://theconversation.com/finding-trust-and-understanding-in-autonomous-technologies-70245

Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. Proceedings of the 26th International Joint Conference on Artificial Intelligence . Retrieved from https://www.cmu.edu/dietrich/philosophy/docs/london/IJCAI17-AlgorithmicBias-Distrib.pdf

Donaldson, T., & Dunfee, T. W. (1999). Ties that bind: A social contracts approach to business ethics . Harvard Business Press.

Foucault, M. (1983). The subject and power. In H. Dreyfus & P. Rabinow (Eds.), Michel Foucault: Beyond structuralism and hermeneutics (2nd ed., pp. 208–228). Chicago: University of Chicago Press.

Friedman, B., Hendry, D. G., & Borning, A. (2017). A survey of value sensitive design methods. Foundations and Trends® in Human–Computer Interaction, 11 (2), 63–125.

Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14 (3), 330–347.

IAB. (2017). The economic value of the advertising-supported Internet Ecosystem. https://www.iab.com/insights/economic-value-advertising-supported-internet-ecosystem/

JafariNaimi, N., Nathan, L., & Hargraves, I. (2015). Values as hypotheses: design, inquiry, and the service of values. Design issues, 31 (4), 91–104.

Johnson, D. G. (2015). Technology with no human responsibility? Journal of Business Ethics, 127 (4), 707.

Kim, T. W. (2018). Explainable artificial intelligence, the goodness criteria and the grasp-ability test. Retrieved from https://arxiv.org/abs/1810.09598

Kim, T. W., & Routledge, B. R. (2018). Informational privacy, a right to explanation and interpretable AI. 2018 IEEE Symposium on Privacy - Aware Computing . https://doi.org/10.1109/pac.2018.00013

Kling, R. (1996). Computerization and controversy: value conflicts and social choices . San Diego: Academic Press.

Kondo, A., & Singer, A. (2019 April 3). Labor without employment. Regulatory Review . Retrieved from https://www.theregreview.org/2019/04/03/kondo-singer-labor-without-employment/

Laczniak, G. R., & Murphy, P. E. (2006). Marketing, consumers and technology. Business Ethics Quarterly, 16 (3), 313–321.

LaRosa, E., & Danks, D. (2018). Impacts on trust of healthcare AI. Proceedings of the 2018 AAAI/ACM conference on artificial intelligence, ethics, and society . https://doi.org/10.1145/3278721.3278771

Latour, B. (1992). Where are the missing masses? The sociology of a few mundane artifacts. In W. Bijker & J. Law (Eds.), Shaping technology/building society: Studies in sociotechnical change (pp. 225–258). Cambridge, MA: MIT Press.

Law, J., & Callon, M. (1988). Engineering and sociology in a military aircraft project: A network analysis of technological change. Social Problems, 35 (3), 284–297. https://doi.org/10.2307/800623 .

Lomas, N. (2019). Even the IAB warned adtech risks EU privacy rules. Tech Crunch. https://techcrunch.com/2019/02/21/even-the-iab-warned-adtech-risks-eu-privacy-rules/

Manders-Huits, N., & Zimmer, M. (2009). Values and pragmatic action: The challenges of introducing ethical intelligence in technical design communities. International Review of Information Ethics, 10 (2), 37–45.

Markus, M. L., & Robey, D. (1988). Information technology and organizational change: Causal structure in theory and research. Management Science, 34 (5), 583–598.

Martin, K. (2019, June). Designing ethical algorithms. MIS Quarterly Executive.

Martin, K. (Forthcoming). Ethics and accountability of algorithms. Journal of Business Ethics .

Moor, J. H. (1985). What is computer ethics? Metaphilosophy , 16 (4), 266–275.

Orlikowski, W. J., & Barley, S. R. (2001). Technology and institutions: What can research on information technology and research on organizations learn from each other? MIS Quarterly, 25 (2), 145–165.

Pirson, M., Martin, K., & Parmar, B. (2016). Public trust in business and its determinants. Business & Society . https://doi.org/10.1177/0007650316647950 .

Porter, E. (2019 February 23). Don’t fight the robots, tax them. New York Times. Retrieved from https://www.nytimes.com/2019/02/23/sunday-review/tax-artificial-intelligence.html

Roose, K. (2019, January 30). Maybe only Tim Cook can fix Facebook’s privacy problem. New York Times. Retrieved from https://www.nytimes.com/2019/01/30/technology/facebook-privacy-apple-tim-cook.html

Rudner, R. (1953). The scientist qua scientist makes value judgments. Philosophy of Science, 20 (1), 1–6.

Schüll, N. D. (2014). Addiction by design: Machine gambling in Las Vegas (Reprint edition) . Princeton: Princeton University Press.

Selbst, A. D., & Barocas, S. I. (2018). The intuitive appeal of explainable machines. Fordham Law Review, 87, 1085–1140.

Shcherbina, A., Mattsson, C. M., Waggott, D., Salisbury, H., Christle, J. W., Hastie, T., … Ashley, E. A. (2017). Accuracy in wrist-worn, sensor-based measurements of heart rate and energy expenditure in a diverse cohort. Journal of Personalized Medicine, 7(2), 3. https://doi.org/10.3390/jpm7020003

Shilton, K. (2018). Engaging values despite neutrality: Challenges and approaches to values reflection during the design of internet infrastructure. Science, Technology and Human Values, 43 (2), 247–269.

Shilton, K., & Greene, D. (2019). Linking platforms, practices, and developer ethics: Levers for privacy discourse in mobile application development. Journal of Business Ethics, 155 (1), 131–146.

Shilton, K., Koepfler, J. A., & Fleischmann, K. R. (2013). Charting sociotechnical dimensions of values for design research. The Information Society, 29 (5), 259–271.

Smith, B., & Shum, H. (2018). The future computed: Artificial intelligence and its role in society . Retrieved from https://blogs.microsoft.com/blog/2018/01/17/future-computed-artificial-intelligence-role-society/

Spiekermann, S., Korunovska, J., & Langheinrich, M. (2018). Inside the organization: Why privacy and security engineering is a challenge for engineers. Proceedings of the IEEE, PP(99), 1–16.

Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online Manipulation: Hidden Influences in a Digital World. Available at SSRN 3306006 . https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3306006

Turow, J., Hennessy, M., & Draper, N. (2015). The tradeoff fallacy: How marketers are misrepresenting American consumers and opening them up to exploitation. Annenberg School for Communication. https://www.asc.upenn.edu/sites/default/files/TradeoffFallacy_1.pdf

Winner, L. (1980). Do artifacts have politics? Daedalus, 109 (1), 121–136.

Wong, J. (2018, March 28). Apple’s Tim Cook rebukes Zuckerberg over Facebook’s business model. The Guardian. Retrieved from https://www.theguardian.com/technology/2018/mar/28/facebook-apple-tim-cook-zuckerberg-business-model

Zuckerberg, M. (2019 March 30). The internet needs new rules. Washington Post. Retrieved from https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/


Author information

Authors and affiliations.

George Washington University, Washington, DC, USA

Kirsten Martin

University of Maryland, College Park, MD, USA

Katie Shilton

Seattle University, Seattle, WA, USA

Jeffery Smith

Corresponding author

Correspondence to Kirsten Martin .

Ethics declarations

Animal and human rights

The authors conducted no research on human participants or animals.

Conflict of interest

The authors declare that they have no conflict of interest.

Informed Consent

The authors had no reason to receive informed consent (no empirical research).

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Martin, K., Shilton, K. & Smith, J. Business and the Ethical Implications of Technology: Introduction to the Symposium. J Bus Ethics 160 , 307–317 (2019). https://doi.org/10.1007/s10551-019-04213-9

Received : 22 May 2019

Accepted : 28 May 2019

Published : 13 June 2019

Issue Date : December 2019

  • Socio-technical systems
  • Science and technology studies
  • Values in design
  • Social contract theory


Business LibreTexts

15.9: Examples of Unethical Business Behavior



Learning Objectives

  • Give examples of unethical corporate or business behavior

In business, sometimes ethics comes down to deciding whether or not to tell the truth. Admitting an error, disclosing material facts, or sending a customer to a competitor are all decisions that business people make based on issues of honesty and integrity. Because honesty and integrity are often used in the same breath, many people believe that they are one and the same. However, they are decidedly different, and each is important in its own way. As Professor Stephen L. Carter of Yale Law School points out in his book Integrity, “one cannot have integrity without being honest, but one can be honest and yet lack integrity.”

Integrity means adherence to principles. It’s a three-step process: choosing the right course of conduct; acting consistently with the choice—even when it’s inconvenient or unprofitable to do so; openly declaring where one stands. Accordingly, integrity is equated with moral reflection, steadfastness to commitments, and trustworthiness.

The major difference between honesty and integrity is that one may be entirely honest without engaging in the thought and reflection that integrity demands. The honest person may truthfully tell what he or she believes without the advance determination of whether it’s right or wrong. Sometimes the difference is subtle. Take the following example:

Being himself a graduate of an elite business school, a manager gives the more challenging assignments to staff with the same background. He does this, he believes, because they will do the job best, and for the benefit of those who did not attend similar institutions: he doesn't want them to fail. He claims integrity because he is acting according to his beliefs.

The manager fails the integrity test. The question is not whether his actions are consistent with what he most deeply believes but whether he has done the hard work of ascertaining whether what he believes is right and true.[1]

Companies that value honesty and integrity can expect to see those values permeate their company culture. In such a climate, coworkers trust one another, employees view management with less suspicion, and customers spread the word about the company’s ethical behavior. Honest companies also don’t have to worry about getting into trouble with the IRS or the media on account of ethical wrongdoing. Even though a company may have to give up short-term gains in order to maintain an atmosphere of honesty and integrity, in the long run it will come out ahead.


  • Thomas, Jim. "Honesty Is Not Synonymous With Integrity, And We Need To Know The Difference, For Integrity Is What We Need." Alliance for Integrity RSS. August 15, 2011. Accessed February 27, 2019. http://allianceforintegrity.com/integrity-articles/honesty-is-not-synonymous-with-integrityand-we-need-to-know-the-differencefor-integrity-is-what-we-need/

Contributors and Attributions

  • Examples of Unethical Business Behavior. Authored by: Linda Williams and Lumen Learning. License: CC BY: Attribution
  • Try It: Ethics. Authored by: Clark Aldrich for Lumen Learning. License: CC BY: Attribution
  • Practice Questions. Authored by: Robert Danielson. Provided by: Lumen Learning. License: CC BY: Attribution

Starbucks sued for allegedly using coffee from farms with rights abuses while touting its ‘ethical’ sourcing

People stand outside a Starbucks in Los Angeles in 2022.

A consumer advocacy group is suing Starbucks, the world’s largest coffee brand, for false advertising, alleging that it sources coffee and tea from farms with human rights and labor abuses, while touting its commitment to ethical sourcing.

The case, filed in a Washington, D.C., court on Wednesday on behalf of American consumers, alleges that the coffee giant is misleading the public by widely marketing its “100% ethical” sourcing commitment on its coffee and tea products, when it knowingly sources from suppliers with “documented, severe human rights and labor abuses.”

“On every bag of coffee and box of K-cups that Starbucks sells, Starbucks is heralding its commitment to 100% ethical sourcing,” said Sally Greenberg, CEO of the National Consumers League, the legal advocacy group bringing the case. “But it’s pretty clear that there are significant human rights and labor abuses across Starbucks’ supply chain.”

The lawsuit cites reporting about human rights and labor abuses on specific coffee and tea farms in Guatemala , Kenya and Brazil , and alleges that Starbucks has continued to purchase from these suppliers in spite of the documented violations.

"We are aware of the lawsuit, and plan to aggressively defend against the asserted claims that Starbucks has misrepresented its ethical sourcing commitments to customers," said a spokesperson for Starbucks.

In an earlier statement they said, "We take allegations like these extremely seriously and are actively engaged with farms to ensure they adhere to our standards. Each supply chain is required to undergo reverification regularly and we remain committed to working with our business partners to meet the expectations detailed in our Global Human Rights Statement."

In Brazil, labor officials have cracked down on several reported Starbucks suppliers over abusive and unsafe labor practices in recent years, including garnishing the cost of harvesting equipment from farm workers' wages; failing to provide clean drinking water, personal protective equipment, and bathrooms; and employing underage workers. In 2022, 17 workers, including three minors, were rescued by Brazilian inspectors from "modern slavery," according to Reporter Brasil, at a coffee farm managed by a man whose coffee roaster company had received Starbucks' seal of certification a month earlier.

In response to the Reporter Brasil stories and reported labor abuses in Kenya and Guatemala cited in the lawsuit, Starbucks issued statements at the time that the company was “deeply concerned,” and that it would “thoroughly investigate” claims of labor violations, “take immediate action” to suspend purchases or “ensure corrective action” occurred.

Starbucks told NBC News it has since taken corrective action in both Guatemala and Kenya.

A coffee roaster takes a scoop of coffee beans from a roaster

In a promotional video on its coffee academy website, a Starbucks coffee buyer says the company’s ethical sourcing stamp “means that we are buying coffee, making sure that it’s good for the planet and good for the people who produce it.”

Greenberg said the suit aims to prevent Starbucks from making claims like those — particularly its “Committed to 100% Ethical Coffee Sourcing” advertising — unless the company improves labor practices within its supply chain.

Starbucks, like many companies, uses third-party certification programs to ensure the integrity of its supply chains for tea and cocoa. The company launched its own sourcing standards, called C.A.F.E. Practices, in 2004 to oversee its coffee sourcing in more than 30 countries. The verification program is administered by a company called SCS Global Services in collaboration with Conservation International.

The verification program holds Starbucks coffee suppliers to more than 200 environmental, labor and quality standards. Farms that fail to meet those can be barred from supplying the company until corrective action is confirmed.

But there have long been issues with how effective such programs are, according to experts.

In 2021, Rainforest Alliance, the third-party that certifies Starbucks’ supply chains for tea and cocoa, was sued in D.C. court by another consumer advocacy group over “false and deceptive marketing” of Hershey’s cocoa as “100 percent certified and sustainable.” A judge ruled last year that the case could move forward only against Hershey, as the manufacturer of the products. 

Rainforest Alliance did not immediately respond to a request for comment. 

“There is this huge pile of evidence that shows that the mechanisms that [certifiers are] relying on to address problems like forced labor, child labor, gender based violence, are extremely flawed and not working very well,” said Genevieve LeBaron, director of the School of Public Policy at Canada’s Simon Fraser University.

"We have incident after incident that's uncovered in these supply chains. And still, companies go around and make these kinds of claims that they have 100% sustainable or ethical sourcing," said LeBaron, whose research into cocoa and tea has shown that the prevalence and severity of labor violations on certified and uncertified farms was "basically identical."

LeBaron, who has consulted for the United Nations on global supply chain ethics, said the issue is not unique to Starbucks, but ethical commitments from large purchasing players like Starbucks can have an outsize impact on the integrity of supply chains if they are backed up.

Starbucks has 10 "farmer support centers" in coffee-producing regions around the globe, including Brazil and Guatemala, but does not release public lists of certified suppliers, making it difficult to track how often its suppliers are found to be engaging in labor abuses.

“I think it is really hard to have an ethical supply chain. And I would say, you know, a lot of the reason for that is that, especially in agriculture, there’s a sort of status quo of sourcing goods way below the cost of actually producing them. And as long as you have that, you’re gonna have problems,” LeBaron said.

Kenzi Abou-Sabe is a reporter and producer in the NBC News Investigative Unit.


Adiel Kaplan is a reporter with the NBC News Investigative Unit.


Business school teaching case study: risks of the AI arms race



David De Cremer


Prabhakar Raghavan, Google’s search chief, was preparing for the Paris launch of its much-anticipated artificial intelligence chatbot in February last year when he received some unpleasant news.

Two days earlier, his chief executive, Sundar Pichai, had boasted that the chatbot, Bard, “draws on information from the web to provide fresh, high-quality responses”. But, within hours of Google posting a short gif video on Twitter demonstrating Bard in action, observers spotted that the bot had given a wrong answer.

Bard’s response to “What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?” was that the telescope had taken the very first pictures of a planet outside the Earth’s solar system. In fact, those images were generated by the European Southern Observatory’s Very Large Telescope nearly two decades before. It was an error that harmed Bard’s credibility and wiped $100bn off the market value of Google’s parent company, Alphabet.

The incident highlighted the dangers in the high-pressure arms race around AI. It has the potential to improve accuracy, efficiency and decision-making. However, while developers are expected to have clear boundaries for what they will do and to act responsibly when bringing technology to the market, the temptation is to prioritise profit over reliability.

The genesis of the AI arms race can be traced back to 2019, when Microsoft chief executive Satya Nadella realised that Google's AI-powered auto-complete function in Gmail was becoming so effective that his own company was at risk of being left behind in AI development.

Test yourself

This article is part of a collection of 'instant teaching case studies' exploring business challenges. Read the piece then consider the questions at the end.

About the author: David De Cremer is the Dunton Family Dean and a professor of management and technology at D’Amore-McKim School of Business at Northeastern University in Boston. He is author of ‘The AI-Savvy Leader: 9 ways to take back control and make AI work’ (Harvard Business Review Press, 2024).

Technology start-up OpenAI, which needed external capital to secure additional computing resources, provided an opportunity. Nadella quietly made an initial $1bn investment. He believed that a collaboration between the two companies would allow Microsoft to commercialise OpenAI’s future discoveries, making Google “dance” and eating into its dominant market share. He was soon proved right.

Microsoft’s swift integration of OpenAI’s ChatGPT into Bing marked a strategic coup, projecting an image of technological ascendancy over Google. In an effort not to be left behind, Google rushed to release its own chatbot — even though the company knew that Bard was not ready to compete with ChatGPT. Its haste-driven error cost Alphabet $100bn in market capitalisation.

Nowadays, it seems the prevailing modus operandi in the tech industry is a myopic fixation on pioneering ever-more-sophisticated AI software. Fear of missing out compels companies to rush unfinished products to market, disregarding inherent risks and costs. Meta, for example, recently confirmed its intention to double down in the AI arms race, despite rising costs and a nearly 12 per cent drop in its share price.

There appears to be a conspicuous absence of purpose-driven initiatives, with a focus on profit eclipsing societal welfare considerations. Tesla rushed to launch its AI-based "Fully Self Driving" (FSD) features, for example, with technology nowhere near the maturity needed for safe deployment on roads. FSD, with driver inattention, has been linked to hundreds of crashes and dozens of deaths.

As a result, Tesla has had to recall more than 2mn vehicles because of FSD/autopilot issues. Regulators also argue that, despite identifying concerns about drivers' ability to reverse the necessary software updates, Tesla did not make the suggested changes part of the recall.

Compounding the issue is the proliferation of sub-par "so-so technologies". For example, two new GenAI-based portable gadgets, Rabbit R1 and Humane AI Pin, triggered a backlash, accused of being unusable, overpriced, and not solving any meaningful problem.

Unfortunately, this trend will not slow: driven by a desire to capitalise as quickly as possible on incremental improvements of ChatGPT, some start-ups are rushing to launch “so-so” GenAI-based hardware devices. They appear to show little interest in whether a market exists; the goal seems to be winning any possible AI race available, regardless of whether it adds value for end users. In response, OpenAI has warned start-ups to stop engaging in an opportunistic and short-term strategy of pursuing purposeless innovations and noted that more powerful versions of ChatGPT are coming that can easily replicate any GPT-based apps that the start-ups are launching.

In response, governments are preparing regulations to govern AI development and deployment. Some tech companies are responding with greater responsibility. A recent open letter signed by industry leaders endorsed the idea that "It is our collective responsibility to make choices that maximise AI's benefits and mitigate the risks, for today and for the future generations".

As the tech industry grapples with the ethical and societal implications of AI proliferation, some consultants, customers and external groups are making the case for purpose-driven innovation. While regulators offer a semblance of oversight, progress will require industry stakeholders to take responsibility for fostering an ecosystem that gives greater priority to societal welfare.

Questions for discussion

Do tech companies bear responsibility for how businesses deploy artificial intelligence in possibly wrong and unethical ways?

What strategies can tech companies follow to keep purpose centre stage and see profit as an outcome of purpose?

Should bringing AI to market be more regulated? And if so, how?

How do you predict that the tendency to race to the bottom will play out in the next five to 10 years in businesses working with AI? Which factors are most important?

What risks for companies are associated with not joining the race to the bottom in AI development? How can these risks be managed by adopting a more purpose-driven strategy? What factors are important in that scenario?

