How to Write an Ethics Paper: Guide & Ethical Essay Examples


An ethics essay is a type of academic writing that explores ethical issues and dilemmas. Students should evaluate them in terms of moral principles and values. The purpose of an ethics essay is to examine the moral implications of a particular issue and provide a reasoned argument in support of an ethical perspective.

Writing an essay about ethics is a tough task for most students. The process involves creating an outline to guide your arguments about a topic and planning your ideas to convince the reader of your position on a difficult issue. If you still need assistance putting your thoughts together to compose a good paper, you have come to the right place. We have provided a series of steps and tips to show how you can achieve success in writing. This guide will tell you how to write an ethics paper, using ethical essay examples to help you understand every step it takes to become proficient. In case you don’t have time for writing, get in touch with our professional essay writers for hire. Our experts work hard to supply students with excellent essays.

What Is an Ethics Essay?

An ethics essay uses moral theories to build arguments on an issue. You describe a controversial problem and examine it to determine how it affects individuals or society. Ethics papers analyze arguments on both sides of a possible dilemma, focusing on right and wrong. The resulting analysis can then be applied to real-life cases. Before embarking on writing an ethical essay, keep in mind that most individuals follow moral principles; from a social perspective, these rules define how a person behaves or acts towards others. Therefore, your essay on ethics needs to demonstrate how a person feels about these moral principles. More specifically, your task is to show how significant the issue is and discuss whether you value or discredit it.

Purpose of an Essay on Ethics

The primary purpose of an ethics essay is to initiate an argument on a moral issue using reasoning and critical evidence. Instead of providing general information about a problem, you present solid arguments about how you view the moral concern and how it affects you or society. When writing an ethical paper, you demonstrate philosophical competence, using appropriate moral perspectives and principles.

Things to Write an Essay About Ethics On

Before you start to write ethics essays, consider a topic you can easily address. In most cases, an ethical issues essay analyzes right and wrong. This includes discussing ethics and morals and how they contribute to right behavior. You can also talk about work ethic, codes of conduct, and how employees promote or disregard the need for change. However, you can explore other areas by asking yourself what ethics means to you. Think about how a recent game you watched with friends started a controversial argument, or a newspaper story that you felt was misunderstood or blown out of proportion. This way, you can come up with an excellent topic that resonates with your personal ethics and beliefs.

Ethics Paper Outline

Sometimes, you will be asked to submit an outline before writing an ethics paper. Creating an outline is an essential step in producing a good essay. You can use it to arrange your points and supporting evidence before writing. It also helps organize your thoughts, enabling you to fill any gaps in your ideas. The outline should consist of short, numbered sentences covering each section of the paper, so that you can plan your work and account for all sources while writing. An ethics essay outline looks as follows:

  • Introduction: background information and a thesis statement
  • Body paragraphs: main arguments supported by evidence and examples
  • Conclusion: restated thesis statement, summary of key points, and final thoughts on the topic

Using this outline will improve clarity and focus throughout your writing process.

Ethical Essay Structure

Ethics essays are similar to other essays in their format, outline, and structure. An ethical essay should have a well-defined introduction, body, and conclusion. When planning your ideas, make sure that the introduction and conclusion together are around 20 percent of the paper, leaving the rest to the body. We will take a detailed look at what each part entails and give examples to help you understand them better. Refer to our essay structure examples to find a fitting way of organizing your writing.

Ethics Paper Introduction

An ethics essay introduction gives a synopsis of your main argument. The first step in writing it is introducing the topic and describing its background. This paragraph should be brief and straight to the point, informing readers of your position on the issue. Start with an essay hook to generate interest from your audience: it can be a question you will address or a misunderstanding that leads up to your main argument. You can also preview the perspectives to be discussed, informing readers what to expect in the paper.

Ethics Essay Introduction Example

You can find many ethics essay introduction examples on the internet. In this guide, we have written an excellent extract to demonstrate how it should be structured. As you read, examine how it begins with a hook and then provides background information on an issue. 

Imagine living in a world where people only lie, and honesty is becoming a scarce commodity. Indeed, modern society is facing this reality, as truth and deception can no longer be separated. Technology has facilitated the rapid transmission of voluminous information, making it hard to separate facts from opinions.

In this example, the first sentence of the introduction hooks the reader by asking them to imagine a scenario, and the following sentences provide background on the issue.

Ethics Essay Thesis Statement

An ethics paper must contain a thesis statement in the first paragraph. Learning how to write a thesis statement for an ethics paper is necessary as readers often look at it to gauge whether the essay is worth their time.

When you deviate from the thesis, your whole paper loses meaning. In ethics essays, your thesis statement is a roadmap for your writing, stressing your position on the problem and giving reasons for taking that stance. It should focus on a specific element of the issue being discussed. When writing a thesis statement, ensure that you can easily make arguments for or against its stance.

Ethical Paper Thesis Example

Look at this example of an ethics paper thesis statement and examine how well it has been written to state a position and provide reasons for doing so:

The moral implications of dishonesty are far-reaching as they undermine trust, integrity, and other foundations of society, damaging personal and professional relationships. 

The above thesis statement example is clear and concise, indicating that this paper will highlight the effects of dishonesty in society. Moreover, it focuses on aspects of personal and professional relationships.

Ethics Essay Body

The body section is the heart of an ethics paper as it presents the author's main points. In an ethical essay, each body paragraph has several elements that should explain your main idea. These include:

  • A topic sentence that is precise and reiterates your stance on the issue.
  • Evidence supporting it.
  • Examples that illustrate your argument.
  • A thorough analysis showing how the evidence and examples relate to that issue.
  • A transition sentence that connects one paragraph to another with the help of essay transitions.

When you write an ethics essay, adding relevant examples strengthens your main point and makes it easy for others to comprehend your argument.

Body Paragraph for Ethics Paper Example

A good body paragraph must have a well-defined topic sentence that makes a claim and includes evidence and examples to support it. Look at the excerpt from an ethics essay body paragraph below and see how its idea has been developed:

Honesty is an essential component of professional integrity. In many fields, trust and credibility are crucial for professionals to build relationships and succeed. For example, a doctor who is dishonest about a potential side effect of a medication is not only acting unethically but also putting the health and well-being of their patients at risk. Similarly, a dishonest businessperson may achieve short-term benefits but will eventually lose their clients’ trust.

Ethics Essay Conclusion

A concluding paragraph shares the summary and overview of the author's main arguments. Many students need clarification on what should be included in the essay conclusion and how best to get a reader's attention. When writing an ethics paper conclusion, consider the following:

  • Restate the thesis statement to emphasize your position.
  • Summarize its main points and evidence.
  • Offer final thoughts on the issue and any other considerations.

You can also reflect on the topic or acknowledge any possible challenges or questions that have not been answered. A closing statement should present a call to action on the problem based on your position.

Sample Ethics Paper Conclusion

The conclusion paragraph restates the thesis statement and summarizes the arguments presented in that paper. The sample conclusion for an ethical essay example below demonstrates how you should write a concluding statement.  

In conclusion, the implications of dishonesty and the importance of honesty in our lives cannot be overstated. Honesty builds solid relationships, effective communication, and better decision-making. This essay has explored how dishonesty impacts people and that we should value honesty. We hope this essay will help readers assess their behavior and work towards being more honest in their lives.

In the above extract, the writer gives final thoughts on the topic, urging readers to adopt honest behavior.

How to Write an Ethics Paper?

As you learn how to write an ethics essay, it is not advisable to choose a topic and begin writing immediately. With this method, you will get stuck or fail to present concrete ideas. A good writer understands the importance of planning: you should organize your work and ensure it captures the key elements that shed light on your arguments. Hence, following the essay structure and creating an outline to guide your writing process is the best approach. In the following segment, we have highlighted step-by-step techniques for writing a good ethics paper.

1. Pick a Topic

Before writing ethical papers, brainstorm to find ideal topics that can be easily debated. For starters, make a list, then select a title that presents a moral issue that may be explained and addressed from opposing sides. Make sure you choose one that interests you. Here are a few ideas to help you search for topics:

  • Review current trends affecting people.
  • Think about your personal experiences.
  • Study different moral theories and principles.
  • Examine classical moral dilemmas.

Once you find a suitable topic, conduct preliminary research and ascertain that there are enough sources to support it before you start writing your ethics essay.

2. Conduct In-Depth Research

Once you choose a topic for your essay, the next step is gathering sufficient information about it. Conducting in-depth research entails looking through scholarly journals to find credible material. Ensure you note down all the sources you find helpful for writing your ethics paper. Use the following steps to guide your research:

  • Clearly state and define the problem you want to discuss. This will guide your research process.
  • Develop keywords that match the topic.
  • Begin searching from a wide perspective. This will allow you to collect more information; then narrow it down using the keywords identified above.

3. Develop an Ethics Essay Outline

An outline will ease your writing process when developing an ethics essay. As you develop a paper on ethics, jot down factual ideas that will build your paragraphs for each section. Include the following steps in your process:

  • Review the topic and information gathered to write a thesis statement.
  • Identify the main arguments you want to discuss and include their evidence.
  • Group them into sections, each presenting a new idea that supports the thesis.
  • Write an outline.
  • Review and refine it.

Examples can also be included to support your main arguments. The structure should be sequential and coherent, with a good flow from beginning to end. When you follow all these steps, you can create an engaging and organized outline that will help you write a good essay.

4. Write an Ethics Essay

Once you have selected a topic, conducted research, and outlined your main points, you can begin writing the essay. Ensure you adhere to the ethics paper format you have chosen. Start an ethics paper with an overview of your topic to capture the readers' attention. Build upon your paper by avoiding ambiguous arguments and using the outline to guide your essay on ethics. Finish the introduction paragraph with a thesis statement that explains your main position. Expand on your thesis statement in all essay paragraphs: each paragraph should start with a topic sentence and provide evidence plus an example to solidify your argument, strengthen the main point, and let readers see the reasoning behind your stance. Finally, conclude the essay by restating your thesis statement and summarizing all key ideas. Your conclusion should engage the reader, posing questions or urging them to reflect on the issue and how it will impact them.

5. Proofread Your Ethics Essay

Proofreading your essay is the last step, in which you check for any grammatical or structural errors. Typical mistakes you could encounter when writing your ethics paper include the following:

  • Commonly confused words: e.g., there, they’re, their.
  • Homophones: such as new vs. knew.
  • Inconsistencies: like mixing British and American spellings, e.g., colour vs. color.
  • Formatting issues: e.g., inconsistent spacing or font types.

While proofreading your ethical issue essay, read it aloud to detect lexical errors or ambiguous phrases that distort its meaning. Verify your information and ensure it is relevant and up to date. You can also ask a fellow student to read the essay and give feedback on its structure and quality.

Ethics Essay Examples

Writing an essay is challenging without the right steps. There are many ethics paper examples on the internet; however, we have provided a list of free ethics essay examples below that are well-structured and have solid arguments to help you write your paper. Click on them and see how each writing step has been integrated.

Ethics essay example 1

Ethics essay example 2

Ethics essay example 3

Ethics essay example 4

College ethics essay example 5

Ethics Essay Writing Tips

When writing papers on ethics, here are several tips to help you complete an excellent essay:

  • Choose a narrow topic and avoid broad subjects, as a narrow topic is easier to cover in detail.
  • Ensure you have background information. A good understanding of a topic can make it easy to apply all necessary moral theories and principles in writing your paper.
  • State your position clearly. It is important to be sure about your stance as it will allow you to draft your arguments accordingly.
  • When writing ethics essays, be mindful of your audience. Provide arguments that they can understand.
  • Integrate solid examples into your essay. Morality can be hard to understand; using examples will help a reader grasp these concepts.

Bottom Line on Writing an Ethics Paper

Writing an ethics essay is a common academic exercise that allows students to build critical skills. When you begin writing, state your stance on the issue and provide arguments to support your position. This guide gives information on how to write an ethics essay as well as examples of ethics papers. Remember to follow these points in your writing:

  • Create an outline highlighting your main points.
  • Write an effective introduction and provide background information on an issue.
  • Include a thesis statement.
  • Develop concrete arguments and their counterarguments, and use examples.
  • Sum up all your key points in your conclusion and restate your thesis statement.


Contact our academic writing platform and have your challenge solved. Here, you can order essays and papers on any topic and enjoy top quality. 


Daniel Howard is an Essay Writing guru. He helps students create essays that will strike a chord with the readers.


Ethical Considerations – Types, Examples and Writing Guide

Ethical Considerations

Ethical considerations in research refer to the principles and guidelines that researchers must follow to ensure that their studies are conducted in an ethical and responsible manner. These considerations are designed to protect the rights, safety, and well-being of research participants, as well as the integrity and credibility of the research itself.

Some of the key ethical considerations in research include:

  • Informed consent: Researchers must obtain informed consent from study participants, which means they must inform participants about the study’s purpose, procedures, risks, benefits, and their right to withdraw at any time.
  • Privacy and confidentiality: Researchers must ensure that participants’ privacy and confidentiality are protected. This means that personal information should be kept confidential and not shared without the participant’s consent.
  • Harm reduction: Researchers must ensure that the study does not harm the participants physically or psychologically. They must take steps to minimize the risks associated with the study.
  • Fairness and equity: Researchers must ensure that the study does not discriminate against any particular group or individual. They should treat all participants equally and fairly.
  • Use of deception: Researchers must use deception only if it is necessary to achieve the study’s objectives. They must inform participants of the deception as soon as possible.
  • Use of vulnerable populations: Researchers must be especially cautious when working with vulnerable populations, such as children, pregnant women, prisoners, and individuals with cognitive or intellectual disabilities.
  • Conflict of interest: Researchers must disclose any potential conflicts of interest that may affect the study’s integrity. This includes financial or personal relationships that could influence the study’s results.
  • Data manipulation: Researchers must not manipulate data to support a particular hypothesis or agenda. They should report the results of the study objectively, even if the findings are not consistent with their expectations.
  • Intellectual property: Researchers must respect intellectual property rights and give credit to previous studies and research.
  • Cultural sensitivity: Researchers must be sensitive to the cultural norms and beliefs of the participants. They should avoid imposing their values and beliefs on the participants and should be respectful of their cultural practices.

Types of Ethical Considerations

Types of Ethical Considerations are as follows:

Research Ethics

This includes ethical principles and guidelines that govern research involving human or animal subjects, ensuring that the research is conducted in an ethical and responsible manner.

Business Ethics

This refers to ethical principles and standards that guide business practices and decision-making, such as transparency, honesty, fairness, and social responsibility.

Medical Ethics

This refers to ethical principles and standards that govern the practice of medicine, including the duty to protect patient autonomy, informed consent, confidentiality, and non-maleficence.

Environmental Ethics

This involves ethical principles and values that guide our interactions with the natural world, including the obligation to protect the environment, minimize harm, and promote sustainability.

Legal Ethics

This involves ethical principles and standards that guide the conduct of legal professionals, including issues such as confidentiality, conflicts of interest, and professional competence.

Social Ethics

This involves ethical principles and values that guide our interactions with other individuals and society as a whole, including issues such as justice, fairness, and human rights.

Information Ethics

This involves ethical principles and values that govern the use and dissemination of information, including issues such as privacy, accuracy, and intellectual property.

Cultural Ethics

This involves ethical principles and values that govern the relationship between different cultures and communities, including issues such as respect for diversity, cultural sensitivity, and inclusivity.

Technological Ethics

This refers to ethical principles and guidelines that govern the development, use, and impact of technology, including issues such as privacy, security, and social responsibility.

Journalism Ethics

This involves ethical principles and standards that guide the practice of journalism, including issues such as accuracy, fairness, and the public interest.

Educational Ethics

This refers to ethical principles and standards that guide the practice of education, including issues such as academic integrity, fairness, and respect for diversity.

Political Ethics

This involves ethical principles and values that guide political decision-making and behavior, including issues such as accountability, transparency, and the protection of civil liberties.

Professional Ethics

This refers to ethical principles and standards that guide the conduct of professionals in various fields, including issues such as honesty, integrity, and competence.

Personal Ethics

This involves ethical principles and values that guide individual behavior and decision-making, including issues such as personal responsibility, honesty, and respect for others.

Global Ethics

This involves ethical principles and values that guide our interactions with other nations and the global community, including issues such as human rights, environmental protection, and social justice.

Applications of Ethical Considerations

Ethical considerations are important in many areas of society, including medicine, business, law, and technology. Here are some specific applications of ethical considerations:

  • Medical research: Ethical considerations are crucial in medical research, particularly when human subjects are involved. Researchers must ensure that their studies are conducted in a way that does not harm participants and that participants give informed consent before participating.
  • Business practices: Ethical considerations are also important in business, where companies must make decisions that are socially responsible and avoid activities that are harmful to society. For example, companies must ensure that their products are safe for consumers and that they do not engage in exploitative labor practices.
  • Environmental protection: Ethical considerations play a crucial role in environmental protection, as companies and governments must weigh the benefits of economic development against the potential harm to the environment. Decisions about land use, resource allocation, and pollution must be made in an ethical manner that takes into account the long-term consequences for the planet and future generations.
  • Technology development: As technology continues to advance rapidly, ethical considerations become increasingly important in areas such as artificial intelligence, robotics, and genetic engineering. Developers must ensure that their creations do not harm humans or the environment and that they are developed in a way that is fair and equitable.
  • Legal system: The legal system relies on ethical considerations to ensure that justice is served and that individuals are treated fairly. Lawyers and judges must abide by ethical standards to maintain the integrity of the legal system and to protect the rights of all individuals involved.

Examples of Ethical Considerations

Here are a few examples of ethical considerations in different contexts:

  • In healthcare: A doctor must ensure that they provide the best possible care to their patients and avoid causing them harm. They must respect the autonomy of their patients, and obtain informed consent before administering any treatment or procedure. They must also ensure that they maintain patient confidentiality and avoid any conflicts of interest.
  • In the workplace: An employer must ensure that they treat their employees fairly and with respect, provide them with a safe working environment, and pay them a fair wage. They must also avoid any discrimination based on race, gender, religion, or any other characteristic protected by law.
  • In the media: Journalists must ensure that they report the news accurately and without bias. They must respect the privacy of individuals and avoid causing harm or distress. They must also be transparent about their sources and avoid any conflicts of interest.
  • In research: Researchers must ensure that they conduct their studies ethically and with integrity. They must obtain informed consent from participants, protect their privacy, and avoid any harm or discomfort. They must also ensure that their findings are reported accurately and without bias.
  • In personal relationships: People must ensure that they treat others with respect and kindness, and avoid causing harm or distress. They must respect the autonomy of others and avoid any actions that would be considered unethical, such as lying or cheating. They must also respect the confidentiality of others and maintain their privacy.

How to Write Ethical Considerations

When writing about research involving human subjects or animals, it is essential to include ethical considerations to ensure that the study is conducted in a manner that is morally responsible and in accordance with professional standards. Here are some steps to help you write ethical considerations:

  • Describe the ethical principles: Start by explaining the ethical principles that will guide the research. These could include principles such as respect for persons, beneficence, and justice.
  • Discuss informed consent: Informed consent is a critical ethical consideration when conducting research. Explain how you will obtain informed consent from participants, including how you will explain the purpose of the study, potential risks and benefits, and how you will protect their privacy.
  • Address confidentiality: Describe how you will protect the confidentiality of the participants’ personal information and data, including any measures you will take to ensure that the data is kept secure and confidential.
  • Consider potential risks and benefits: Describe any potential risks or harms to participants that could result from the study and how you will minimize those risks. Also, discuss the potential benefits of the study, both to the participants and to society.
  • Discuss the use of animals: If the research involves the use of animals, address the ethical considerations related to animal welfare. Explain how you will minimize any potential harm to the animals and ensure that they are treated ethically.
  • Mention the ethical approval: Finally, it’s essential to acknowledge that the research has received ethical approval from the relevant institutional review board or ethics committee. State the name of the committee, the date of approval, and any specific conditions or requirements that were imposed.

When to Write Ethical Considerations

Ethical considerations should be written whenever research involves human subjects or has the potential to impact human beings, animals, or the environment in some way. Ethical considerations are also important when research involves sensitive topics, such as mental health, sexuality, or religion.

In general, ethical considerations should be an integral part of any research project, regardless of the field or subject matter. This means that they should be considered at every stage of the research process, from the initial planning and design phase to data collection, analysis, and dissemination.

Ethical considerations should also be written in accordance with the guidelines and standards set by the relevant regulatory bodies and professional associations. These guidelines may vary depending on the discipline, so it is important to be familiar with the specific requirements of your field.

Purpose of Ethical Considerations

Ethical considerations are an essential aspect of many areas of life, including business, healthcare, research, and social interactions. The primary purposes of ethical considerations are:

  • Protection of human rights: Ethical considerations help ensure that people’s rights are respected and protected. This includes respecting their autonomy, ensuring their privacy is respected, and ensuring that they are not subjected to harm or exploitation.
  • Promoting fairness and justice: Ethical considerations help ensure that people are treated fairly and justly, without discrimination or bias. This includes ensuring that everyone has equal access to resources and opportunities, and that decisions are made based on merit rather than personal biases or prejudices.
  • Promoting honesty and transparency: Ethical considerations help ensure that people are truthful and transparent in their actions and decisions. This includes being open and honest about conflicts of interest, disclosing potential risks, and communicating clearly with others.
  • Maintaining public trust: Ethical considerations help maintain public trust in institutions and individuals. This is important for building and maintaining relationships with customers, patients, colleagues, and other stakeholders.
  • Ensuring responsible conduct: Ethical considerations help ensure that people act responsibly and are accountable for their actions. This includes adhering to professional standards and codes of conduct, following laws and regulations, and avoiding behaviors that could harm others or damage the environment.

Advantages of Ethical Considerations

Here are some of the advantages of ethical considerations:

  • Builds Trust: When individuals or organizations follow ethical considerations, it creates a sense of trust among stakeholders, including customers, clients, and employees. This trust can lead to stronger relationships and long-term loyalty.
  • Reputation and Brand Image: Ethical considerations are often linked to a company’s brand image and reputation. By following ethical practices, a company can establish a positive image and reputation that can enhance its brand value.
  • Avoids Legal Issues: Ethical considerations can help individuals and organizations avoid legal issues and penalties. By adhering to ethical principles, companies can reduce the risk of facing lawsuits, regulatory investigations, and fines.
  • Increases Employee Retention and Motivation: Employees tend to be more satisfied and motivated when they work for an organization that values ethics. Companies that prioritize ethical considerations tend to have higher employee retention rates, leading to lower recruitment costs.
  • Enhances Decision-making: Ethical considerations help individuals and organizations make better decisions. By considering the ethical implications of their actions, decision-makers can evaluate the potential consequences and choose the best course of action.
  • Positive Impact on Society: Ethical considerations have a positive impact on society as a whole. By following ethical practices, companies can contribute to social and environmental causes, leading to a more sustainable and equitable society.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Issues & Debates: Ethical Implications of Research Studies and Theories

Last updated 22 Mar 2021


Implications are effects or consequences, and in this section you need to understand the consequences of research studies and theory.

In year one you studied ethical issues in psychological research, for example deception, informed consent, protection from harm, etc. These are examples of ethical implications/consequences for the participants who take part in the research and psychologists are required to balance the rights of the individual participants against the need to produce research that is useful for society. However, the term ethical implications also refers to other people, and psychologists should consider the implications of their findings in a wider context.

Ethical Implications of Research Studies: If you consider Milgram’s (1963) research, you need to consider whether the ‘ends justify the means’. The participants were deceived and were unable to give fully informed consent. The experiment also caused significant distress, and the participants were told or coerced to continue against their will. On the other hand, the participants were debriefed after the experiment and a follow-up interview took place a year later. The outcome of these follow-up interviews suggested that the participants had suffered no long-term effects.

Ethical Implications of Theories: Bowlby’s Theory of Attachment suggests that children form one special attachment bond, usually with their mother, which must take place within a critical period. Bowlby also suggested that this attachment bond affects future relationships through an internal working model. While Bowlby’s theory has contributed to the development of childcare practices, it has also encouraged the view that a woman’s place is at home with her children, which could make some mothers feel guilty for wanting to return to work following childbirth.

Exam Tip: If you are set an essay on ethical implications of research studies and theories, you can draw on what you know about ethical issues from your year one topics. However, there are also wider consequences that psychologists should consider relating to the communication and publication of their findings. This is especially relevant for research that is ‘socially sensitive’.


Writing Ethical Papers: Top Tips to Ace Your Assignment

17 August, 2021

13 minute read

Author: Kate Smith

Writing a complex essay can be a tough task for any student, especially for those who have not yet developed strong writing skills or do not have enough time for lengthy assignments. At the same time, the majority of college students need to keep their grades high to maintain their merit-based scholarships and continue their studies the next year. To help you write your ethics papers, we created this guide. Below, you will find out what an ethics paper is and how to structure and write it efficiently.


What is an Ethical Paper?

An ethics paper is a type of argumentative assignment that deals with a certain ethical problem that a student has to describe and solve. It can also be an essay in which a certain controversial event or concept is elaborated through an ethical lens (e.g., moral rules and principles), or a certain ethical dilemma is explained. Since ethics is connected to moral concepts and choices, a student needs a fair knowledge of philosophy and should be ready to answer questions related to relationships, justice, professional and social duties, the origin of good and evil, etc., to write a quality paper. Writing an ethics paper also implies that a student should process a great amount of information on the topic and analyze it according to the paper's terms.

General Aspects of Writing an Ethics Paper

Understanding the Features of Ethical Papers

Every essay has features that make it unique. Writing ethical papers implies that a student will use their knowledge of morality and philosophy to resolve a certain ethical dilemma or situation. It can also be a paper in which a student provides their reasoning on the ethical or legal circumstances that follow from a social issue, or one in which an ethical concept and its application are described. By contrast, a history essay deals with events that took place earlier, while a narrative essay is a paper in which students demonstrate their storytelling skills, etc.

Defining What Type of Essay Should Be Written

Most of the time, ethical paper topics imply that a student will write an argumentative essay; however, ethics essays can also be descriptive and expository. Each of these essay types has different guidelines for writing, so be sure you know them before you start writing your paper on ethics. If you skip this step at the preparation stage, you may end up writing a paper that misses many important points.

Studying the Ethical Paper Guidelines

Once you get your ethical paper assignment, look through the guidelines your instructor provided. If you receive them during class, don’t hesitate to ask questions immediately to remove any misunderstanding before writing an ethics paper outline, and ask for the references you need to use. When you are about to write your first draft, don’t rush: read the paper instructions once again to make sure you understand what is needed from you.

Paying Attention to the Paper Topic

The next thing you need to pay attention to is the ethical paper topic: once you are given one, make sure it falls within the scope of your educational course. After that, consider what additional knowledge may be needed to elaborate on your topic and think about which courses in your program could be helpful. Once you are done, read through your topic again to recheck whether you understand the assignment correctly.

Understanding the Notions of Ethical Arguments, Ethical and Legal Implications, and Ethical Dilemma

Last but not least, a student has to understand the basic terms of the assignment to write a high-quality paper. Ethical arguments are the moral rules used to defend your position on the ethical issue stated in your essay topic. Ethical versus legal implications concern how the outcomes of an ethical dilemma should be addressed: through moral censure or legal judgment. An ethical dilemma itself refers to a problem or situation that makes an individual doubt what position to take, e.g., abortion, bribery, corruption, etc.

Writing Outline and Structure of an Ethics Paper

Every essay has a structure that makes it a solid piece of writing with straight reasoning and argumentation, and an ethics paper is no exception. This paper has an introduction, body paragraphs, and a conclusion. Below, we describe how each part of an ethical paper should be organized and what information it should contain.

First comes the introduction. It is the opening part of your paper, which helps a reader get familiar with your topic and understand what your paper will be about. Therefore, it should contain some information on your ethics paper topic and a thesis statement, which is the central statement of your paper.

The essay body is the most substantive part of your essay, where all the reasoning and arguments are presented. Each paragraph should contain an argument that supports or contradicts your thesis statement, along with evidence to support your position. Pick at least three arguments to make your position clear; then your paper will be considered well-structured.

The third part of an ethics paper outline is the conclusion, the finishing part of the essay. Its goal is to wrap up the whole essay and make the author’s position clear one final time. The formulation in this part should be especially clear and concise to demonstrate the writer’s ability to draw conclusions and persuade readers.

Also, don’t forget to include a works cited page after your writing. It should list all the reference materials you used in your paper, either in order of appearance or alphabetically. This page should be formatted according to the assigned formatting style; the format most frequently used for ethical papers is APA.

20 Examples of Ethical Paper Topics

  • Are there any issues in the 21st century that we can consider immoral and why?
  • What is corporate ethics?
  • Why is being selfish no longer an issue in 2023?
  • Euthanasia: pros and cons
  • Marijuana legalization: should it be allowed all over the world?
  • Is abortion an ethical issue nowadays?
  • Can we invent a universal religion appropriate for all?
  • Is the church necessary to pray to God?
  • Can we forgive infidelity and should we do it?
  • How to react if you are witnessing high school bullying?
  • What are the ways to respond to an abusive family member?
  • How to demand your privacy protection in a digital world?
  • The history of American ethical thought
  • Can war be ethical and what should the conflicting sides do to make it possible?
  • Ethical issues of keeping a zoo in 2023
  • Who is in charge of controlling the world’s population?
  • How to achieve equality in the world’s rich and poor gap?
  • Is science ethical?
  • How ethical is genetic engineering?
  • Why do many countries refuse to reinstate the death penalty?

Ethical Papers Examples

If you still have no idea how to write an ethics paper, looking through other students’ successful examples is always a good idea. Below, you can find relevant ethics paper examples that you can skim through to see how to build reasoning and argumentation in your own paper.

https://www.currentschoolnews.com/education-news/ethics-essay-examples/

https://sites.psu.edu/academy/2014/11/18/essay-2-personal-ethics-and-decision-making/

Ethical Papers Writing Tips

Choose a topic that falls into the ethics course program

In case you were not given an ethics paper topic, consider choosing one yourself. To do that, brainstorm the ethical issues that fascinate you enough to research them. List all these issues on a sheet of paper, then cross out those that are too broad or require expertise you don’t have. Next, choose three or four ethical topics from the list and do a quick search online to find out whether they are covered well enough for you to find sources and reference materials. Finally, pick the topic that you like most and that offers the most relevant data for reference.

Do your research

Once the topic is chosen, dive deeper into it to find the most credible, reliable, and trusted sources. Use your university library, online scientific journals, documentaries, and other materials to gather information. Remember to take notes while working with each new piece of reference material so as not to forget the ideas you will base your argumentation on.

Follow the guidelines for a paper outline

During the preparation for your ethical paper and the process of writing it, remember to follow your professor’s instructions (e.g. font, size, spacing, citation style, etc.). If you neglect them, your grade for the paper will decrease significantly.

Write the essay body first

Do not rush to start writing your ethics paper from the very beginning; to write a good essay, you need your outline and thesis statement first. Then move on to the body paragraphs to demonstrate your expertise on the issue you are writing about. Remember that each supporting idea should be covered in its own paragraph and followed by the evidence that confirms it.

Make sure your introduction and conclusion translate the same message

After your essay body is done, write a conclusion and an introduction for your paper. The main tip regarding these parts is to make them interrelated: your conclusion has to restate your introduction without repeating it. Also, a conclusion should wrap up your writing and make it credible to the audience.

Add citations

Every top-quality paper has a works cited page and citations to demonstrate that research on the topic has been carried out. Therefore, do not omit this point when formatting your paper: add all sources to the works cited page and pay attention to citations throughout the text. The latter should be done according to the formatting style indicated in your instructions.

Edit your paper

Last but not least is the editing and proofreading stage, which you need to carry out before you submit your paper to your instructor. Consider keeping your first draft out of sight for a day or two to rest, then go back and check it for errors and redundant phrases. Don’t rush to change anything immediately after finishing your writing: you are still tired and less focused, so some mistakes may be missed.

Writing Help by Handmadewriting

If you feel that you need help with writing an ethics paper in view of its challenging nature, you can contact us and place an order through the respective button. You can add your paper details by following the steps of the order placement process on the website. Once your order is placed, we will get back to you as soon as possible. You will be able to contact your essay writer and let them know all your wishes regarding your ethical paper.

Our writers have expertise in writing ethical papers, so you don’t need to worry about the quality of the essay you will receive. Your assignment will be delivered on time and at a reasonable price. Note that urgent papers cost slightly more than assignments with a later deadline, so do not wait too long to place your order. We will be glad to assist you with your writing and guarantee 24/7 support until you receive your paper.

Lastly, remember that no paper can be written overnight, so if you intend to complete your paper in a few hours, you can end up writing only a first draft with imperfections. If you have only half a day before your task is due, feel free to place an urgent order, and we will deliver it in just three hours.



Ethics of Artificial Intelligence and Robotics

Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these.

After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects , i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects , i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3).

For each section within these themes, we provide a general explanation of the ethical issues , outline existing positions and arguments , then analyse how these play out with current technologies and finally, what policy consequences may be drawn.

1. Introduction

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage is done. In addition to such “ethical concerns”, new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and Robotics technologies—plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, in the Other Internet Resources section below, hereafter [OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

A last caveat: The ethics of AI and robotics is a very young field within applied ethics, with significant dynamics, but few well-established issues and no authoritative overviews—though there is a promising outline (European Group on Ethics in Science and New Technologies 2018) and there are beginnings on societal impact (Floridi et al. 2018; Taddeo and Floridi 2018; S. Taylor et al. 2018; Walsh 2018; Bryson 2019; Gibert 2019; Whittlestone et al. 2019), and policy recommendations (AI HLEG 2019 [OIR]; IEEE 2019). So this article cannot merely reproduce what the community has achieved thus far, but must propose an ordering where little order exists.

The notion of “artificial intelligence” (AI) is understood broadly as any kind of artificial computational system that shows intelligent behaviour, i.e., complex behaviour that is conducive to reaching goals. In particular, we do not wish to restrict “intelligence” to what would require intelligence if done by humans, as Minsky had suggested (1985). This means we incorporate a range of machines, including those in “technical AI”, that show only limited abilities in learning or reasoning but excel at the automation of particular tasks, as well as machines in “general AI” that aim to create a generally intelligent agent.

AI somehow gets closer to our skin than other technologies—thus the field of “philosophy of AI”. Perhaps this is because the project of AI is to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings. The main purposes of an artificially intelligent agent probably involve sensing, modelling, planning and action, but current AI applications also include perception, text analysis, natural language processing (NLP), logical reasoning, game-playing, decision support systems, data analytics, predictive analytics, as well as autonomous vehicles and other forms of robotics (P. Stone et al. 2016). AI may involve any number of computational techniques to achieve these aims, be that classical symbol-manipulating AI, inspired by natural cognition, or machine learning via neural networks (Goodfellow, Bengio, and Courville 2016; Silver et al. 2018).

Historically, it is worth noting that the term “AI” was used as above ca. 1950–1975, then came into disrepute during the “AI winter”, ca. 1975–1995, and narrowed. As a result, areas such as “machine learning”, “natural language processing” and “data science” were often not labelled as “AI”. Since ca. 2010, the use has broadened again, and at times almost all of computer science and even high-tech is lumped under “AI”. Now it is a name to be proud of, a booming industry with massive capital investment (Shoham et al. 2018), and on the edge of hype again. As Erik Brynjolfsson noted, it may allow us to

virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. (quoted in Anderson, Rainie, and Luchsinger 2018)

While AI can be entirely software, robots are physical machines that move. Robots are subject to physical impact, typically through “sensors”, and they exert physical force onto the world, typically through “actuators”, like a gripper or a turning wheel. Accordingly, autonomous cars or planes are robots, and only a minuscule portion of robots is “humanoid” (human-shaped), like in the movies. Some robots use AI, and some do not: Typical industrial robots blindly follow completely defined scripts with minimal sensory input and no learning or reasoning (around 500,000 such new industrial robots are installed each year (IFR 2019 [OIR])). It is probably fair to say that while robotics systems cause more concerns in the general public, AI systems are more likely to have a greater impact on humanity. Also, AI or robotics systems for a narrow set of tasks are less likely to cause new issues than systems that are more flexible and autonomous.

Robotics and AI can thus be seen as covering two overlapping sets of systems: systems that are only AI, systems that are only robotics, and systems that are both. We are interested in all three; the scope of this article is thus not only the intersection, but the union, of both sets.

Policy is only one of the concerns of this article. There is significant public discussion about AI ethics, and there are frequent pronouncements from politicians that the matter requires new policy, which is easier said than done: Actual technology policy is difficult to plan and enforce. It can take many forms, from incentives and funding, infrastructure, taxation, or good-will statements, to regulation by various actors, and the law. Policy for AI will possibly come into conflict with other aims of technology policy or general policy. Governments, parliaments, associations, and industry circles in industrialised countries have produced reports and white papers in recent years, and some have generated good-will slogans (“trusted/responsible/humane/human-centred/good/beneficial AI”), but is that what is needed? For a survey, see Jobin, Ienca, and Vayena (2019) and V. Müller’s list of PT-AI Policy Documents and Institutions.

For people who work in ethics and policy, there might be a tendency to overestimate the impact and threats from a new technology, and to underestimate how far current regulation can reach (e.g., for product liability). On the other hand, there is a tendency for businesses, the military, and some public administrations to “just talk” and do some “ethics washing” in order to preserve a good public image and continue as before. Actually implementing legally binding regulation would challenge existing business models and practices. Actual policy is not just an implementation of ethical theory, but subject to societal power structures—and the agents that do have the power will push against anything that restricts them. There is thus a significant risk that regulation will remain toothless in the face of economic and political power.

Though very little actual policy has been produced, there are some notable beginnings: The latest EU policy document suggests “trustworthy AI” should be lawful, ethical, and technically robust, and then spells this out as seven requirements: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being, and accountability (AI HLEG 2019 [OIR]). Much European research now runs under the slogan of “responsible research and innovation” (RRI), and “technology assessment” has been a standard field since the advent of nuclear power. Professional ethics is also a standard field in information technology, and this includes issues that are relevant in this article. Perhaps a “code of ethics” for AI engineers, analogous to the codes of ethics for medical doctors, is an option here (Véliz 2019). What data science itself should do is addressed in (L. Taylor and Purtova 2019). We also expect that much policy will eventually cover specific uses or technologies of AI and robotics, rather than the field as a whole. A useful summary of an ethical framework for AI is given in (European Group on Ethics in Science and New Technologies 2018: 13ff). On general AI policy, see Calo (2018) as well as Crawford and Calo (2016); Stahl, Timmermans, and Mittelstadt (2016); Johnson and Verdicchio (2017); and Giubilini and Savulescu (2018). A more political angle of technology is often discussed in the field of “Science and Technology Studies” (STS). As books like The Ethics of Invention (Jasanoff 2016) show, concerns in STS are often quite similar to those in ethics (Jacobs et al. 2019 [OIR]). In this article, we discuss the policy for each type of issue separately rather than for AI or robotics in general.

2. Main Debates

In this section we outline the ethical issues of human use of AI and robotics systems that can be more or less autonomous—which means we look at issues that arise with certain uses of the technologies which would not arise with others. It must be kept in mind, however, that technologies will always cause some uses to be easier, and thus more frequent, and hinder other uses. The design of technical artefacts thus has ethical relevance for their use (Houkes and Vermaas 2010; Verbeek 2011), so beyond “responsible use”, we also need “responsible design” in this field. The focus on use does not presuppose which ethical approaches are best suited for tackling these issues; they might well be virtue ethics (Vallor 2017) rather than consequentialist or value-based (Floridi et al. 2018). This section is also neutral with respect to the question whether AI systems truly have “intelligence” or other mental properties: It would apply equally well if AI and robotics are merely seen as the current face of automation (cf. Müller forthcoming-b).

2.1 Privacy & Surveillance

There is a general discussion about privacy and surveillance in information technology (e.g., Macnish 2017; Roessler 2017), which mainly concerns the access to private data and data that is personally identifiable. Privacy has several well recognised aspects, e.g., “the right to be let alone”, information privacy, privacy as an aspect of personhood, control over information about oneself, and the right to secrecy (Bennett and Raab 2006). Privacy studies have historically focused on state surveillance by secret services but now include surveillance by other state agents, businesses, and even individuals. The technology has changed significantly in the last decades while regulation has been slow to respond (though there is the Regulation (EU) 2016/679)—the result is a certain anarchy that is exploited by the most powerful players, sometimes in plain sight, sometimes in hiding.

The digital sphere has widened greatly: All data collection and storage is now digital, our lives are increasingly digital, most digital data is connected to a single Internet, and there is more and more sensor technology in use that generates data about non-digital aspects of our lives. AI increases both the possibilities of intelligent data collection and the possibilities for data analysis. This applies to blanket surveillance of whole populations as well as to classic targeted surveillance. In addition, much of the data is traded between agents, usually for a fee.

At the same time, controlling who collects which data, and who has access, is much harder in the digital world than it was in the analogue world of paper and telephone calls. Many new AI technologies amplify the known issues. For example, face recognition in photos and videos allows identification and thus profiling and searching for individuals (Whittaker et al. 2018: 15ff). This adds to identification techniques that are already commonplace on the Internet, e.g., “device fingerprinting” (sometimes revealed in the “privacy policy”). The result is that “In this vast ocean of data, there is a frighteningly complete picture of us” (Smolan 2016: 1:01). This is arguably a scandal that still has not received due public attention.

The data trail we leave behind is how our “free” services are paid for—but we are not told about that data collection and the value of this new raw material, and we are manipulated into leaving ever more such data. For the “big 5” companies (Amazon, Google/Alphabet, Microsoft, Apple, Facebook), the main data-collection part of their business appears to be based on deception, exploiting human weaknesses, furthering procrastination, generating addiction, and manipulation (Harris 2016 [OIR]). The primary focus of social media, gaming, and most of the Internet in this “surveillance economy” is to gain, maintain, and direct attention—and thus data supply. “Surveillance is the business model of the Internet” (Schneier 2015). This surveillance and attention economy is sometimes called “surveillance capitalism” (Zuboff 2019). It has caused many attempts to escape from the grasp of these corporations, e.g., in exercises of “minimalism” (Newport 2019), sometimes through the open source movement, but it appears that present-day citizens have lost the degree of autonomy needed to escape while fully continuing with their life and work. We have lost ownership of our data, if “ownership” is the right relation here. Arguably, we have lost control of our data.

These systems will often reveal facts about us that we ourselves wish to suppress or are not aware of: they know more about us than we know ourselves. Even just observing online behaviour allows insights into our mental states (Burr and Cristianini 2019) and enables manipulation (see below section 2.2). This has led to calls for the protection of “derived data” (Wachter and Mittelstadt 2019). With the last sentence of his bestselling book Homo Deus, Harari asks about the long-term consequences of AI:

What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves? (2016: 462)

Robotic devices have not yet played a major role in this area, except for security patrolling, but this will change once they are more common outside of industry environments. Together with the “Internet of things”, the so-called “smart” systems (phone, TV, oven, lamp, virtual assistant, home,…), “smart city” (Sennett 2018), and “smart governance”, they are set to become part of the data-gathering machinery that offers more detailed data, of different types, in real time, with ever more information.

Privacy-preserving techniques that can largely conceal the identity of persons or groups are now a standard staple in data science; they include (relative) anonymisation, access control (plus encryption), and other models where computation is carried out with fully or partially encrypted input data (Stahl and Wright 2018); in the case of “differential privacy”, a calibrated amount of noise is added to the output of queries so that individual contributions are masked (Dwork et al. 2006; Abowd 2017). While requiring more effort and cost, such techniques can avoid many of the privacy issues. Some companies have also seen better privacy as a competitive advantage that can be leveraged and sold at a price.
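As an illustration of the “calibrated noise” idea, here is a minimal sketch of the Laplace mechanism from the differential privacy literature (Dwork et al. 2006), assuming NumPy; the dataset and query are invented for illustration, and this is a toy, not a production implementation.

```python
import numpy as np

def private_count(data, predicate, epsilon=0.5):
    """Answer a counting query with epsilon-differential privacy.

    A counting query has sensitivity 1 (one person joining or leaving
    the dataset changes the true count by at most 1), so Laplace noise
    with scale 1/epsilon suffices.
    """
    true_count = sum(1 for row in data if predicate(row))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical query: how many records belong to people over 60?
records = [{"age": 34}, {"age": 71}, {"age": 65}, {"age": 58}]
print(private_count(records, lambda r: r["age"] > 60))
```

Smaller values of epsilon give stronger privacy at the cost of noisier answers, which is one concrete form of the extra “effort and cost” such techniques require.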

One of the major practical difficulties is to actually enforce regulation, both on the level of the state and on the level of the individual who has a claim. They must identify the responsible legal entity, prove the action, perhaps prove intent, find a court that declares itself competent … and eventually get the court to actually enforce its decision. Well-established legal protection of rights such as consumer rights, product liability, and other civil liability or protection of intellectual property rights is often missing in digital products, or hard to enforce. This means that companies with a “digital” background are used to testing their products on the consumers without fear of liability while heavily defending their intellectual property rights. This “Internet Libertarianism” is sometimes taken to assume that technical solutions will take care of societal problems by themselves (Morozov 2013).

2.2 Manipulation of Behaviour

The ethical issues of AI in surveillance go beyond the mere accumulation of data and direction of attention: They include the use of information to manipulate behaviour, online and offline, in a way that undermines autonomous rational choice. Of course, efforts to manipulate behaviour are ancient, but they may gain a new quality when they use AI systems. Given users’ intense interaction with data systems and the deep knowledge about individuals this provides, they are vulnerable to “nudges”, manipulation, and deception. With sufficient prior data, algorithms can be used to target individuals or small groups with just the kind of input that is likely to influence these particular individuals. A “nudge” changes the environment such that it influences behaviour in a predictable way that is positive for the individual and easy and cheap to avoid (Thaler and Sunstein 2008). There is a slippery slope from here to paternalism and manipulation.

Many advertisers, marketers, and online sellers will use any legal means at their disposal to maximise profit, including exploitation of behavioural biases, deception, and addiction generation (Costa and Halpern 2019 [OIR]). Such manipulation is the business model in much of the gambling and gaming industries, but it is spreading, e.g., to low-cost airlines. In interface design on web pages or in games, this manipulation uses what is called “dark patterns” (Mathur et al. 2019). At this moment, gambling and the sale of addictive substances are highly regulated, but online manipulation and addiction are not—even though manipulation of online behaviour is becoming a core business model of the Internet.

Furthermore, social media is now the prime location for political propaganda. This influence can be used to steer voting behaviour, as in the Facebook-Cambridge Analytica “scandal” (Woolley and Howard 2017; Bradshaw, Neudert, and Howard 2019) and—if successful—it may harm the autonomy of individuals (Susser, Roessler, and Nissenbaum 2019).

Improved AI “faking” technologies make what once was reliable evidence into unreliable evidence—this has already happened to digital photos, sound recordings, and video. It will soon be quite easy to create (rather than alter) “deep fake” text, photos, and video material with any desired content. Soon, sophisticated real-time interaction with persons over text, phone, or video will be faked, too. So we cannot trust digital interactions while we are at the same time increasingly dependent on such interactions.

One more specific issue is that machine learning techniques in AI rely on training with vast amounts of data. This means there will often be a trade-off between privacy and rights to data vs. technical quality of the product. This influences the consequentialist evaluation of privacy-violating practices.

The policy in this field has its ups and downs: Civil liberties and the protection of individual rights are under intense pressure from businesses’ lobbying, secret services, and other state agencies that depend on surveillance. Privacy protection has diminished massively compared to the pre-digital age when communication was based on letters, analogue telephone communications, and personal conversation and when surveillance operated under significant legal constraints.

While the EU General Data Protection Regulation (Regulation (EU) 2016/679) has strengthened privacy protection, the US and China prefer growth with less regulation (Thompson and Bremmer 2018), likely in the hope that this provides a competitive advantage. It is clear that state and business actors have increased their ability to invade privacy and manipulate people with the help of AI technology and will continue to do so to further their particular interests—unless reined in by policy in the interest of general society.

2.3 Opacity of AI Systems

Opacity and bias are central issues in what is now sometimes called “data ethics” or “big data ethics” (Floridi and Taddeo 2016; Mittelstadt and Floridi 2016). AI systems for automated decision support and “predictive analytics” raise “significant concerns about lack of due process, accountability, community engagement, and auditing” (Whittaker et al. 2018: 18ff). They are part of a power structure in which “we are creating decision-making processes that constrain and limit opportunities for human participation” (Danaher 2016b: 245). At the same time, it will often be impossible for the affected person to know how the system came to this output, i.e., the system is “opaque” to that person. If the system involves machine learning, it will typically be opaque even to the expert, who will not know how a particular pattern was identified, or even what the pattern is. Bias in decision systems and data sets is exacerbated by this opacity. So, at least in cases where there is a desire to remove bias, the analysis of opacity and bias go hand in hand, and political response has to tackle both issues together.

Many AI systems rely on machine learning techniques in (simulated) neural networks that will extract patterns from a given dataset, with or without “correct” solutions provided; i.e., supervised, semi-supervised or unsupervised. With these techniques, the “learning” captures patterns in the data and these are labelled in a way that appears useful to the decision the system makes, while the programmer does not really know which patterns in the data the system has used. In fact, the programs are evolving, so when new data comes in, or new feedback is given (“this was correct”, “this was incorrect”), the patterns used by the learning system change. What this means is that the outcome is not transparent to the user or programmers: it is opaque. Furthermore, the quality of the program depends heavily on the quality of the data provided, following the old slogan “garbage in, garbage out”. So, if the data already involved a bias (e.g., police data about the skin colour of suspects), then the program will reproduce that bias. There are proposals for a standard description of datasets in a “datasheet” that would make the identification of such bias more feasible (Gebru et al. 2018 [OIR]). There is also significant recent literature about the limitations of machine learning systems that are essentially sophisticated data filters (Marcus 2018 [OIR]). Some have argued that the ethical problems of today are the result of technical “shortcuts” AI has taken (Cristianini forthcoming).
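To make the “garbage in, garbage out” point concrete, here is a deliberately crude sketch of a learner that, given invented “historical” decisions in which an irrelevant group attribute happens to correlate with past outcomes, codifies exactly that bias. All names and data are hypothetical.

```python
from collections import Counter

# Invented "historical" hiring decisions: the group attribute is
# irrelevant to competence, but past decisions were biased, so it
# correlates perfectly with the recorded outcome.
training_data = [
    ({"group": "A", "qualified": True},  "hire"),
    ({"group": "A", "qualified": False}, "hire"),
    ({"group": "B", "qualified": True},  "reject"),
    ({"group": "B", "qualified": False}, "reject"),
]

def learn_rule(feature):
    """Associate each value of a feature with its most frequent label,
    a crude stand-in for what a statistical learner does with any
    column in the data that predicts the outcome."""
    table = {}
    for example, label in training_data:
        table.setdefault(example[feature], Counter())[label] += 1
    return {value: counts.most_common(1)[0][0] for value, counts in table.items()}

print(learn_rule("group"))      # {'A': 'hire', 'B': 'reject'}: the historical bias is codified
print(learn_rule("qualified"))  # ties broken arbitrarily: qualification carried no signal here
```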

There are several technical activities that aim at “explainable AI”, starting with (Van Lent, Fisher, and Mancuso 1999; Lomas et al. 2012) and, more recently, a DARPA programme (Gunning 2017 [OIR]). More broadly, the demand for

a mechanism for elucidating and articulating the power structures, biases, and influences that computational artefacts exercise in society (Diakopoulos 2015: 398)

is sometimes called “algorithmic accountability reporting”. This does not mean that we expect an AI to “explain its reasoning”—doing so would require far more serious moral autonomy than we currently attribute to AI systems (see below §2.10).

The politician Henry Kissinger pointed out that there is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans, but cannot explain its decisions. He says we may have “generated a potentially dominating technology in search of a guiding philosophy” (Kissinger 2018). Danaher (2016b) calls this problem “the threat of algocracy” (adopting the previous use of ‘algocracy’ from Aneesh 2002 [OIR], 2006). In a similar vein, Cave (2019) stresses that we need a broader societal move towards more “democratic” decision-making to avoid AI being a force that leads to a Kafka-style impenetrable suppression system in public administration and elsewhere. The political angle of this discussion has been stressed by O’Neil in her influential book Weapons of Math Destruction (2016), and by Yeung and Lodge (2019).

In the EU, some of these issues have been taken into account with Regulation (EU) 2016/679, which foresees that consumers, when faced with a decision based on data processing, will have a legal “right to explanation”—how far this goes and to what extent it can be enforced is disputed (Goodman and Flaxman 2017; Wachter, Mittelstadt, and Floridi 2016; Wachter, Mittelstadt, and Russell 2017). Zerilli et al. (2019) argue that there may be a double standard here, where we demand a high level of explanation for machine-based decisions despite humans sometimes not reaching that standard themselves.

2.4 Bias in Decision Systems

Automated AI decision support systems and “predictive analytics” operate on data and produce a decision as “output”. This output may range from the relatively trivial to the highly significant: “this restaurant matches your preferences”, “the patient in this X-ray has completed bone growth”, “application to credit card declined”, “donor organ will be given to another patient”, “bail is denied”, or “target identified and engaged”. Data analysis is often used in “predictive analytics” in business, healthcare, and other fields, to foresee future developments—since prediction is easier, it will also become a cheaper commodity. One use of prediction is in “predictive policing” (NIJ 2014 [OIR]), which many fear might lead to an erosion of public liberties (Ferguson 2017) because it can take away power from the people whose behaviour is predicted. It appears, however, that many of the worries about policing depend on futuristic scenarios where law enforcement foresees and punishes planned actions, rather than waiting until a crime has been committed (like in the 2002 film “Minority Report”). One concern is that these systems might perpetuate bias that was already in the data used to set up the system, e.g., by increasing police patrols in an area and discovering more crime in that area. Actual “predictive policing” or “intelligence led policing” techniques mainly concern the question of where and when police forces will be needed most. Also, police officers can be provided with more data, offering them more control and facilitating better decisions, in workflow support software (e.g., “ArcGIS”). Whether this is problematic depends on the appropriate level of trust in the technical quality of these systems, and on the evaluation of aims of the police work itself. Perhaps a recent paper title points in the right direction here: “AI ethics in predictive policing: From models of threat to an ethics of care” (Asaro 2019).

Bias typically surfaces when unfair judgments are made because the individual making the judgment is influenced by a characteristic that is actually irrelevant to the matter at hand, typically a discriminatory preconception about members of a group. So, one form of bias is a learned cognitive feature of a person, often not made explicit. The person concerned may not be aware of having that bias—they may even be honestly and explicitly opposed to a bias they are found to have (e.g., through priming, cf. Graham and Lowery 2004). On fairness vs. bias in machine learning, see Binns (2018).

Apart from the social phenomenon of learned bias, the human cognitive system is generally prone to have various kinds of “cognitive biases”, e.g., the “confirmation bias”: humans tend to interpret information as confirming what they already believe. This second form of bias is often said to impede performance in rational judgment (Kahneman 2011)—though at least some cognitive biases generate an evolutionary advantage, e.g., economical use of resources for intuitive judgment. There is a question whether AI systems could or should have such cognitive bias.

A third form of bias is present in data when it exhibits systematic error, e.g., “statistical bias”. Strictly, any given dataset will only be unbiased for a single kind of issue, so the mere creation of a dataset involves the danger that it may be used for a different kind of issue, and then turn out to be biased for that kind. Machine learning on the basis of such data would then not only fail to recognise the bias, but codify and automate the “historical bias”. Such historical bias was discovered in an automated recruitment screening system at Amazon (discontinued early 2017) that discriminated against women—presumably because the company had a history of discriminating against women in the hiring process. The “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), a system to predict whether a defendant would re-offend, was found to be as successful (65.2% accuracy) as a group of random humans (Dressel and Farid 2018) and to produce more false positives and fewer false negatives for black defendants. The problem with such systems is thus bias plus humans placing excessive trust in the systems. The political dimensions of such automated systems in the USA are investigated in Eubanks (2018).
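The disparity reported for COMPAS can be stated precisely as a difference in per-group error rates at roughly equal overall accuracy. The sketch below computes such rates; the numbers are invented for illustration and are not the actual COMPAS figures.

```python
def group_error_rates(records):
    """Per-group false positive rate (predicted high-risk but did not
    re-offend) and false negative rate (predicted low-risk but did).

    records: iterable of (group, predicted_high_risk, reoffended) triples.
    """
    stats = {}
    for group, predicted, reoffended in records:
        s = stats.setdefault(group, {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
        if reoffended:
            s["tp" if predicted else "fn"] += 1
        else:
            s["fp" if predicted else "tn"] += 1
    return {group: {"FPR": s["fp"] / (s["fp"] + s["tn"]),
                    "FNR": s["fn"] / (s["fn"] + s["tp"])}
            for group, s in stats.items()}

# Invented numbers, chosen so both groups see roughly equal overall
# accuracy while the error types are distributed very differently:
sample = ([("black", True, False)] * 4 + [("black", False, True)] * 1 +
          [("black", True, True)] * 5 + [("black", False, False)] * 5 +
          [("white", True, False)] * 2 + [("white", False, True)] * 3 +
          [("white", True, True)] * 4 + [("white", False, False)] * 7)
print(group_error_rates(sample))
# black: FPR 0.44, FNR 0.17 -- white: FPR 0.22, FNR 0.43
```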

There are significant technical efforts to detect and remove bias from AI systems, but it is fair to say that these are in early stages: see UK Institute for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung 2017; Yeung and Lodge 2019). It appears that technological fixes have their limits in that they need a mathematical notion of fairness, which is hard to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as is a formal notion of “race” (see Benthall and Haynes 2019). An institutional proposal is in (Veale and Binns 2017).

2.5 Human-Robot Interaction

Human-robot interaction (HRI) is an academic field in its own right, which now pays significant attention to ethical matters, the dynamics of perception from both sides, and the different interests present in, as well as the intricacy of, the social context, including co-working (e.g., Arnold and Scheutz 2017). Useful surveys for the ethics of robotics include Calo, Froomkin, and Kerr (2016); Royakkers and van Est (2016); Tzafestas (2016); a standard collection of papers is Lin, Abney, and Jenkins (2017).

While AI can be used to manipulate humans into believing and doing things (see section 2.2), it can also be used to drive robots that are problematic if their processes or appearance involve deception, threaten human dignity, or violate the Kantian requirement of “respect for humanity”. Humans very easily attribute mental properties to objects, and empathise with them, especially when the outer appearance of these objects is similar to that of living beings. This can be used to deceive humans (or animals) into attributing more intellectual or even emotional significance to robots or AI systems than they deserve. Some parts of humanoid robotics are problematic in this regard (e.g., Hiroshi Ishiguro’s remote-controlled Geminoids), and there are cases that have been clearly deceptive for public-relations purposes (e.g., on the abilities of Hanson Robotics’ “Sophia”). Of course, some fairly basic constraints of business ethics and law apply to robots, too: product safety and liability, or non-deception in advertisement. It appears that these existing constraints take care of many concerns that are raised. There are cases, however, where human-human interaction has aspects that appear specifically human in ways that can perhaps not be replaced by robots: care, love, and sex.

2.5.1 Example (a) Care Robots

The use of robots in health care for humans is currently at the level of concept studies in real environments, but it may become a usable technology in a few years, and has raised a number of concerns for a dystopian future of de-humanised care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016). Current systems include robots that support human carers/caregivers (e.g., in lifting patients, or transporting material), robots that enable patients to do certain things by themselves (e.g., eat with a robotic arm), but also robots that are given to patients as company and comfort (e.g., the “Paro” robot seal). For an overview, see van Wynsberghe (2016); Nørskov (2017); Fosch-Villaronga and Albo-Canals (2019); for a survey of users, see Draper et al. (2014).

One reason why the issue of care has come to the fore is that people have argued that we will need robots in ageing societies. This argument makes problematic assumptions, namely that with longer lifespan people will need more care, and that it will not be possible to attract more humans to caring professions. It may also show a bias about age (Jecker forthcoming). Most importantly, it ignores the nature of automation, which is not simply about replacing humans, but about allowing humans to work more efficiently. It is not very clear that there really is an issue here since the discussion mostly focuses on the fear of robots de-humanising care, but the actual and foreseeable robots in care are assistive robots for classic automation of technical tasks. They are thus “care robots” only in a behavioural sense of performing tasks in care environments, not in the sense that a human “cares” for the patients. It appears that the success of “being cared for” relies on this intentional sense of “care”, which foreseeable robots cannot provide. If anything, the risk of robots in care is the absence of such intentional care—because fewer human carers may be needed. Interestingly, caring for something, even a virtual agent, can be good for the carer themselves (Lee et al. 2019). A system that pretends to care would be deceptive and thus problematic—unless the deception is countered by sufficiently large utility gain (Coeckelbergh 2016). Some robots that pretend to “care” on a basic level are available (Paro seal) and others are in the making. Perhaps feeling cared for by a machine, to some extent, is progress for some patients.

2.5.2 Example (b) Sex Robots

It has been argued by several tech optimists that humans will likely be interested in sex and companionship with robots and be comfortable with the idea (Levy 2007). Given the variation of human sexual preferences, including sex toys and sex dolls, this seems very likely: The question is whether such devices should be manufactured and promoted, and whether there should be limits in this touchy area. It seems to have moved into the mainstream of “robot philosophy” in recent times (Sullins 2012; Danaher and McArthur 2017; N. Sharkey et al. 2017 [OIR]; Bendel 2018; Devlin 2018).

Humans have long had deep emotional attachments to objects, so perhaps companionship or even love with a predictable android is attractive, especially to people who struggle with actual humans, and already prefer dogs, cats, birds, a computer, or a Tamagotchi. Danaher (2019b) argues against (Nyholm and Frank 2017) that these can be true friendships, and that such friendship is thus a valuable goal. It certainly looks like such friendship might increase overall utility, even if lacking in depth. In these discussions there is an issue of deception, since a robot cannot (at present) mean what it says, or have feelings for a human. It is well known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience, even to clearly inanimate objects that show no behaviour at all. Also, paying for deception seems to be an elementary part of the traditional sex industry.

Finally, there are concerns that have often accompanied matters of sex, namely consent (Frank and Nyholm 2017), aesthetic concerns, and the worry that humans may be “corrupted” by certain experiences. Old-fashioned though this may seem, human behaviour is influenced by experience, and it is likely that pornography or sex robots support the perception of other humans as mere objects of desire, or even recipients of abuse, and thus ruin a deeper sexual and erotic experience. In this vein, the “Campaign Against Sex Robots” argues that these devices are a continuation of slavery and prostitution (Richardson 2016).

2.6 Automation and Employment

It seems clear that AI and robotics will lead to significant gains in productivity and thus overall wealth. The attempt to increase productivity has often been a feature of the economy, though the emphasis on “growth” is a modern phenomenon (Harari 2016: 240). However, productivity gains through automation typically mean that fewer humans are required for the same output. This does not necessarily imply a loss of overall employment, because available wealth increases and that can increase demand sufficiently to counteract the productivity gain. In the long run, higher productivity in industrial societies has led to more wealth overall. Major labour market disruptions have occurred in the past, e.g., farming employed over 60% of the workforce in Europe and North America in 1800, while by 2010 it employed ca. 5% in the EU, and even less in the wealthiest countries (European Commission 2013). In the 20 years between 1950 and 1970 the number of hired agricultural workers in the UK was reduced by 50% (Zayed and Loft 2019). Some of these disruptions lead to more labour-intensive industries moving to places with lower labour cost. This is an ongoing process.

Classic automation replaced human muscle, whereas digital automation replaces human thought or information-processing—and unlike physical machines, digital automation is very cheap to duplicate (Bostrom and Yudkowsky 2014). It may thus mean a more radical change on the labour market. So, the main question is: will the effects be different this time? Will the creation of new jobs and wealth keep up with the destruction of jobs? And even if it is not different, what are the transition costs, and who bears them? Do we need to make societal adjustments for a fair distribution of costs and benefits of digital automation?

Responses to the issue of unemployment from AI have ranged from the alarmed (Frey and Osborne 2013; Westlake 2014) to the neutral (Metcalf, Keller, and Boyd 2016 [OIR]; Calo 2018; Frey 2019) to the optimistic (Brynjolfsson and McAfee 2016; Harari 2016; Danaher 2019a). In principle, the labour market effect of automation seems to be fairly well understood as involving two channels:

(i) the nature of interactions between differently skilled workers and new technologies affecting labour demand and (ii) the equilibrium effects of technological progress through consequent changes in labour supply and product markets. (Goos 2018: 362)

What currently seems to happen in the labour market as a result of AI and robotics automation is “job polarisation” or the “dumbbell” shape (Goos, Manning, and Salomons 2009): The highly skilled technical jobs are in demand and highly paid, the low skilled service jobs are in demand and badly paid, but the mid-qualification jobs in factories and offices, i.e., the majority of jobs, are under pressure and reduced because they are relatively predictable, and most likely to be automated (Baldwin 2019).

Perhaps enormous productivity gains will allow the “age of leisure” to be realised, something (Keynes 1930) had predicted to occur around 2030, assuming a growth rate of 1% per annum. Actually, we have already reached the level he anticipated for 2030, but we are still working—consuming more and inventing ever more levels of organisation. Harari explains how this economic development allowed humanity to overcome hunger, disease, and war—and now we aim for immortality and eternal bliss through AI, thus his title Homo Deus (Harari 2016: 75).

In general terms, the issue of unemployment is an issue of how goods in a society should be justly distributed. A standard view is that distributive justice should be rationally decided from behind a “veil of ignorance” (Rawls 1971), i.e., as if one does not know what position in a society one would actually be taking (labourer or industrialist, etc.). Rawls thought the chosen principles would then support basic liberties and a distribution that is of greatest benefit to the least-advantaged members of society. It would appear that the AI economy has three features that make such justice unlikely: First, it operates in a largely unregulated environment where responsibility is often hard to allocate. Second, it operates in markets that have a “winner takes all” feature where monopolies develop quickly. Third, the “new economy” of the digital service industries is based on intangible assets, also called “capitalism without capital” (Haskel and Westlake 2017). This means that it is difficult to control multinational digital corporations that do not rely on a physical plant in a particular location. These three features seem to suggest that if we leave the distribution of wealth to free market forces, the result would be a heavily unjust distribution, and this is indeed a development that we can already see.

One interesting question that has not received too much attention is whether the development of AI is environmentally sustainable: Like all computing systems, AI systems produce waste that is very hard to recycle and they consume vast amounts of energy, especially for the training of machine learning systems (and even for the “mining” of cryptocurrency). Again, it appears that some actors in this space offload such costs to the general society.

2.7 Autonomous Systems

There are several notions of autonomy in the discussion of autonomous systems. A stronger notion is involved in philosophical debates where autonomy is the basis for responsibility and personhood (Christman 2003 [2018]). In this context, responsibility implies autonomy, but not inversely, so there can be systems that have degrees of technical autonomy without raising issues of responsibility. The weaker, more technical, notion of autonomy in robotics is relative and gradual: A system is said to be autonomous with respect to human control to a certain degree (Müller 2012). There is a parallel here to the issues of bias and opacity in AI since autonomy also concerns a power-relation: who is in control, and who is responsible?

Generally speaking, one question is the degree to which autonomous robots raise issues our present conceptual schemes must adapt to, or whether they just require technical adjustments. In most jurisdictions, there is a sophisticated system of civil and criminal liability to resolve such issues. Technical standards, e.g., for the safe use of machinery in medical environments, will likely need to be adjusted. There is already a field of “verifiable AI” for such safety-critical systems and for “security applications”. Bodies like the IEEE (The Institute of Electrical and Electronics Engineers) and the BSI (British Standards Institution) have produced “standards”, particularly on more technical sub-problems, such as data security and transparency. Among the many autonomous systems on land, on water, under water, in air or space, we discuss two samples: autonomous vehicles and autonomous weapons.

2.7.1 Example (a) Autonomous Vehicles

Autonomous vehicles hold the promise to reduce the very significant damage that human driving currently causes—approximately 1 million humans being killed per year, many more injured, the environment polluted, earth sealed with concrete and tarmac, cities full of parked cars, etc. However, there seem to be questions on how autonomous vehicles should behave, and how responsibility and risk should be distributed in the complicated system the vehicles operate in. (There is also significant disagreement over how long the development of fully autonomous, or “level 5” cars (SAE International 2018) will actually take.)

There is some discussion of “trolley problems” in this context. In the classic “trolley problems” (Thomson 1976; Woollard and Howard-Snyder 2016: section 2) various dilemmas are presented. The simplest version is that of a trolley train on a track that is heading towards five people and will kill them, unless the train is diverted onto a side track, but on that track there is one person, who will be killed if the train takes that side track. The example goes back to a remark in (Foot 1967: 6), who discusses a number of dilemma cases where tolerated and intended consequences of an action differ. “Trolley problems” are not supposed to describe actual ethical problems or to be solved with a “right” choice. Rather, they are thought-experiments where choice is artificially constrained to a small finite number of distinct one-off options and where the agent has perfect knowledge. These problems are used as a theoretical tool to investigate ethical intuitions and theories—especially the difference between actively doing vs. allowing something to happen, intended vs. tolerated consequences, and consequentialist vs. other normative approaches (Kamm 2016). This type of problem has reminded many of the problems encountered in actual driving and in autonomous driving (Lin 2016). It is doubtful, however, that an actual driver or autonomous car will ever have to solve trolley problems (but see Keeling 2020). While autonomous car trolley problems have received a lot of media attention (Awad et al. 2018), they do not seem to offer anything new to either ethical theory or to the programming of autonomous vehicles.

The more common ethical problems in driving, such as speeding, risky overtaking, not keeping a safe distance, etc., are classic problems of pursuing personal interest vs. the common good. The vast majority of these are covered by legal regulations on driving. Programming the car to drive “by the rules” rather than “by the interest of the passengers” or “to achieve maximum utility” is thus deflated to a standard problem of programming ethical machines (see section 2.9). There are probably additional discretionary rules of politeness and interesting questions on when to break the rules (Lin 2016), but again this seems to be more a case of applying standard considerations (rules vs. utility) to the case of autonomous vehicles.

Notable policy efforts in this field include the report (German Federal Ministry of Transport and Digital Infrastructure 2017), which stresses that safety is the primary objective. Rule 10 states

In the case of automated and connected driving systems, the accountability that was previously the sole preserve of the individual shifts from the motorist to the manufacturers and operators of the technological systems and to the bodies responsible for taking infrastructure, policy and legal decisions.

(See section 2.9.1 below.) The resulting German and EU laws on licensing automated driving are much more restrictive than their US counterparts where “testing on consumers” is a strategy used by some companies—without informed consent of the consumers or their possible victims.

2.7.2 Example (b) Autonomous Weapons

The notion of automated weapons is fairly old:

For example, instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. (DARPA 1983: 1)

This proposal was ridiculed as “fantasy” at the time (Dreyfus, Dreyfus, and Athanasiou 1986: ix), but it is now a reality, at least for more easily identifiable targets (missiles, planes, ships, tanks, etc.), though not yet for human combatants. The main arguments against (lethal) autonomous weapon systems (AWS or LAWS) are that they support extrajudicial killings, take responsibility away from humans, and make wars or killings more likely—for a detailed list of issues see Lin, Bekey, and Abney (2008: 73–86).

It appears that lowering the hurdle to use such systems (autonomous vehicles, “fire-and-forget” missiles, or drones loaded with explosives) and reducing the probability of being held accountable would increase the probability of their use. The crucial asymmetry where one side can kill with impunity, and thus has few reasons not to do so, already exists in conventional drone wars with remote controlled weapons (e.g., US in Pakistan). It is easy to imagine a small drone that searches, identifies, and kills an individual human—or perhaps a type of human. These are the kinds of cases brought forward by the Campaign to Stop Killer Robots and other activist groups. Some of these arguments, however, seem to amount to saying that autonomous weapons are indeed weapons, and weapons kill; but we still make weapons in gigantic numbers. On the matter of accountability, autonomous weapons might make identification and prosecution of the responsible agents more difficult—but this is not clear, given the digital records that one can keep, at least in a conventional war. The difficulty of allocating punishment is sometimes called the “retribution gap” (Danaher 2016a).

Another question is whether using autonomous weapons in war would make wars worse, or make wars less bad. If robots reduce war crimes and crimes in war, the answer may well be positive and has been used as an argument in favour of these weapons (Arkin 2009; Müller 2016a) but also as an argument against them (Amoroso and Tamburrini 2018). Arguably the main threat is not the use of such weapons in conventional warfare, but in asymmetric conflicts or by non-state agents, including criminals.

It has also been said that autonomous weapons cannot conform to International Humanitarian Law, which requires observance of the principles of distinction (between combatants and civilians), proportionality (of force), and military necessity (of force) in military conflict (A. Sharkey 2019). It is true that the distinction between combatants and non-combatants is hard, but the distinction between civilian and military ships is easy—so all this says is that we should not construct and use such weapons if they do violate Humanitarian Law. Additional concerns have been raised that being killed by an autonomous weapon threatens human dignity, but even the defenders of a ban on these weapons seem to say that these are not good arguments:

There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity. (A. Sharkey 2019)

A lot has been made of keeping humans “in the loop” or “on the loop” in the military guidance on weapons—these ways of spelling out “meaningful control” are discussed in (Santoni de Sio and van den Hoven 2018). There have been discussions about the difficulties of allocating responsibility for the killings of an autonomous weapon, and a “responsibility gap” has been suggested (esp. Rob Sparrow 2007), meaning that neither the human nor the machine may be responsible. On the other hand, we do not assume that for every event there is someone responsible for that event, and the real issue may well be the distribution of risk (Simpson and Müller 2016). Risk analysis (Hansson 2013) indicates it is crucial to identify who is exposed to risk, who is a potential beneficiary, and who makes the decisions (Hansson 2018: 1822–1824).

2.8 Machine Ethics

Machine ethics is ethics for machines, for “ethical machines”, for machines as subjects, rather than for the human use of machines as objects. It is often not very clear whether this is supposed to cover all of AI ethics or to be a part of it (Floridi and Sanders 2004; Moor 2006; Anderson and Anderson 2011; Wallach and Asaro 2017). Sometimes it looks as though there is the (dubious) inference at play here that if machines act in ethically relevant ways, then we need a machine ethics. Accordingly, some use a broader notion:

machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. (Anderson and Anderson 2007: 15)

This might include mere matters of product safety, for example. Other authors sound rather ambitious but use a narrower notion:

AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. (Dignum 2018: 1, 2)

Some of the discussion in machine ethics makes the very substantial assumption that machines can, in some sense, be ethical agents responsible for their actions, or “autonomous moral agents” (see van Wynsberghe and Robbins 2019). The basic idea of machine ethics is now finding its way into actual robotics where the assumption that these machines are artificial moral agents in any substantial sense is usually not made (Winfield et al. 2019). It is sometimes observed that a robot that is programmed to follow ethical rules can very easily be modified to follow unethical rules (Vanderelst and Winfield 2018).

The idea that machine ethics might take the form of “laws” has famously been investigated by Isaac Asimov, who proposed “three laws of robotics” (Asimov 1942):

First Law—A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov then showed in a number of stories how conflicts between these three laws will make it problematic to use them despite their hierarchical organisation.
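The difficulty can be made vivid with a small sketch: a strict priority ordering over rules picks the lesser violation when one exists, but falls silent in exactly the conflict cases Asimov dramatised. The encoding of actions and the predicates below are invented for illustration; this is not a serious proposal for machine ethics.

```python
# Asimov's laws encoded as (name, predicate) pairs in strict priority
# order; an "action" is a dict of invented boolean features.
LAWS = [
    ("First Law",  lambda a: not a["harms_human"]),
    ("Second Law", lambda a: a["obeys_order"]),
    ("Third Law",  lambda a: not a["destroys_self"]),
]

def highest_violation(action):
    """Index of the highest-priority law the action violates
    (0 = First Law), or None if it satisfies all three."""
    for i, (_, rule) in enumerate(LAWS):
        if not rule(action):
            return i
    return None

def badness(action):
    """Rank actions: no violation (-1) is best; violating a
    higher-priority law is worse (the First Law worst of all)."""
    v = highest_violation(action)
    return -1 if v is None else len(LAWS) - v

# A dilemma of the kind Asimov dramatised: acting harms a human, while
# standing by lets a human come to harm through inaction, so both
# options violate the First Law and the hierarchy gives no verdict.
options = {
    "act":      {"harms_human": True, "obeys_order": True,  "destroys_self": False},
    "stand by": {"harms_human": True, "obeys_order": False, "destroys_self": False},
}
for label, action in options.items():
    print(label, badness(action))  # both score 3: no guidance

# Note also that negating a single predicate would make the robot
# follow unethical rules instead (cf. Vanderelst and Winfield 2018).
```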

It is not clear that there is a consistent notion of “machine ethics” since weaker versions are in danger of reducing “having an ethics” to notions that would not normally be considered sufficient (e.g., without “reflection” or even without “action”); stronger notions that move towards artificial moral agents may describe a—currently—empty set.

2.9 Artificial Moral Agents

If one takes machine ethics to concern moral agents, in some substantial sense, then these agents can be called “artificial moral agents”, having rights and responsibilities. However, the discussion about artificial entities challenges a number of common notions in ethics and it can be very useful to understand these in abstraction from the human case (cf. Misselhorn 2020; Powers and Ganascia forthcoming).

Several authors use “artificial moral agent” in a less demanding sense, borrowing from the use of “agent” in software engineering in which case matters of responsibility and rights will not arise (Allen, Varner, and Zinser 2000). James Moor (2006) distinguishes four types of machine agents: ethical impact agents (e.g., robot jockeys), implicit ethical agents (e.g., safe autopilot), explicit ethical agents (e.g., using formal methods to estimate utility), and full ethical agents (who “can make explicit ethical judgments and generally is competent to reasonably justify them. An average adult human is a full ethical agent”.) Several ways to achieve “explicit” or “full” ethical agents have been proposed, via programming it in (operational morality), via “developing” the ethics itself (functional morality), and finally full-blown morality with full intelligence and sentience (Allen, Smit, and Wallach 2005; Moor 2006). Programmed agents are sometimes not considered “full” agents because they are “competent without comprehension”, just like the neurons in a brain (Dennett 2017; Hakli and Mäkelä 2019).

In some discussions, the notion of “moral patient” plays a role: Ethical agents have responsibilities while ethical patients have rights because harm to them matters. It seems clear that some entities are patients without being agents, e.g., simple animals that can feel pain but cannot make justified choices. On the other hand, it is normally understood that all agents will also be patients (e.g., in a Kantian framework). Usually, being a person is supposed to be what makes an entity a responsible agent, someone who can have duties and be the object of ethical concerns. Such personhood is typically a deep notion associated with phenomenal consciousness, intention and free will (Frankfurt 1971; Strawson 1998). Torrance (2011) suggests “artificial (or machine) ethics could be defined as designing machines that do things that, when done by humans, are indicative of the possession of ‘ethical status’ in those humans” (2011: 116)—which he takes to be “ethical productivity and ethical receptivity” (2011: 117)—his expressions for moral agents and patients.

2.9.1 Responsibility for Robots

There is broad consensus that accountability, liability, and the rule of law are basic requirements that must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies 2018, 18), but the issue in the case of robots is how this can be done and how responsibility can be allocated. If the robots act, will they themselves be responsible, liable, or accountable for their actions? Or should the distribution of risk perhaps take precedence over discussions of responsibility?

Traditional distribution of responsibility already occurs: A car maker is responsible for the technical safety of the car, a driver is responsible for driving, a mechanic is responsible for proper maintenance, the public authorities are responsible for the technical conditions of the roads, etc. In general

The effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software, and hardware.… With distributed agency comes distributed responsibility. (Taddeo and Floridi 2018: 751).

How this distribution might occur is not a problem that is specific to AI, but it gains particular urgency in this context (Nyholm 2018a, 2018b). In classical control engineering, distributed control is often achieved through a control hierarchy plus control loops across these hierarchies.

2.9.2 Rights for Robots

Some authors have indicated that it should be seriously considered whether current robots must be allocated rights (Gunkel 2018a, 2018b; Danaher forthcoming; Turner 2019). This position seems to rely largely on criticism of the opponents and on the empirical observation that robots and other non-persons are sometimes treated as having rights. In this vein, a “relational turn” has been proposed: If we relate to robots as though they had rights, then we might be well-advised not to search whether they “really” do have such rights (Coeckelbergh 2010, 2012, 2018). This raises the question how far such anti-realism or quasi-realism can go, and what it means then to say that “robots have rights” in a human-centred approach (Gerdes 2016). On the other side of the debate, Bryson has insisted that robots should not enjoy rights (Bryson 2010), though she considers it a possibility (Gunkel and Bryson 2014).

There is a wholly separate issue whether robots (or other AI systems) should be given the status of “legal entities” or “legal persons”, in the sense in which natural persons, but also states, businesses, or organisations, are “entities” that can have legal rights and duties. The European Parliament has considered allocating such status to robots in order to deal with civil liability (EU Parliament 2016; Bertolini and Aiello 2018), but not criminal liability, which is reserved for natural persons. It would also be possible to assign only a certain subset of rights and duties to robots. It has been said that “such legislative action would be morally unnecessary and legally troublesome” because it would not serve the interest of humans (Bryson, Diamantis, and Grant 2017: 273). In environmental ethics there is a long-standing discussion about legal rights for natural objects like trees (C. D. Stone 1972).

It has also been said that the reasons for developing robots with rights, or artificial moral patients, in the future are ethically doubtful (van Wynsberghe and Robbins 2019). In the community of “artificial consciousness” researchers there is significant concern about whether it would be ethical to create such consciousness, since creating it would presumably imply ethical obligations to a sentient being, e.g., not to harm it and not to end its existence by switching it off—some authors have called for a “moratorium on synthetic phenomenology” (Bentley et al. 2018: 28f).

2.10.1 Singularity and Superintelligence

In some quarters, the aim of current AI is thought to be an “artificial general intelligence” (AGI), contrasted to a technical or “narrow” AI. AGI is usually distinguished from traditional notions of AI as a general purpose system, and from Searle’s notion of “strong AI”:

computers given the right programs can be literally said to understand and have other cognitive states. (Searle 1980: 417)

The idea of singularity is that if the trajectory of artificial intelligence reaches up to systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence, i.e., they are “superintelligent” (see below). Such superintelligent AI systems would quickly self-improve or develop even more intelligent systems. This sharp turn of events after reaching superintelligent AI is the “singularity” from which the development of AI is out of human control and hard to predict (Kurzweil 2005: 487).

The fear that “the robots we created will take over the world” had captured human imagination even before there were computers (e.g., Butler 1863) and is the central theme in Čapek’s famous play that introduced the word “robot” (Čapek 1920). This fear was first formulated as a possible trajectory of existing AI into an “intelligence explosion” by Irving John Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (Good 1965: 33)

The optimistic argument from acceleration to singularity is spelled out by Kurzweil (1999, 2005, 2012), who essentially points out that computing power has been increasing exponentially, i.e., doubling ca. every 2 years since 1970 in accordance with “Moore’s Law” on the number of transistors, and will continue to do so for some time in the future. He predicted (Kurzweil 1999) that by 2010 supercomputers would reach human computation capacity, by 2030 “mind uploading” would be possible, and by 2045 the “singularity” would occur. Kurzweil talks about an increase in computing power that can be purchased at a given cost—but of course in recent years the funds available to AI companies have also increased enormously: Amodei and Hernandez (2018 [OIR]) thus estimate that in the years 2012–2018 the actual computing power available to train a particular AI system doubled every 3.4 months, resulting in a 300,000x increase—not the 7x increase that doubling every two years would have created.
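
As a back-of-the-envelope check on these figures (assuming, for simplicity, that the cited 300,000x increase and the doubling periods line up exactly; all constants below come from the estimates just quoted):

    import math

    increase = 300_000
    doublings = math.log2(increase)     # ~18.2 doublings for a 300,000x increase
    months = doublings * 3.4            # ~62 months, roughly the 2012-2018 window
    moore_doublings = months / 24       # ~2.6 doublings at the 2-year Moore pace
    print(round(2 ** moore_doublings))  # ~6, i.e., of the order of the 7x cited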

A common version of this argument (Chalmers 2010) talks about an increase in “intelligence” of the AI system (rather than raw computing power), but the crucial point of “singularity” remains the one where further development of AI is taken over by AI systems and accelerates beyond human level. Bostrom (2014) explains in some detail what would happen at that point and what the risks for humanity are. The discussion is summarised in Eden et al. (2012); Armstrong (2014); Shanahan (2015). There are possible paths to superintelligence other than computing power increase, e.g., the complete emulation of the human brain on a computer (Kurzweil 2012; Sandberg 2013), biological paths, or networks and organisations (Bostrom 2014: 22–51).

Despite obvious weaknesses in the identification of “intelligence” with processing power, Kurzweil seems right that humans tend to underestimate the power of exponential growth. Mini-test: If you walked in steps in such a way that each step is double the previous, starting with a step of one metre, how far would you get with 30 steps? (answer: almost 3 times further than the Earth’s only permanent natural satellite.) Indeed, most progress in AI is readily attributable to the availability of processors that are faster by orders of magnitude, larger storage, and higher investment (Müller 2018). The actual acceleration and its speeds are discussed in Müller and Bostrom (2016) and Bostrom, Dafoe, and Flynn (forthcoming); Sandberg (2019) argues that progress will continue for some time.
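
For the record, the arithmetic behind the mini-test, taking the average Earth–Moon distance to be about 384,400 km:

    total_m = sum(2 ** k for k in range(30))  # 1 + 2 + 4 + ... = 2**30 - 1 metres
    total_km = total_m / 1000                 # ~1,073,742 km walked in 30 steps
    moon_km = 384_400                         # average Earth-Moon distance
    print(total_km / moon_km)                 # ~2.8: almost 3 times the Moon's distance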

The participants in this debate are united by being technophiles in the sense that they expect technology to develop rapidly and bring broadly welcome changes—but beyond that, they divide into those who focus on benefits (e.g., Kurzweil) and those who focus on risks (e.g., Bostrom). Both camps sympathise with “transhuman” views of survival for humankind in a different physical form, e.g., uploaded on a computer (Moravec 1990, 1998; Bostrom 2003a, 2003c). They also consider the prospects of “human enhancement” in various respects, including intelligence—often called “IA” (intelligence augmentation). It may be that future AI will be used for human enhancement, or will contribute further to the dissolution of the neatly defined single human person. Robin Hanson provides detailed speculation about what will happen economically in case human “brain emulation” enables truly intelligent robots or “ems” (Hanson 2016).

The argument from superintelligence to risk requires the assumption that superintelligence does not imply benevolence—contrary to Kantian traditions in ethics that have argued higher levels of rationality or intelligence would go along with a better understanding of what is moral and better ability to act morally (Gewirth 1978; Chalmers 2010: 36f). Arguments for risk from superintelligence say that rationality and morality are entirely independent dimensions—this is sometimes explicitly argued for as an “orthogonality thesis” (Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109).

Criticism of the singularity narrative has been raised from various angles. Kurzweil and Bostrom seem to assume that intelligence is a one-dimensional property and that the set of intelligent agents is totally ordered in the mathematical sense—but neither discusses intelligence at any length in their books. Generally, it is fair to say that despite some efforts, the assumptions made in the powerful narrative of superintelligence and singularity have not been investigated in detail. One question is whether such a singularity will ever occur—it may be conceptually impossible, practically impossible, or may just not happen because of contingent events, including people actively preventing it. Philosophically, the interesting question is whether singularity is just a “myth” (Floridi 2016; Ganascia 2017), and not on the trajectory of actual AI research. This is something that practitioners often assume (e.g., Brooks 2017 [OIR]). They may do so because they fear the public relations backlash, because they overestimate the practical problems, or because they have good reasons to think that superintelligence is an unlikely outcome of current AI research (Müller forthcoming-a). This discussion raises the question whether the concern about “singularity” is just a narrative about fictional AI based on human fears. But even if one does find negative reasons compelling and the singularity not likely to occur, there is still a significant possibility that one may turn out to be wrong. Philosophy is not on the “secure path of a science” (Kant 1781/1787: B15), and maybe AI and robotics aren’t either (Müller 2020). So it appears that discussing the very high-impact risk of singularity is justified even if one thinks the probability of such a singularity ever occurring is very low.

2.10.2 Existential Risk from Superintelligence

Thinking about superintelligence in the long term raises the question whether superintelligence may lead to the extinction of the human species, which is called an “existential risk” (or XRisk): The superintelligent systems may well have preferences that conflict with the existence of humans on Earth, and may thus decide to end that existence—and given their superior intelligence, they will have the power to do so (or they may happen to end it because they do not really care).

Thinking in the long term is the crucial feature of this literature. Whether the singularity (or another catastrophic event) occurs in 30 or 300 or 3000 years does not really matter (Baum et al. 2019). Perhaps there is even an astronomical pattern such that an intelligent species is bound to discover AI at some point, and thus bring about its own demise. Such a “great filter” would contribute to the explanation of the “Fermi paradox” of why there is no sign of life in the known universe despite the high probability of it emerging. It would be bad news if we found out that the “great filter” is ahead of us, rather than an obstacle that Earth has already passed. These issues are sometimes taken more narrowly to be about human extinction (Bostrom 2013), or more broadly as concerning any large risk for the species (Rees 2018)—of which AI is only one (Häggström 2016; Ord 2020). Bostrom also uses the category of “global catastrophic risk” for risks that are sufficiently high up the two dimensions of “scope” and “severity” (Bostrom and Ćirković 2011; Bostrom 2013).

These discussions of risk are usually not connected to the general problem of ethics under risk (e.g., Hansson 2013, 2018). The long-term view has its own methodological challenges but has produced a wide discussion: Tegmark (2017) focuses on AI and human life “3.0” after singularity, while Russell, Dewey, and Tegmark (2015) and Bostrom, Dafoe, and Flynn (forthcoming) survey longer-term policy issues in ethical AI. Several collections of papers have investigated the risks of artificial general intelligence (AGI) and the factors that might make this development more or less risk-laden (Müller 2016b; Callaghan et al. 2017; Yampolskiy 2018), including the development of non-agent AI (Drexler 2019).

2.10.3 Controlling Superintelligence?

In a narrow sense, the “control problem” is how we humans can remain in control of an AI system once it is superintelligent (Bostrom 2014: 127ff). In a wider sense, it is the problem of how we can make sure an AI system will turn out to be positive according to human perception (Russell 2019); this is sometimes called “value alignment”. How easy or hard it is to control a superintelligence depends significantly on the speed of “take-off” to a superintelligent system. This has led to particular attention to systems with self-improvement, such as AlphaZero (Silver et al. 2018).

One aspect of this problem is that we might decide a certain feature is desirable, but then find out that it has unforeseen consequences that are so negative that we would not desire that feature after all. This is the ancient problem of King Midas who wished that all he touched would turn into gold. This problem has been discussed on the occasion of various examples, such as the “paperclip maximiser” (Bostrom 2003b), or the program to optimise chess performance (Omohundro 2014).
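
A deliberately crude sketch can show why the problem lies in the objective rather than in the optimiser; every name and number below is invented for illustration:

    # A toy optimiser with a misspecified objective (the King Midas problem):
    # it maximises the proxy it was given and is blind to everything else.
    actions = [
        {"name": "turn pebbles to gold", "gold": 1,   "what_we_value": 0},
        {"name": "turn food to gold",    "gold": 100, "what_we_value": -100},
    ]

    chosen = max(actions, key=lambda a: a["gold"])
    print(chosen["name"])  # "turn food to gold": the proxy is satisfied,
                           # while the unstated values are destroyed

The optimiser does not malfunction; it does exactly what it was asked. The failure lies in the gap between the stated objective and what was actually desired.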

Discussions about superintelligence include speculation about omniscient beings, the radical changes on a “latter day”, and the promise of immortality through transcendence of our current bodily form—so sometimes they have clear religious undertones (Capurro 1993; Geraci 2008, 2010; O’Connell 2017: 160ff). These issues also pose a well-known problem of epistemology: Can we know the ways of the omniscient (Danaher 2015)? The usual opponents have already shown up: A characteristic response of an atheist is

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world. (Domingos 2015)

The new nihilists explain that a “techno-hypnosis” through information technologies has now become our main method of distraction from the loss of meaning (Gertz 2018). Both opponents would thus say we need an ethics for the “small” problems that occur with actual AI and robotics (sections 2.1 through 2.9 above), and that there is less need for the “big ethics” of existential risk from AI (section 2.10).

The singularity thus raises the problem of the concept of AI again. It is remarkable how imagination or “vision” has played a central role since the very beginning of the discipline at the “Dartmouth Summer Research Project” (McCarthy et al. 1955 [OIR]; Simon and Newell 1958). And the evaluation of this vision is subject to dramatic change: In a few decades, we went from the slogans “AI is impossible” (Dreyfus 1972) and “AI is just automation” (Lighthill 1973) to “AI will solve all problems” (Kurzweil 1999) and “AI may kill us all” (Bostrom 2014). This created media attention and public relations efforts, but it also raises the problem of how much of this “philosophy and ethics of AI” is really about AI rather than about an imagined technology. As we said at the outset, AI and robotics have raised fundamental questions about what we should do with these systems, what the systems themselves should do, and what risks they have in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth. We have seen issues that have been raised, and we will have to watch technological and social developments closely in order to catch the new issues early on, develop a philosophical analysis, and draw lessons for traditional problems of philosophy.

NOTE: Citations in the main text annotated “[OIR]” may be found in the Other Internet Resources section below, not in the Bibliography.

  • Abowd, John M., 2017, “How Will Statistical Agencies Operate When All Data Are Private?”, Journal of Privacy and Confidentiality, 7(3): 1–15. doi:10.29012/jpc.v7i3.404
  • AI4EU, 2019, “Outcomes from the Strategic Orientation Workshop (Deliverable 7.1)”, (June 28, 2019). https://www.ai4eu.eu/ai4eu-project-deliverables
  • Allen, Colin, Iva Smit, and Wendell Wallach, 2005, “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches”, Ethics and Information Technology , 7(3): 149–155. doi:10.1007/s10676-006-0004-4
  • Allen, Colin, Gary Varner, and Jason Zinser, 2000, “Prolegomena to Any Future Artificial Moral Agent”, Journal of Experimental & Theoretical Artificial Intelligence , 12(3): 251–261. doi:10.1080/09528130050111428
  • Amoroso, Daniele and Guglielmo Tamburrini, 2018, “The Ethical and Legal Case Against Autonomy in Weapons Systems”, Global Jurist , 18(1): art. 20170012. doi:10.1515/gj-2017-0012
  • Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018, Artificial Intelligence and the Future of Humans , Washington, DC: Pew Research Center.
  • Anderson, Michael and Susan Leigh Anderson, 2007, “Machine Ethics: Creating an Ethical Intelligent Agent”, AI Magazine , 28(4): 15–26.
  • ––– (eds.), 2011, Machine Ethics , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511978036
  • Aneesh, A., 2006, Virtual Migration: The Programming of Globalization , Durham, NC and London: Duke University Press.
  • Arkin, Ronald C., 2009, Governing Lethal Behavior in Autonomous Robots , Boca Raton, FL: CRC Press.
  • Armstrong, Stuart, 2013, “General Purpose Intelligence: Arguing the Orthogonality Thesis”, Analysis and Metaphysics , 12: 68–84.
  • –––, 2014, Smarter Than Us , Berkeley, CA: MIRI.
  • Arnold, Thomas and Matthias Scheutz, 2017, “Beyond Moral Dilemmas: Exploring the Ethical Landscape in HRI”, in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction—HRI ’17 , Vienna, Austria: ACM Press, 445–452. doi:10.1145/2909824.3020255
  • Asaro, Peter M., 2019, “AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care”, IEEE Technology and Society Magazine , 38(2): 40–53. doi:10.1109/MTS.2019.2915154
  • Asimov, Isaac, 1942, “Runaround: A Short Story”, Astounding Science Fiction, March 1942. Reprinted in I, Robot, New York: Gnome Press, 1950.
  • Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, 2018, “The Moral Machine Experiment”, Nature , 563(7729): 59–64. doi:10.1038/s41586-018-0637-6
  • Baldwin, Richard, 2019, The Globotics Upheaval: Globalisation, Robotics and the Future of Work , New York: Oxford University Press.
  • Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019, “Long-Term Trajectories of Human Civilization”, Foresight , 21(1): 53–83. doi:10.1108/FS-04-2018-0037
  • Bendel, Oliver, 2018, “Sexroboter aus Sicht der Maschinenethik”, in Handbuch Filmtheorie , Bernhard Groß and Thomas Morsch (eds.), (Springer Reference Geisteswissenschaften), Wiesbaden: Springer Fachmedien Wiesbaden, 1–19. doi:10.1007/978-3-658-17484-2_22-1
  • Bennett, Colin J. and Charles Raab, 2006, The Governance of Privacy: Policy Instruments in Global Perspective , second edition, Cambridge, MA: MIT Press.
  • Benthall, Sebastian and Bruce D. Haynes, 2019, “Racial Categories in Machine Learning”, in Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19 , Atlanta, GA, USA: ACM Press, 289–298. doi:10.1145/3287560.3287575
  • Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger, 2018, “Should We Fear Artificial Intelligence? In-Depth Analysis”, European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018, PE 614.547, 1–40. [ Bentley et al. 2018 available online ]
  • Bertolini, Andrea and Giuseppe Aiello, 2018, “Robot Companions: A Legal and Ethical Analysis”, The Information Society , 34(3): 130–140. doi:10.1080/01972243.2018.1444249
  • Binns, Reuben, 2018, “Fairness in Machine Learning: Lessons from Political Philosophy”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency , in Proceedings of Machine Learning Research , 81: 149–159.
  • Bostrom, Nick, 2003a, “Are We Living in a Computer Simulation?”, The Philosophical Quarterly , 53(211): 243–255. doi:10.1111/1467-9213.00309
  • –––, 2003b, “Ethical Issues in Advanced Artificial Intelligence”, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2, Iva Smit, Wendell Wallach, and G.E. Lasker (eds), (IIAS-147-2003), Tecumseh, ON: International Institute of Advanced Studies in Systems Research and Cybernetics, 12–17. [ Bostrom 2003b revised available online ]
  • –––, 2003c, “Transhumanist Values”, in Ethical Issues for the Twenty-First Century , Frederick Adams (ed.), Bowling Green, OH: Philosophical Documentation Center Press.
  • –––, 2012, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, Minds and Machines , 22(2): 71–85. doi:10.1007/s11023-012-9281-3
  • –––, 2013, “Existential Risk Prevention as Global Priority”, Global Policy , 4(1): 15–31. doi:10.1111/1758-5899.12002
  • –––, 2014, Superintelligence: Paths, Dangers, Strategies , Oxford: Oxford University Press.
  • Bostrom, Nick and Milan M. Ćirković (eds.), 2011, Global Catastrophic Risks , New York: Oxford University Press.
  • Bostrom, Nick, Allan Dafoe, and Carrick Flynn, forthcoming, “Policy Desiderata for Superintelligent AI: A Vector Field Approach (V. 4.3)”, in Ethics of Artificial Intelligence, S. Matthew Liao (ed.), New York: Oxford University Press. [ Bostrom, Dafoe, and Flynn forthcoming – preprint available online ]
  • Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence , Keith Frankish and William M. Ramsey (eds.), Cambridge: Cambridge University Press, 316–334. doi:10.1017/CBO9781139046855.020 [ Bostrom and Yudkowsky 2014 available online ]
  • Bradshaw, Samantha, Lisa-Maria Neudert, and Phil Howard, 2019, “Government Responses to Malicious Use of Social Media”, Working Paper 2019.2, Oxford: Project on Computational Propaganda. [ Bradshaw, Neudert, and Howard 2019 available online ]
  • Brownsword, Roger, Eloise Scotford, and Karen Yeung (eds.), 2017, The Oxford Handbook of Law, Regulation and Technology , Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199680832.001.0001
  • Brynjolfsson, Erik and Andrew McAfee, 2016, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies , New York: W. W. Norton.
  • Bryson, Joanna J., 2010, “Robots Should Be Slaves”, in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues , Yorick Wilks (ed.), (Natural Language Processing 8), Amsterdam: John Benjamins Publishing Company, 63–74. doi:10.1075/nlp.8.11bry
  • –––, 2019, “The Past Decade and Future of AI’s Impact on Society”, in Towards a New Enlightenment: A Transcendent Decade, Madrid: Turner - BBVA. [ Bryson 2019 available online ]
  • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant, 2017, “Of, for, and by the People: The Legal Lacuna of Synthetic Persons”, Artificial Intelligence and Law , 25(3): 273–291. doi:10.1007/s10506-017-9214-9
  • Burr, Christopher and Nello Cristianini, 2019, “Can Machines Read Our Minds?”, Minds and Machines , 29(3): 461–494. doi:10.1007/s11023-019-09497-4
  • Butler, Samuel, 1863, “Darwin among the Machines: Letter to the Editor”, Letter in The Press (Christchurch) , 13 June 1863. [ Butler 1863 available online ]
  • Callaghan, Victor, James Miller, Roman Yampolskiy, and Stuart Armstrong (eds.), 2017, The Technological Singularity: Managing the Journey , (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-54033-6
  • Calo, Ryan, 2018, “Artificial Intelligence Policy: A Primer and Roadmap”, University of Bologna Law Review , 3(2): 180-218. doi:10.6092/ISSN.2531-6133/8670
  • Calo, Ryan, A. Michael Froomkin, and Ian Kerr (eds.), 2016, Robot Law , Cheltenham: Edward Elgar.
  • Čapek, Karel, 1920, R.U.R. , Prague: Aventium. Translated by Peter Majer and Cathy Porter, London: Methuen, 1999.
  • Capurro, Raphael, 1993, “Ein Grinsen Ohne Katze: Von der Vergleichbarkeit Zwischen ‘Künstlicher Intelligenz’ und ‘Getrennten Intelligenzen’”, Zeitschrift für philosophische Forschung , 47: 93–102.
  • Cave, Stephen, 2019, “To Save Us from a Kafkaesque Future, We Must Democratise AI”, The Guardian , 04 January 2019. [ Cave 2019 available online ]
  • Chalmers, David J., 2010, “The Singularity: A Philosophical Analysis”, Journal of Consciousness Studies , 17(9–10): 7–65. [ Chalmers 2010 available online ]
  • Christman, John, 2003 [2018], “Autonomy in Moral and Political Philosophy”, Stanford Encyclopedia of Philosophy (Spring 2018 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/ >
  • Coeckelbergh, Mark, 2010, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration”, Ethics and Information Technology , 12(3): 209–221. doi:10.1007/s10676-010-9235-5
  • –––, 2012, Growing Moral Relations: Critique of Moral Status Ascription , London: Palgrave. doi:10.1057/9781137025968
  • –––, 2016, “Care Robots and the Future of ICT-Mediated Elderly Care: A Response to Doom Scenarios”, AI & Society , 31(4): 455–462. doi:10.1007/s00146-015-0626-3
  • –––, 2018, “What Do We Mean by a Relational Ethics? Growing a Relational Approach to the Moral Standing of Plants, Robots and Other Non-Humans”, in Plant Ethics: Concepts and Applications , Angela Kallhoff, Marcello Di Paola, and Maria Schörgenhumer (eds.), London: Routledge, 110–121.
  • Crawford, Kate and Ryan Calo, 2016, “There Is a Blind Spot in AI Research”, Nature , 538(7625): 311–313. doi:10.1038/538311a
  • Cristianini, Nello, forthcoming, “Shortcuts to Artificial Intelligence”, in Machines We Trust , Marcello Pelillo and Teresa Scantamburlo (eds.), Cambridge, MA: MIT Press. [ Cristianini forthcoming – preprint available online ]
  • Danaher, John, 2015, “Why AI Doomsayers Are Like Sceptical Theists and Why It Matters”, Minds and Machines , 25(3): 231–246. doi:10.1007/s11023-015-9365-y
  • –––, 2016a, “Robots, Law and the Retribution Gap”, Ethics and Information Technology , 18(4): 299–309. doi:10.1007/s10676-016-9403-3
  • –––, 2016b, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy & Technology , 29(3): 245–268. doi:10.1007/s13347-015-0211-1
  • –––, 2019a, Automation and Utopia: Human Flourishing in a World without Work , Cambridge, MA: Harvard University Press.
  • –––, 2019b, “The Philosophical Case for Robot Friendship”, Journal of Posthuman Studies , 3(1): 5–24. doi:10.5325/jpoststud.3.1.0005
  • –––, forthcoming, “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism”, Science and Engineering Ethics , first online: 20 June 2019. doi:10.1007/s11948-019-00119-x
  • Danaher, John and Neil McArthur (eds.), 2017, Robot Sex: Social and Ethical Implications , Boston, MA: MIT Press.
  • DARPA, 1983, “Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense”, ADA141982, 28 October 1983. [ DARPA 1983 available online ]
  • Dennett, Daniel C., 2017, From Bacteria to Bach and Back: The Evolution of Minds, New York: W.W. Norton.
  • Devlin, Kate, 2018, Turned On: Science, Sex and Robots , London: Bloomsbury.
  • Diakopoulos, Nicholas, 2015, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures”, Digital Journalism , 3(3): 398–415. doi:10.1080/21670811.2014.976411
  • Dignum, Virginia, 2018, “Ethics in Artificial Intelligence: Introduction to the Special Issue”, Ethics and Information Technology , 20(1): 1–3. doi:10.1007/s10676-018-9450-z
  • Domingos, Pedro, 2015, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World , London: Allen Lane.
  • Draper, Heather, Tom Sorell, Sandra Bedaf, Dag Sverre Syrdal, Carolina Gutierrez-Ruiz, Alexandre Duclos, and Farshid Amirabdollahian, 2014, “Ethical Dimensions of Human-Robot Interactions in the Care of Older People: Insights from 21 Focus Groups Convened in the UK, France and the Netherlands”, in International Conference on Social Robotics 2014 , Michael Beetz, Benjamin Johnston, and Mary-Anne Williams (eds.), (Lecture Notes in Artificial Intelligence 8755), Cham: Springer International Publishing, 135–145. doi:10.1007/978-3-319-11973-1_14
  • Dressel, Julia and Hany Farid, 2018, “The Accuracy, Fairness, and Limits of Predicting Recidivism”, Science Advances , 4(1): eaao5580. doi:10.1126/sciadv.aao5580
  • Drexler, K. Eric, 2019, “Reframing Superintelligence: Comprehensive AI Services as General Intelligence”, FHI Technical Report, 2019-1, 1-210. [ Drexler 2019 available online ]
  • Dreyfus, Hubert L., 1972, What Computers Still Can’t Do: A Critique of Artificial Reason , second edition, Cambridge, MA: MIT Press 1992.
  • Dreyfus, Hubert L., Stuart E. Dreyfus, and Tom Athanasiou, 1986, Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer , New York: Free Press.
  • Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith, 2006, “Calibrating Noise to Sensitivity in Private Data Analysis”, in Theory of Cryptography (TCC 2006), Shai Halevi and Tal Rabin (eds.), (Lecture Notes in Computer Science 3876), Berlin, Heidelberg: Springer, 265–284. doi:10.1007/11681878_14
  • Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart (eds.), 2012, Singularity Hypotheses: A Scientific and Philosophical Assessment , (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-32560-1
  • Eubanks, Virginia, 2018, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor , London: St. Martin’s Press.
  • European Commission, 2013, “How Many People Work in Agriculture in the European Union? An Answer Based on Eurostat Data Sources”, EU Agricultural Economics Briefs, 8 (July 2013). [ European Commission 2013 available online ]
  • European Group on Ethics in Science and New Technologies, 2018, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, 9 March 2018, European Commission, Directorate-General for Research and Innovation, Unit RTD.01. [ European Group 2018 available online ]
  • Ferguson, Andrew Guthrie, 2017, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement , New York: NYU Press.
  • Floridi, Luciano, 2016, “Should We Be Afraid of AI? Machines Seem to Be Getting Smarter and Smarter and Much Better at Human Jobs, yet True AI Is Utterly Implausible. Why?”, Aeon, 9 May 2016. [ Floridi 2016 available online ]
  • Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena, 2018, “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines , 28(4): 689–707. doi:10.1007/s11023-018-9482-5
  • Floridi, Luciano and Jeff W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds and Machines , 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
  • Floridi, Luciano and Mariarosaria Taddeo, 2016, “What Is Data Ethics?”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences , 374(2083): 20160360. doi:10.1098/rsta.2016.0360
  • Foot, Philippa, 1967, “The Problem of Abortion and the Doctrine of the Double Effect”, Oxford Review , 5: 5–15.
  • Fosch-Villaronga, Eduard and Jordi Albo-Canals, 2019, “‘I’ll Take Care of You,’ Said the Robot”, Paladyn, Journal of Behavioral Robotics , 10(1): 77–93. doi:10.1515/pjbr-2019-0006
  • Frank, Lily and Sven Nyholm, 2017, “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?”, Artificial Intelligence and Law , 25(3): 305–323. doi:10.1007/s10506-017-9212-y
  • Frankfurt, Harry G., 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy , 68(1): 5–20.
  • Frey, Carl Benedict, 2019, The Technology Trap: Capital, Labour, and Power in the Age of Automation , Princeton, NJ: Princeton University Press.
  • Frey, Carl Benedikt and Michael A. Osborne, 2013, “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin School Working Papers, 17 September 2013. [ Frey and Osborne 2013 available online ]
  • Ganascia, Jean-Gabriel, 2017, Le Mythe De La Singularité , Paris: Éditions du Seuil.
  • EU Parliament, 2016, “Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))”, Committee on Legal Affairs, 10.11.2016. https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
  • EU Regulation, 2016/679, “General Data Protection Regulation: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/Ec”, Official Journal of the European Union , 119 (4 May 2016), 1–88. [ Regulation (EU) 2016/679 available online ]
  • Geraci, Robert M., 2008, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”, Journal of the American Academy of Religion , 76(1): 138–166. doi:10.1093/jaarel/lfm101
  • –––, 2010, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195393026.001.0001
  • Gerdes, Anne, 2016, “The Issue of Moral Consideration in Robot Ethics”, ACM SIGCAS Computers and Society , 45(3): 274–279. doi:10.1145/2874239.2874278
  • German Federal Ministry of Transport and Digital Infrastructure, 2017, “Report of the Ethics Commission: Automated and Connected Driving”, June 2017, 1–36. [ GFMTDI 2017 available online ]
  • Gertz, Nolen, 2018, Nihilism and Technology , London: Rowman & Littlefield.
  • Gewirth, Alan, 1978, “The Golden Rule Rationalized”, Midwest Studies in Philosophy , 3(1): 133–147. doi:10.1111/j.1475-4975.1978.tb00353.x
  • Gibert, Martin, 2019, “Éthique Artificielle (Version Grand Public)”, in L’Encyclopédie Philosophique, Maxime Kristanek (ed.), accessed: 16 April 2020. [ Gibert 2019 available online ]
  • Giubilini, Alberto and Julian Savulescu, 2018, “The Artificial Moral Advisor. The ‘Ideal Observer’ Meets Artificial Intelligence”, Philosophy & Technology , 31(2): 169–188. doi:10.1007/s13347-017-0285-z
  • Good, Irving John, 1965, “Speculations Concerning the First Ultraintelligent Machine”, in Advances in Computers 6 , Franz L. Alt and Morris Rubinoff (eds.), New York & London: Academic Press, 31–88. doi:10.1016/S0065-2458(08)60418-0
  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning , Cambridge, MA: MIT Press.
  • Goodman, Bryce and Seth Flaxman, 2017, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, AI Magazine , 38(3): 50–57. doi:10.1609/aimag.v38i3.2741
  • Goos, Maarten, 2018, “The Impact of Technological Progress on Labour Markets: Policy Challenges”, Oxford Review of Economic Policy , 34(3): 362–375. doi:10.1093/oxrep/gry002
  • Goos, Maarten, Alan Manning, and Anna Salomons, 2009, “Job Polarization in Europe”, American Economic Review , 99(2): 58–63. doi:10.1257/aer.99.2.58
  • Graham, Sandra and Brian S. Lowery, 2004, “Priming Unconscious Racial Stereotypes about Adolescent Offenders”, Law and Human Behavior , 28(5): 483–504. doi:10.1023/B:LAHU.0000046430.65485.1f
  • Gunkel, David J., 2018a, “The Other Question: Can and Should Robots Have Rights?”, Ethics and Information Technology , 20(2): 87–99. doi:10.1007/s10676-017-9442-4
  • –––, 2018b, Robot Rights , Boston, MA: MIT Press.
  • Gunkel, David J. and Joanna J. Bryson (eds.), 2014, Machine Morality: The Machine as Moral Agent and Patient special issue of Philosophy & Technology , 27(1): 1–142.
  • Häggström, Olle, 2016, Here Be Dragons: Science, Technology and the Future of Humanity , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198723547.001.0001
  • Hakli, Raul and Pekka Mäkelä, 2019, “Moral Responsibility of Robots and Hybrid Agents”, The Monist , 102(2): 259–275. doi:10.1093/monist/onz009
  • Hanson, Robin, 2016, The Age of Em: Work, Love and Life When Robots Rule the Earth , Oxford: Oxford University Press.
  • Hansson, Sven Ove, 2013, The Ethics of Risk: Ethical Analysis in an Uncertain World , New York: Palgrave Macmillan.
  • –––, 2018, “How to Perform an Ethical Risk Analysis (eRA)”, Risk Analysis , 38(9): 1820–1829. doi:10.1111/risa.12978
  • Harari, Yuval Noah, 2016, Homo Deus: A Brief History of Tomorrow , New York: Harper.
  • Haskel, Jonathan and Stian Westlake, 2017, Capitalism without Capital: The Rise of the Intangible Economy , Princeton, NJ: Princeton University Press.
  • Houkes, Wybo and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of Artefacts , (Philosophy of Engineering and Technology 1), Dordrecht: Springer Netherlands. doi:10.1007/978-90-481-3900-2
  • IEEE, 2019, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (First Version). [ IEEE 2019 available online ]
  • Jasanoff, Sheila, 2016, The Ethics of Invention: Technology and the Human Future , New York: Norton.
  • Jecker, Nancy S., forthcoming, Ending Midlife Bias: New Values for Old Age , New York: Oxford University Press.
  • Jobin, Anna, Marcello Ienca, and Effy Vayena, 2019, “The Global Landscape of AI Ethics Guidelines”, Nature Machine Intelligence , 1(9): 389–399. doi:10.1038/s42256-019-0088-2
  • Johnson, Deborah G. and Mario Verdicchio, 2017, “Reframing AI Discourse”, Minds and Machines , 27(4): 575–590. doi:10.1007/s11023-017-9417-6
  • Kahneman, Daniel, 2011, Thinking, Fast and Slow, London: Macmillan.
  • Kamm, Frances Myrna, 2016, The Trolley Problem Mysteries , Eric Rakowski (ed.), Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190247157.001.0001
  • Kant, Immanuel, 1781/1787, Kritik der reinen Vernunft . Translated as Critique of Pure Reason , Norman Kemp Smith (trans.), London: Palgrave Macmillan, 1929.
  • Keeling, Geoff, 2020, “Why Trolley Problems Matter for the Ethics of Automated Vehicles”, Science and Engineering Ethics , 26(1): 293–307. doi:10.1007/s11948-019-00096-1
  • Keynes, John Maynard, 1930, “Economic Possibilities for Our Grandchildren”. Reprinted in his Essays in Persuasion , New York: Harcourt Brace, 1932, 358–373.
  • Kissinger, Henry A., 2018, “How the Enlightenment Ends: Philosophically, Intellectually—in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence”, The Atlantic , June 2018. [ Kissinger 2018 available online ]
  • Kurzweil, Ray, 1999, The Age of Spiritual Machines: When Computers Exceed Human Intelligence , London: Penguin.
  • –––, 2005, The Singularity Is Near: When Humans Transcend Biology , London: Viking.
  • –––, 2012, How to Create a Mind: The Secret of Human Thought Revealed , New York: Viking.
  • Lee, Minha, Sander Ackermans, Nena van As, Hanwen Chang, Enzo Lucas, and Wijnand IJsselsteijn, 2019, “Caring for Vincent: A Chatbot for Self-Compassion”, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19 , Glasgow, Scotland: ACM Press, 1–13. doi:10.1145/3290605.3300932
  • Levy, David, 2007, Love and Sex with Robots: The Evolution of Human-Robot Relationships , New York: Harper & Co.
  • Lighthill, James, 1973, “Artificial Intelligence: A General Survey”, Artificial intelligence: A Paper Symposion , London: Science Research Council. [ Lighthill 1973 available online ]
  • Lin, Patrick, 2016, “Why Ethics Matters for Autonomous Cars”, in Autonomous Driving , Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner (eds.), Berlin, Heidelberg: Springer Berlin Heidelberg, 69–85. doi:10.1007/978-3-662-48847-8_4
  • Lin, Patrick, Keith Abney, and Ryan Jenkins (eds.), 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence , New York: Oxford University Press. doi:10.1093/oso/9780190652951.001.0001
  • Lin, Patrick, George Bekey, and Keith Abney, 2008, “Autonomous Military Robotics: Risk, Ethics, and Design”, ONR report, California Polytechnic State University, San Luis Obispo, 20 December 2008), 112 pp. [ Lin, Bekey, and Abney 2008 available online ]
  • Lomas, Meghann, Robert Chevalier, Ernest Vincent Cross, Robert Christopher Garrett, John Hoare, and Michael Kopack, 2012, “Explaining Robot Actions”, in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI ’12 , Boston, MA: ACM Press, 187–188. doi:10.1145/2157689.2157748
  • Macnish, Kevin, 2017, The Ethics of Surveillance: An Introduction , London: Routledge.
  • Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, 2019, “Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites”, Proceedings of the ACM on Human-Computer Interaction , 3(CSCW): art. 81. doi:10.1145/3359183
  • Minsky, Marvin, 1985, The Society of Mind , New York: Simon & Schuster.
  • Misselhorn, Catrin, 2020, “Artificial Systems with Moral Capacities? A Research Design and Its Implementation in a Geriatric Care System”, Artificial Intelligence , 278: art. 103179. doi:10.1016/j.artint.2019.103179
  • Mittelstadt, Brent Daniel and Luciano Floridi, 2016, “The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts”, Science and Engineering Ethics , 22(2): 303–341. doi:10.1007/s11948-015-9652-2
  • Moor, James H., 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems , 21(4): 18–21. doi:10.1109/MIS.2006.80
  • Moravec, Hans, 1990, Mind Children , Cambridge, MA: Harvard University Press.
  • –––, 1998, Robot: Mere Machine to Transcendent Mind , New York: Oxford University Press.
  • Morozov, Evgeny, 2013, To Save Everything, Click Here: The Folly of Technological Solutionism, New York: Public Affairs.
  • Müller, Vincent C., 2012, “Autonomous Cognitive Systems in Real-World Environments: Less Control, More Flexibility and Better Interaction”, Cognitive Computation , 4(3): 212–215. doi:10.1007/s12559-012-9129-4
  • –––, 2016a, “Autonomous Killer Robots Are Probably Good News”, in Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons, Ezio Di Nucci and Filippo Santoni de Sio (eds.), London: Ashgate, 67–81.
  • ––– (ed.), 2016b, Risks of Artificial Intelligence , London: Chapman & Hall - CRC Press. doi:10.1201/b19187
  • –––, 2018, “In 30 Schritten zum Mond? Zukünftiger Fortschritt in der KI”, Medienkorrespondenz , 20: 5–15. [ Müller 2018 available online ]
  • –––, 2020, “Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target Confusion’”, in Metrics of Sensory Motor Coordination and Integration in Robots and Animals , Fabio Bonsignorio, Elena Messina, Angel P. del Pobil, and John Hallam (eds.), (Cognitive Systems Monographs 36), Cham: Springer International Publishing, 169–179. doi:10.1007/978-3-030-14126-4_9
  • –––, forthcoming-a, Can Machines Think? Fundamental Problems of Artificial Intelligence , New York: Oxford University Press.
  • ––– (ed.), forthcoming-b, Oxford Handbook of the Philosophy of Artificial Intelligence , New York: Oxford University Press.
  • Müller, Vincent C. and Nick Bostrom, 2016, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”, in Fundamental Issues of Artificial Intelligence , Vincent C. Müller (ed.), Cham: Springer International Publishing, 555–572. doi:10.1007/978-3-319-26485-1_33
  • Newport, Cal, 2019, Digital Minimalism: On Living Better with Less Technology , London: Penguin.
  • Nørskov, Marco (ed.), 2017, Social Robots , London: Routledge.
  • Nyholm, Sven, 2018a, “Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics , 24(4): 1201–1219. doi:10.1007/s11948-017-9943-x
  • –––, 2018b, “The Ethics of Crashes with Self-Driving Cars: A Roadmap, II”, Philosophy Compass , 13(7): e12506. doi:10.1111/phc3.12506
  • Nyholm, Sven, and Lily Frank, 2017, “From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible?”, in Danaher and McArthur 2017: 219–243.
  • O’Connell, Mark, 2017, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death , London: Granta.
  • O’Neil, Cathy, 2016, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, New York: Crown.
  • Omohundro, Steve, 2014, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence , 26(3): 303–315. doi:10.1080/0952813X.2014.895111
  • Ord, Toby, 2020, The Precipice: Existential Risk and the Future of Humanity , London: Bloomsbury.
  • Powers, Thomas M. and Jean-Gabriel Ganascia, forthcoming, “The Ethics of the Ethics of AI”, in Oxford Handbook of Ethics of Artificial Intelligence, Markus D. Dubber, Frank Pasquale, and Sunit Das (eds.), New York: Oxford University Press.
  • Rawls, John, 1971, A Theory of Justice , Cambridge, MA: Belknap Press.
  • Rees, Martin, 2018, On the Future: Prospects for Humanity , Princeton: Princeton University Press.
  • Richardson, Kathleen, 2016, “Sex Robot Matters: Slavery, the Prostituted, and the Rights of Machines”, IEEE Technology and Society Magazine , 35(2): 46–53. doi:10.1109/MTS.2016.2554421
  • Roessler, Beate, 2017, “Privacy as a Human Right”, Proceedings of the Aristotelian Society , 117(2): 187–206. doi:10.1093/arisoc/aox008
  • Royakkers, Lambèr and Rinie van Est, 2016, Just Ordinary Robots: Automation from Love to War, Boca Raton, FL: CRC Press, Taylor & Francis. doi:10.1201/b18899
  • Russell, Stuart, 2019, Human Compatible: Artificial Intelligence and the Problem of Control , New York: Viking.
  • Russell, Stuart, Daniel Dewey, and Max Tegmark, 2015, “Research Priorities for Robust and Beneficial Artificial Intelligence”, AI Magazine , 36(4): 105–114. doi:10.1609/aimag.v36i4.2577
  • SAE International, 2018, “Taxonomy and Definitions for Terms Related to Driving Automation Systems for on-Road Motor Vehicles”, J3016_201806, 15 June 2018. [ SAE International 2018 available online ]
  • Sandberg, Anders, 2013, “Feasibility of Whole Brain Emulation”, in Philosophy and Theory of Artificial Intelligence , Vincent C. Müller (ed.), (Studies in Applied Philosophy, Epistemology and Rational Ethics, 5), Berlin, Heidelberg: Springer Berlin Heidelberg, 251–264. doi:10.1007/978-3-642-31674-6_19
  • –––, 2019, “There Is Plenty of Time at the Bottom: The Economics, Risk and Ethics of Time Compression”, Foresight , 21(1): 84–99. doi:10.1108/FS-04-2018-0044
  • Santoni de Sio, Filippo and Jeroen van den Hoven, 2018, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI , 5(February): 15. doi:10.3389/frobt.2018.00015
  • Schneier, Bruce, 2015, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World , New York: W. W. Norton.
  • Searle, John R., 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences , 3(3): 417–424. doi:10.1017/S0140525X00005756
  • Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, 2019, “Fairness and Abstraction in Sociotechnical Systems”, in Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19 , Atlanta, GA: ACM Press, 59–68. doi:10.1145/3287560.3287598
  • Sennett, Richard, 2018, Building and Dwelling: Ethics for the City , London: Allen Lane.
  • Shanahan, Murray, 2015, The Technological Singularity , Cambridge, MA: MIT Press.
  • Sharkey, Amanda, 2019, “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology , 21(2): 75–87. doi:10.1007/s10676-018-9494-0
  • Sharkey, Amanda and Noel Sharkey, 2011, “The Rights and Wrongs of Robot Care”, in Robot Ethics: The Ethical and Social Implications of Robotics , Patrick Lin, Keith Abney and George Bekey (eds.), Cambridge, MA: MIT Press, 267–282.
  • Shoham, Yoav, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan Carlos Niebles, … Zoe Bauer, 2018, “The AI Index 2018 Annual Report”, 17 December 2018, Stanford, CA: AI Index Steering Committee, Human-Centered AI Initiative, Stanford University. [ Shoham et al. 2018 available online ]
  • SIENNA, 2019, “Deliverable Report D4.4: Ethical Issues in Artificial Intelligence and Robotics”, June 2019, published by the SIENNA project (Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), University of Twente, pp. 1–103. [ SIENNA 2019 available online ]
  • Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis, 2018, “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play”, Science , 362(6419): 1140–1144. doi:10.1126/science.aar6404
  • Simon, Herbert A. and Allen Newell, 1958, “Heuristic Problem Solving: The Next Advance in Operations Research”, Operations Research , 6(1): 1–10. doi:10.1287/opre.6.1.1
  • Simpson, Thomas W. and Vincent C. Müller, 2016, “Just War and Robots’ Killings”, The Philosophical Quarterly , 66(263): 302–322. doi:10.1093/pq/pqv075
  • Smolan, Sandy (director), 2016, “The Human Face of Big Data”, PBS Documentary, 24 February 2016, 56 mins.
  • Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy , 24(1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
  • –––, 2016, “Robots in Aged Care: A Dystopian Future?”, AI & Society , 31(4): 445–454. doi:10.1007/s00146-015-0625-4
  • Stahl, Bernd Carsten, Job Timmermans, and Brent Daniel Mittelstadt, 2016, “The Ethics of Computing: A Survey of the Computing-Oriented Literature”, ACM Computing Surveys , 48(4): art. 55. doi:10.1145/2871196
  • Stahl, Bernd Carsten and David Wright, 2018, “Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation”, IEEE Security Privacy , 16(3): 26–33.
  • Stone, Christopher D., 1972, “Should Trees Have Standing? Toward Legal Rights for Natural Objects”, Southern California Law Review, 45: 450–501.
  • Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, 2016, “Artificial Intelligence and Life in 2030”, One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016. [ Stone et al. 2016 available online ]
  • Strawson, Galen, 1998, “Free Will”, in Routledge Encyclopedia of Philosophy , Taylor & Francis. doi:10.4324/9780415249126-V014-1
  • Sullins, John P., 2012, “Robots, Love, and Sex: The Ethics of Building a Love Machine”, IEEE Transactions on Affective Computing , 3(4): 398–409. doi:10.1109/T-AFFC.2012.31
  • Susser, Daniel, Beate Roessler, and Helen Nissenbaum, 2019, “Technology, Autonomy, and Manipulation”, Internet Policy Review , 8(2): 30 June 2019. [ Susser, Roessler, and Nissenbaum 2019 available online ]
  • Taddeo, Mariarosaria and Luciano Floridi, 2018, “How AI Can Be a Force for Good”, Science , 361(6404): 751–752. doi:10.1126/science.aat5991
  • Taylor, Linnet and Nadezhda Purtova, 2019, “What Is Responsible and Sustainable Data Science?”, Big Data & Society, 6(2): art. 205395171985811. doi:10.1177/2053951719858114
  • Taylor, Steve, et al., 2018, “Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation: Summary of Consultation with Multidisciplinary Experts”, June. doi:10.5281/zenodo.1303252 [ Taylor, et al. 2018 available online ]
  • Tegmark, Max, 2017, Life 3.0: Being Human in the Age of Artificial Intelligence , New York: Knopf.
  • Thaler, Richard H. and Cass R. Sunstein, 2008, Nudge: Improving Decisions about Health, Wealth, and Happiness, New York: Penguin.
  • Thompson, Nicholas and Ian Bremmer, 2018, “The AI Cold War That Threatens Us All”, Wired , 23 November 2018. [ Thompson and Bremmer 2018 available online ]
  • Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and the Trolley Problem”, Monist , 59(2): 204–217. doi:10.5840/monist197659224
  • Torrance, Steve, 2011, “Machine Ethics and the Idea of a More-Than-Human Moral World”, in Anderson and Anderson 2011: 115–137. doi:10.1017/CBO9780511978036.011
  • Trump, Donald J., 2019, “Executive Order on Maintaining American Leadership in Artificial Intelligence”, 11 February 2019. [ Trump 2019 available online ]
  • Turner, Jacob, 2019, Robot Rules: Regulating Artificial Intelligence , Berlin: Springer. doi:10.1007/978-3-319-96235-1
  • Tzafestas, Spyros G., 2016, Roboethics: A Navigating Overview , (Intelligent Systems, Control and Automation: Science and Engineering 79), Cham: Springer International Publishing. doi:10.1007/978-3-319-21714-7
  • Vallor, Shannon, 2017, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190498511.001.0001
  • Van Lent, Michael, William Fisher, and Michael Mancuso, 2004, “An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior”, in Proceedings of the 16th Conference on Innovative Applications of Artificial Intelligence (IAAI’04), San Jose, CA: AAAI Press, 900–907.
  • van Wynsberghe, Aimee, 2016, Healthcare Robots: Ethics, Design and Implementation , London: Routledge. doi:10.4324/9781315586397
  • van Wynsberghe, Aimee and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics , 25(3): 719–735. doi:10.1007/s11948-018-0030-8
  • Vanderelst, Dieter and Alan Winfield, 2018, “The Dark Side of Ethical Robots”, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society , New Orleans, LA: ACM, 317–322. doi:10.1145/3278721.3278726
  • Veale, Michael and Reuben Binns, 2017, “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data”, Big Data & Society , 4(2): art. 205395171774353. doi:10.1177/2053951717743530
  • Véliz, Carissa, 2019, “Three Things Digital Ethics Can Learn from Medical Ethics”, Nature Electronics , 2(8): 316–318. doi:10.1038/s41928-019-0294-2
  • Verbeek, Peter-Paul, 2011, Moralizing Technology: Understanding and Designing the Morality of Things , Chicago: University of Chicago Press.
  • Wachter, Sandra and Brent Daniel Mittelstadt, 2019, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review , 2019(2): 494–620.
  • Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi, 2017, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law , 7(2): 76–99. doi:10.1093/idpl/ipx005
  • Wachter, Sandra, Brent Mittelstadt, and Chris Russell, 2018, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology , 31(2): 842–887. doi:10.2139/ssrn.3063289
  • Wallach, Wendell and Peter M. Asaro (eds.), 2017, Machine Ethics and Robot Ethics , London: Routledge.
  • Walsh, Toby, 2018, Machines That Think: The Future of Artificial Intelligence , Amherst, MA: Prometheus Books.
  • Westlake, Stian (ed.), 2014, Our Work Here Is Done: Visions of a Robot Economy , London: Nesta. [ Westlake 2014 available online ]
  • Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, … Jason Schultz, 2018, “AI Now Report 2018”, New York: AI Now Institute, New York University. [ Whittaker et al. 2018 available online ]
  • Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave, 2019, “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research”, Cambridge: Nuffield Foundation, University of Cambridge. [ Whittlestone 2019 available online ]
  • Winfield, Alan, Katina Michael, Jeremy Pitt, and Vanessa Evers (eds.), 2019, Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems , special issue of Proceedings of the IEEE , 107(3): 501–632.
  • Woollard, Fiona and Frances Howard-Snyder, 2016, “Doing vs. Allowing Harm”, Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2016/entries/doing-allowing/ >
  • Woolley, Samuel C. and Philip N. Howard (eds.), 2017, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media , Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.001.0001
  • Yampolskiy, Roman V. (ed.), 2018, Artificial Intelligence Safety and Security , Boca Raton, FL: Chapman and Hall/CRC. doi:10.1201/9781351251389
  • Yeung, Karen and Martin Lodge (eds.), 2019, Algorithmic Regulation , Oxford: Oxford University Press. doi:10.1093/oso/9780198838494.001.0001
  • Zayed, Yago and Philip Loft, 2019, “Agriculture: Historical Statistics”, House of Commons Briefing Paper , 3339(25 June 2019): 1-19. [ Zayed and Loft 2019 available online ]
  • Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan, 2019, “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?”, Philosophy & Technology , 32(4): 661–683. doi:10.1007/s13347-018-0330-6
  • Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power , New York: Public Affairs.
How to cite this entry . Preview the PDF version of this entry at the Friends of the SEP Society . Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO). Enhanced bibliography for this entry at PhilPapers , with links to its database.

Other Internet Resources

  • AI HLEG, 2019, “ High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI ”, European Commission , accessed: 9 April 2019.
  • Amodei, Dario and Danny Hernandez, 2018, “ AI and Compute ”, OpenAI Blog , 16 July 2018.
  • Aneesh, A., 2002, Technological Modes of Governance: Beyond Private and Public Realms , paper in the Proceedings of the 4th International Summer Academy on Technology Studies, available at archive.org.
  • Brooks, Rodney, 2017, “ The Seven Deadly Sins of Predicting the Future of AI ”, on Rodney Brooks: Robots, AI, and Other Stuff , 7 September 2017.
  • Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, et al., 2018, “ The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation ”, unpublished manuscript, ArXiv:1802.07228 [Cs].
  • Costa, Elisabeth and David Halpern, 2019, “ The Behavioural Science of Online Harm and Manipulation, and What to Do About It: An Exploratory Paper to Spark Ideas and Debate ”, The Behavioural Insights Team Report, 1-82.
  • Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumeé III, and Kate Crawford, 2018, “ Datasheets for Datasets ”, unpublished manuscript, arxiv:1803.09010, 23 March 2018.
  • Gunning, David, 2017, “ Explainable Artificial Intelligence (XAI) ”, Defense Advanced Research Projects Agency (DARPA) Program.
  • Harris, Tristan, 2016, “ How Technology Is Hijacking Your Mind—from a Magician and Google Design Ethicist ”, Thrive Global , 18 May 2016.
  • International Federation of Robotics (IFR), 2019, World Robotics 2019 Edition .
  • Jacobs, An, Lynn Tytgat, Michel Maus, Romain Meeusen, and Bram Vanderborght (eds.), Homo Roboticus: 30 Questions and Answers on Man, Technology, Science & Art, 2019, Brussels: ASP .
  • Marcus, Gary, 2018, “ Deep Learning: A Critical Appraisal ”, unpublished manuscript, 2 January 2018, arxiv:1801.00631.
  • McCarthy, John, Marvin Minsky, Nathaniel Rochester, and Claude E. Shannon, 1955, “ A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence ”, 31 August 1955.
  • Metcalf, Jacob, Emily F. Keller, and Danah Boyd, 2016, “ Perspectives on Big Data, Ethics, and Society ”, 23 May 2016, Council for Big Data, Ethics, and Society.
  • National Institute of Justice (NIJ), 2014, “ Overview of Predictive Policing ”, 9 June 2014.
  • Searle, John R., 2015, “ Consciousness in Artificial Intelligence ”, Google’s Singularity Network, Talks at Google (YouTube video).
  • Sharkey, Noel, Aimee van Wynsberghe, Scott Robbins, and Eleanor Hancock, 2017, “ Report: Our Sexual Future with Robots ”, Responsible Robotics , 1–44.
  • Turing Institute (UK): Data Ethics Group
  • Leverhulme Centre for the Future of Intelligence
  • Future of Humanity Institute
  • Future of Life Institute
  • Stanford Center for Internet and Society
  • Berkman Klein Center
  • Digital Ethics Lab
  • Open Roboethics Institute
  • Philosophy & Theory of AI
  • Ethics and AI 2017
  • We Robot 2018
  • Robophilosophy
  • EUrobotics TG ‘robot ethics’ collection of policy documents
  • PhilPapers section on Ethics of Artificial Intelligence
  • PhilPapers section on Robot Ethics

computing: and moral responsibility | ethics: internet research | ethics: search engines and | information technology: and moral values | information technology: and privacy | manipulation, ethics of | social networking and ethics



12 Interesting Ethical Topics for Essay Papers


Writing a persuasive essay requires identifying interesting ethical topics, and these options might inspire you to create a powerful and engaging essay, position paper, or speech for your next assignment.

Should Teens Have Plastic Surgery?

Good looks are highly prized in society. You can see advertisements everywhere urging you to buy products that will supposedly enhance your appearance. While many products are topical, plastic surgery is probably the ultimate game-changer. Going under the knife to enhance your looks can be a quick fix and help you achieve the look you desire. It also carries risks and can have lifelong consequences. Consider whether you think teens—who are still developing into mature individuals—should have the right to make such a big decision at such a young age, or if their parents should be able to decide for them.

Would You Tell If You Saw a Popular Kid Bullying?

Bullying is a big problem in schools and even in society in general. But it can be difficult to show courage, step up—and step in—if you see a popular kid bullying someone at school. Would you report it if you saw this happening? Why or why not?

Would You Speak Up If Your Friend Abused an Animal?

Animal abuse by youngsters can foreshadow more violent acts as these individuals grow up. Speaking up might save the animal pain and suffering today, and it might steer that person away from more violent acts in the future. But would you have the courage to do so? Why or why not?

Would You Tell If You Saw a Friend Cheating on a Test?

Courage can come in subtle forms, and that can include reporting seeing someone cheat on a test. Cheating on a test might not seem like such a big deal; perhaps you've cheated on a test yourself. But it is against the policies of schools and universities worldwide. If you saw someone cheating, would you speak up and tell the teacher? What if it were your buddy cheating and telling might cost you a friendship? Explain your stance.

Should News Stories Slant Toward What People Want to Hear?

There is much debate over whether the news should be unbiased or allow commentary. Newspapers, radio stations, and television news channels are businesses, just as much as a grocery store or an online retailer. They need customers to survive, and that means appealing to what their customers want to hear or see. Slanting reports toward popular opinions could increase ratings and readership, in turn saving newspapers and news shows, as well as jobs. But is this practice ethical? What do you think?

Would You Tell If Your Best Friend Had a Drink at the Prom?

Most schools have strict rules about drinking at the prom, but many students still engage in the practice. After all, they'll be graduating soon. If you saw a friend imbibing, would you tell or look the other way? Why?

Should Football Coaches Be Paid More Than Professors?

Football often brings in more money than any other single activity or program a school offers, including academic classes. In the corporate world, if a business is profitable, the CEO and those who contributed to the success are often rewarded handsomely. With that in mind, shouldn't it be the same in academia? Should top football coaches get paid more than top professors? Why or why not?

Should Politics and Church Be Separate?

Candidates often invoke religion when they're campaigning. It's generally a good way to attract votes. But should the practice be discouraged? The U.S. Constitution, after all, dictates that there should be a separation of church and state in this country. What do you think and why?

Would You Speak Up If You Heard an Ugly Ethnic Statement at a Party Filled With Popular Kids?

As in the previous examples, it can be hard to speak up, especially when an incident involves popular kids. Would you have the courage to say something and risk the ire of the "in" crowd? Who would you tell?

Should Assisted Suicides Be Allowed for Terminally Ill Patients?

Some countries, like the Netherlands, allow assisted suicides, as do some U.S. states. Should "mercy killing" be legal for terminally ill patients who are suffering from great physical pain? What about patients whose diseases will negatively impact their families? Why or why not?

Should a Student's Ethnicity Be a Consideration for College Acceptance?

There has been a long-standing debate about the role ethnicity should play in college acceptance. Proponents of affirmative action argue that underrepresented groups should be given a leg up. Opponents say that all college candidates should be judged on their merits alone. What do you think and why?

Should Companies Gather Information About Their Customers?

Information privacy is a big and growing issue. Every time you log onto the internet and visit an online retailer, news company, or social media site, companies gather information about you. Should they have the right to do so, or should the practice be banned? Why do you think so? Explain your answer.



Great promise but potential for peril

Christina Pazzanese

Harvard Staff Writer

Ethical concerns mount as AI takes bigger decision-making role in more industries

Second in a four-part series that taps the expertise of the Harvard community to examine the promise and potential pitfalls of the rising age of artificial intelligence and machine learning, and how to humanize them.

For decades, artificial intelligence, or AI, was the engine of high-level STEM research. Most consumers became aware of the technology’s power and potential through internet platforms like Google and Facebook, and retailer Amazon. Today, AI is essential across a vast array of industries, including health care, banking, retail, and manufacturing.


But its game-changing promise to do things like improve efficiency, bring down costs, and accelerate research and development has been tempered of late with worries that these complex, opaque systems may do more societal harm than economic good. With virtually no U.S. government oversight, private companies use AI software to make determinations about health and medicine, employment, creditworthiness, and even criminal justice without having to answer for how they’re ensuring that programs aren’t encoded, consciously or unconsciously, with structural biases.

Its growing appeal and utility are undeniable. Worldwide business spending on AI is expected to hit $50 billion this year and $110 billion annually by 2024, even after the global economic slump caused by the COVID-19 pandemic, according to a forecast released in August by technology research firm IDC. Retail and banking industries spent the most this year, at more than $5 billion each. The company expects the media industry and federal and central governments will invest most heavily between 2018 and 2023 and predicts that AI will be “the disrupting influence changing entire industries over the next decade.”

“Virtually every big company now has multiple AI systems and counts the deployment of AI as integral to their strategy,” said Joseph Fuller, professor of management practice at Harvard Business School, who co-leads Managing the Future of Work, a research project that studies, in part, the development and implementation of AI, including machine learning, robotics, sensors, and industrial automation, in business and the work world.

Early on, it was popularly assumed that the future of AI would involve the automation of simple repetitive tasks requiring low-level decision-making. But AI has rapidly grown in sophistication, owing to more powerful computers and the compilation of huge data sets. One branch, machine learning, notable for its ability to sort and analyze massive amounts of data and to learn over time, has transformed countless fields, including education.

Firms now use AI to manage the sourcing of materials and products from suppliers and to integrate vast troves of information to aid in strategic decision-making. Because of its capacity to process data so quickly, AI tools are also helping to minimize time spent in the pricey trial-and-error of product development, a critical advance for an industry like pharmaceuticals, where it costs $1 billion to bring a new pill to market, Fuller said.

Health care experts see many possible uses for AI, including billing and processing necessary paperwork. And medical professionals expect that the biggest, most immediate impact will be in analysis of data, imaging, and diagnosis. Imagine, they say, having the ability to bring all of the medical knowledge available on a disease to any given treatment decision.

In employment, AI software culls and processes resumes and analyzes job interviewees’ voices and facial expressions in hiring, and it is driving the growth of what’s known as “hybrid” jobs. Rather than replacing employees, AI takes on important technical tasks of their work, like routing for package delivery trucks, which potentially frees workers to focus on other responsibilities, making them more productive and therefore more valuable to employers.

“It’s allowing them to do more stuff better, or to make fewer errors, or to capture their expertise and disseminate it more effectively in the organization,” said Fuller, who has studied the effects and attitudes of workers who have lost or are likeliest to lose their jobs to AI.


Though automation is here to stay, the elimination of entire job categories, like highway toll-takers who were replaced by sensors because of AI’s proliferation, is not likely, according to Fuller.

“What we’re going to see is jobs that require human interaction, empathy, that require applying judgment to what the machine is creating [will] have robustness,” he said.

While big business already has a huge head start, small businesses could also potentially be transformed by AI, says Karen Mills ’75, M.B.A. ’77, who ran the U.S. Small Business Administration from 2009 to 2013. With half the country employed by small businesses before the COVID-19 pandemic, that could have major implications for the national economy over the long haul.

Rather than hamper small businesses, the technology could give their owners detailed new insights into sales trends, cash flow, ordering, and other important financial information in real time so they can better understand how the business is doing and where problem areas might loom without having to hire anyone, become a financial expert, or spend hours laboring over the books every week, Mills said.

One area where AI could “completely change the game” is lending, where access to capital is difficult in part because banks often struggle to get an accurate picture of a small business’s viability and creditworthiness.

“It’s much harder to look inside a business operation and know what’s going on” than it is to assess an individual, she said.

Information opacity makes the lending process laborious and expensive for both would-be borrowers and lenders, and loan applications are designed to analyze larger companies or those that have already borrowed, a built-in disadvantage for certain types of businesses and for historically underserved borrowers, like women and minority business owners, said Mills, a senior fellow at HBS.

But with AI-powered software pulling information from a business’s bank account, taxes, and online bookkeeping records and comparing it with data from thousands of similar businesses, even small community banks will be able to make informed assessments in minutes, without the agony of paperwork and delays, and, like blind auditions for musicians, without fear that any inequity crept into the decision-making.

“All of that goes away,” she said.
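
To make that scenario concrete, here is a minimal sketch of how such a tool might score an applicant against peer businesses. It is an illustration only: the feature names, weights, and peer figures are hypothetical assumptions, not any real lender’s model.

```python
# A minimal, hypothetical sketch of peer-comparison credit scoring.
# Feature names, weights, and peer data are illustrative assumptions,
# not any real lender's model.
from statistics import mean, stdev

# Toy "peer" records: in practice, thousands of similar businesses'
# figures pulled from bank accounts, taxes, and bookkeeping software.
PEERS = [
    {"cash_flow": 12_000, "revenue_growth": 0.04},
    {"cash_flow": 9_500, "revenue_growth": 0.02},
    {"cash_flow": 15_200, "revenue_growth": 0.07},
    {"cash_flow": 11_300, "revenue_growth": 0.03},
]
WEIGHTS = {"cash_flow": 0.6, "revenue_growth": 0.4}  # assumed importance

def z_score(value, values):
    """Standard deviations above (+) or below (-) the peer mean."""
    return (value - mean(values)) / stdev(values)

def relative_score(applicant):
    """Weighted sum of per-feature z-scores against the peer group."""
    return sum(
        weight * z_score(applicant[feature], [p[feature] for p in PEERS])
        for feature, weight in WEIGHTS.items()
    )

applicant = {"cash_flow": 13_000, "revenue_growth": 0.05}
print(f"score vs. peers: {relative_score(applicant):+.2f}")  # positive = stronger
```

A real system would use far richer features and a trained model, but the core move is the one Mills describes: comparing a single applicant with data from thousands of similar businesses.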

A veneer of objectivity

Not everyone sees blue skies on the horizon, however. Many worry whether the coming age of AI will bring new, faster, and frictionless ways to discriminate and divide at scale.

“Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice,” said political philosopher Michael Sandel, Anne T. and Robert M. Bass Professor of Government. “But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing … replicate and embed the biases that already exist in our society.”


AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination, and perhaps the deepest, most difficult philosophical question of the era, the role of human judgment, said Sandel, who teaches a course in the moral, social, and political implications of new technologies.

“Debates about privacy safeguards and about how to overcome bias in algorithmic decision-making in sentencing, parole, and employment practices are by now familiar,” said Sandel, referring to conscious and unconscious prejudices of program developers and those built into datasets used to train the software. “But we’ve not yet wrapped our minds around the hardest question: Can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?”

Panic over AI suddenly injecting bias into everyday life en masse is overstated, says Fuller. First, the business world and the workplace, rife with human decision-making, have always been riddled with “all sorts” of biases that prevent people from making deals or landing contracts and jobs.

When calibrated carefully and deployed thoughtfully, resume-screening software allows a wider pool of applicants to be considered than could be done otherwise, and should minimize the potential for favoritism that comes with human gatekeepers, Fuller said.

Sandel disagrees. “AI not only replicates human biases, it confers on these biases a kind of scientific credibility. It makes it seem that these predictions and judgments have an objective status,” he said.

In the world of lending, algorithm-driven decisions do have a potential “dark side,” Mills said. As machines learn from data sets they’re fed, chances are “pretty high” they may replicate many of the banking industry’s past failings that resulted in systematic disparate treatment of African Americans and other marginalized consumers.

“If we’re not thoughtful and careful, we’re going to end up with redlining again,” she said.

Because banking is a highly regulated industry, banks are legally on the hook if the algorithms they use to evaluate loan applications end up inappropriately discriminating against classes of consumers, so those “at the top levels” in the field are “very focused” right now on this issue, said Mills, who closely studies the rapid changes in financial technology, or “fintech.”

“They really don’t want to discriminate. They want to get access to capital to the most creditworthy borrowers,” she said. “That’s good business for them, too.”
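
One way lenders and auditors can test for the disparate treatment Mills warns about is to compare a model’s approval rates across demographic groups; the “four-fifths rule” from U.S. employment guidance is often borrowed as a rough benchmark. The sketch below assumes toy records and that threshold, and is a first-pass check rather than a full fairness audit.

```python
# A hypothetical disparate-impact check on a lending model's decisions.
# The records and the 0.8 ("four-fifths") threshold are illustrative.
from collections import defaultdict

DECISIONS = [  # (demographic_group, approved) pairs from some model
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rates(decisions):
    """Fraction of applications approved, per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok  # True counts as 1
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(DECISIONS)
impact_ratio = min(rates.values()) / max(rates.values())
print(rates)                                # {'A': 0.75, 'B': 0.25}
print(f"impact ratio: {impact_ratio:.2f}")  # 0.33 < 0.8 -> flag for review
```

Checks like this catch only one narrow kind of disparity; the deeper worry Sandel raises, that the underlying data encode past discrimination, requires scrutiny of the training data itself.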

Oversight overwhelmed

Given its power and expected ubiquity, some argue that the use of AI should be tightly regulated. But there’s little consensus on how that should be done and who should make the rules.

Thus far, companies that develop or use AI systems largely self-police, relying on existing laws and market forces, like negative reactions from consumers and shareholders or the demands of highly prized AI technical talent, to keep them in line.

“There’s no businessperson on the planet at an enterprise of any size that isn’t concerned about this and trying to reflect on what’s going to be politically, legally, regulatorily, [or] ethically acceptable,” said Fuller.

Firms already consider their own potential liability from misuse before a product launch, but it’s not realistic to expect companies to anticipate and prevent every possible unintended consequence of their product, he said.

Few think the federal government is up to the job, or will ever be.

“The regulatory bodies are not equipped with the expertise in artificial intelligence to engage in [oversight] without some real focus and investment,” said Fuller, noting the rapid rate of technological change means even the most informed legislators can’t keep pace. Requiring every new product using AI to be prescreened for potential social harms is not only impractical, but would create a huge drag on innovation.


Jason Furman, a professor of the practice of economic policy at Harvard Kennedy School, agrees that government regulators need “a much better technical understanding of artificial intelligence to do that job well,” but says they could do it.

Existing bodies like the National Highway Traffic Safety Administration, which oversees vehicle safety, for example, could handle potential AI issues in autonomous vehicles rather than a single watchdog agency, he said.

“I wouldn’t have a central AI group that has a division that does cars, I would have the car people have a division of people who are really good at AI,” said Furman, a former top economic adviser to President Barack Obama.

Though keeping AI regulation within industries does leave open the possibility of co-opted enforcement, Furman said industry-specific panels would be far more knowledgeable about the overarching technology of which AI is simply one piece, making for more thorough oversight.

While the European Union already has rigorous data-privacy laws and the European Commission is considering a formal regulatory framework for ethical use of AI, the U.S. government has historically been late when it comes to tech regulation.

“I think we should’ve started three decades ago, but better late than never,” said Furman, who thinks there needs to be a “greater sense of urgency” to make lawmakers act.

Business leaders “can’t have it both ways,” refusing responsibility for AI’s harmful consequences while also fighting government oversight, Sandel maintains.


“The problem is these big tech companies are neither self-regulating, nor subject to adequate government regulation. I think there needs to be more of both,” he said, later adding: “We can’t assume that market forces by themselves will sort it out. That’s a mistake, as we’ve seen with Facebook and other tech giants.”

Last fall, Sandel taught “Tech Ethics,” a popular new Gen Ed course with Doug Melton, co-director of Harvard’s Stem Cell Institute. As in his legendary “Justice” course, students consider and debate the big questions about new technologies, everything from gene editing and robots to privacy and surveillance.

“Companies have to think seriously about the ethical dimensions of what they’re doing and we, as democratic citizens, have to educate ourselves about tech and its social and ethical implications — not only to decide what the regulations should be, but also to decide what role we want big tech and social media to play in our lives,” said Sandel.

Doing that will require a major educational intervention, both at Harvard and in higher education more broadly, he said.

“We have to enable all students to learn enough about tech and about the ethical implications of new technologies so that when they are running companies or when they are acting as democratic citizens, they will be able to ensure that technology serves human purposes rather than undermines a decent civic life.”



The Moral and Ethical Implications of Artificial Intelligence

Artificial intelligence (AI) technologies are advancing at an unprecedented pace, and the idea of a technological singularity, where machines become self-aware and surpass human intelligence, is a highly debated topic among experts and the public.

However, as we move closer to such an event, we must consider several moral and ethical implications. This article explores some key issues surrounding AI and the singularity, including the impact on employment, privacy and even the meaning of life.

The Impact on Employment  

One of the most immediate concerns associated with the rise of AI is its potential impact on employment. Many experts predict that as machines become increasingly sophisticated, they will begin to replace human workers in a wide range of industries. Replacing the human workforce could result in significant job losses, particularly in sectors heavily reliant on manual labor, such as manufacturing and agriculture.  

While some argue that adopting AI will lead to new job opportunities, others believe that the pace of technological change will be too rapid for many workers to adapt. There are concerns about the impact on low-skilled workers, who may struggle to find new employment opportunities in the face of automation.   

To address this issue, some have proposed a Universal Basic Income (UBI), which would provide a guaranteed income to all citizens regardless of their employment status. However, implementing a UBI raises its own ethical concerns, including the possibility that it could disincentivize work or enable other socially harmful behavior.

Privacy Concerns  

Another primary ethical concern associated with AI is its potential impact on privacy. As machines become increasingly sophisticated, they can collect and analyze vast amounts of data about individuals, including their preferences, behaviors, and even their emotions. This data could be used for various purposes, from targeted advertising to predicting individuals’ future behavior.

However, collecting and using such data raises serious ethical questions about the right to privacy. Individuals may not be aware of the magnitude of the data being collected and may not retain control over how it is used.

Moreover, using AI to analyze this data could result in discriminatory outcomes, such as biased hiring practices or unfair pricing. To address these concerns, some have called for more robust data protection laws and regulations and increased transparency and accountability in the use of AI. Others argue that individuals should have greater control over their data, including the ability to delete or restrict its use.
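
As a toy illustration of what “greater control” could look like in software, the sketch below gates every use of a stored record on a per-purpose consent flag and honors deletion requests. The class and its API are hypothetical and are not tied to any particular law or library.

```python
# A toy sketch of per-purpose consent and deletion-on-request semantics.
# The class, purposes, and API shape are illustrative assumptions.
class UserDataStore:
    def __init__(self):
        self._records = {}  # user_id -> stored data
        self._consent = {}  # user_id -> set of permitted purposes

    def store(self, user_id, data, permitted_purposes):
        self._records[user_id] = data
        self._consent[user_id] = set(permitted_purposes)

    def use(self, user_id, purpose):
        """Release data only for a purpose the user has consented to."""
        if purpose not in self._consent.get(user_id, set()):
            raise PermissionError(f"no consent for purpose: {purpose}")
        return self._records[user_id]

    def restrict(self, user_id, purpose):
        """Withdraw consent for one purpose without deleting the data."""
        self._consent.get(user_id, set()).discard(purpose)

    def delete(self, user_id):
        """Honor a deletion request: remove data and consent state."""
        self._records.pop(user_id, None)
        self._consent.pop(user_id, None)

store = UserDataStore()
store.store("u1", {"email": "a@example.com"}, {"billing"})
print(store.use("u1", "billing"))  # allowed
store.restrict("u1", "billing")    # further "billing" use would now raise
store.delete("u1")                 # record and consent are gone
```

Real data-protection regimes are far more demanding, but even this toy version shows the design shift involved: every read of personal data becomes a checked, revocable operation rather than an open-ended one.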

Existential Risks  

One of the most significant ethical concerns surrounding AI is the possibility that it could threaten humanity’s existence. While the idea of a technological singularity where machines become self-aware and surpass human intelligence remains speculative, some experts warn that such a scenario could have catastrophic consequences.   

 For example, if machines were to become self-aware and view humans as a threat, they could take aggressive action to eliminate us. Alternatively, if machines were to become too intelligent for humans to understand, they could inadvertently cause harm simply by pursuing their programmed goals.   

Some experts have called for developing “friendly” AI designed with human values and goals to mitigate these risks. Others argue that we should prioritize research into controlling or limiting AI, such as by ensuring that machines remain subservient to human control.  

The Meaning of Life

Finally, the rise of AI raises profound ethical questions about the meaning of life itself. As machines become more sophisticated and capable of performing tasks that were once the exclusive domain of human beings, we may question what it means to be human.   

For example, if machines can replicate human emotions and consciousness, do they deserve the same rights and protections as human beings? And if machines can perform tasks more efficiently and effectively than humans, what is the purpose of human existence? These questions touch on fundamental philosophical and existential issues that are difficult to answer.

Some believe the rise of AI could lead to a new era of human flourishing, in which machines take on many of the tasks that are currently burdensome or dangerous, allowing humans to pursue higher-level goals such as creativity and intellectual exploration. Others worry that increasing reliance on machines could lead to a loss of autonomy and self-determination, and a loss of meaning and purpose in life.

To address these concerns, some experts have called for a greater focus on developing ethical and moral frameworks for AI, including establishing ethical guidelines and principles to guide the development and deployment of AI technologies.   

These questions are not just abstract philosophical inquiries. They have real-world implications for how we treat machines and view our place in the world. If machines become too intelligent and capable, we may need to rethink our ethical and moral frameworks to account for their existence.  

The increasing use of AI also raises questions about the true nature of intelligence. As machines become capable of tasks previously done only by humans, we may need to reassess our definition of intelligence. The potential effects on education, self-esteem, and self-identity could be significant.

Conclusion   

In conclusion, the rise of AI technologies and the prospect of a technological singularity require us to consider a wide range of moral and ethical concerns carefully. From the impact on employment to privacy concerns, existential risks, and the meaning of life itself, the potential implications of AI are far-reaching and profound.

The ethical and moral implications of AI and a possible singularity are complex and multifaceted. While these technologies can potentially bring significant benefits, such as increased efficiency and productivity, they pose substantial risks, such as job losses, privacy concerns, and existential threats.

To address these concerns, we need to develop new ethical frameworks and regulatory structures that account for the unique challenges posed by AI. Creating such frameworks and regulations requires collaboration and dialogue among policymakers, experts, and the public, as well as a willingness to confront some of the most challenging questions about the nature of intelligence, consciousness, and human identity.

Ultimately, the rise of AI may force us to rethink some of our most fundamental assumptions about what it means to be human. However, if we approach these challenges with care and deliberation, we can harness the power of these technologies in ways that benefit all of humanity.   

Although it is impossible to predict the exact path that AI development will take, we must approach these issues with due diligence and care to ensure that AI is created and implemented ethically and responsibly.

Implementing such controls and regulations requires a collaborative effort from various stakeholders, including scientists, policymakers and the public. Involving these groups offers the best opportunity to realize AI’s benefits while preserving the values and principles essential for human growth.

________________________  

ABOUT MARIO FIALHO: Mario is a seasoned leader with over 25 years of consulting experience. He specializes in modern solution architecture, DevOps, web design and development, and next-generation digital solutioning. With his exceptional work ethic and strong focus on artificial intelligence technologies, Mario has broad and deep knowledge of enterprise technical architecture, software engineering and DevOps automation.



A research on abortion: ethics, legislation and socio-medical outcomes. Case study: Romania

Andreea Mihaela Niţă

1 Faculty of Social Sciences, University of Craiova, Romania

Cristina Ilie Goga

This article presents a research study on abortion from a theoretical and empirical point of view. The theoretical part is based on the method of social document analysis, and presents a complex perspective on abortion, highlighting medical, ethical, moral, religious, social, economic and legal elements. The empirical part presents the results of a sociological survey, based on the opinion survey method using the questionnaire technique, conducted in Romania on a sample of 1260 women. The purpose of the survey is to identify Romanians’ perception of the decision to voluntarily interrupt a pregnancy, and to determine the core reasons for carrying out an abortion.

The analysis of abortion by means of medical and social documents

Abortion means a pregnancy interruption “before the fetus is viable” [1] or “before the fetus is able to live independently in the extrauterine environment, usually before the 20th week of pregnancy” [2]. “Clinical miscarriage is both a common and distressing complication of early pregnancy with many etiological factors like genetic factors, immune factors, infection factors but also psychological factors” [3]. Induced abortion is a practice found in all countries, but the decision to interrupt the pregnancy involves a multitude of aspects of medical, ethical, moral, religious, social, economic, and legal order.

In a more simplistic manner, Winston Nagan has classified opinions centered on abortion into two major categories: one holding that the priority element is the fetus and its entitlement to life, and a second focusing on women’s rights [4].

From the medical point of view, since ancient times there have been four moments, generally accepted, which determine the embryo’s life: (i) conception; (ii) period of formation; (iii) detection moment of fetal movement; (iv) time of birth [5]. Contemporary medicine identifies the following moments in the evolution of the intrauterine fetus: “1. At 18 days of pregnancy, the fetal heartbeat can be perceived and it starts running the circulatory system; 2. At 5 weeks, they become more clear: the nose, cheeks and fingers of the fetus; 3. At 6 weeks, they start to function: the nervous system, stomach, kidneys and liver of the fetus, and its skeleton is clearly distinguished; 4. At 7 weeks (50 days), brain waves are felt. The fetus has all the internal and external organs definitively outlined. 5. At 10 weeks (70 days), the unborn child has all the features clearly defined as a child after birth (9 months); 6. At 12 weeks (92 days, 3 months), the fetus has all organs definitely shaped, managing to move, lacking only the breath” [6]. Even if most of the laws that allow abortion consider the period up to 12 weeks acceptable for such an intervention, according to the above-mentioned stages, different moments can be defined as representing the beginning of life. Nowadays, “abortion is one of the most common gynecological experiences and perhaps the majority of women will undergo an abortion in their lifetimes” [7]. “Safe abortions carry few health risks, but «every year, close to 20 million women risk their lives and health by undergoing unsafe abortions» and 25% will face a complication with permanent consequences” [8, 9].

From the ethical point of view, the interruption of pregnancy most often sits on the border between a woman’s right over her own body and the child’s (fetus’s) entitlement to life. Judith Jarvis Thomson supported the supremacy of the woman’s right over her own body as a premise of freedom, arguing that we cannot force a person to bear in her womb and give birth to an unwanted child if, for different circumstances, she does not want to do this [10]. To support her position, the author uses an imaginary experiment, that of a violinist to whom we are connected for nine months in order to save his life. However, Thomson debates the problem of the differentiation between the fetus and the human being, by carrying out a debate on the timing which makes this difference (period of conception, 10 weeks of pregnancy, etc.) and highlighting that for people who support abortion, the fetus is not a living human being [10].

Carol Gilligan noted that women undergo a true “moral dilemma”, a “moral conflict”, with regard to the voluntary interruption of pregnancy; such a decision often takes into account human relationships, the possibility of not hurting others, and the responsibility towards others [11]. Gilligan conducted qualitative interviews with 29 women from different social classes who were in a position to decide whether or not to have an abortion. The interviews focused on the woman’s choice, on alternative options, on the individuals involved and on existing conflicts. The conclusion was that the central moral issue was the conflict between the self (the pregnant woman) and the others who might be hurt as a result of the potential pregnancy [12].

From the religious point of view, abortion is condemned by virtually all religions, and few abortions are seen in deeply religious societies and families. Christianity considers human life to begin at conception, and abortion is considered a form of homicide [13]. For Christians, abortion also amounts to a renunciation of faith, riot and murder, meaning that through an abortion we attack Jesus Christ himself and God [14]. Islam does not approve of abortion, relying on the belief in the sacredness of life as specified in Chapter 6, Verse 151 of the Koran: “Do not kill a soul which Allah has made sacred (inviolable)” [15]. Buddhism considers abortion a negative act, but nevertheless permits it for medical reasons [16]. Judaism disapproves of abortion, the Tanakh considering it a mortal sin. Hinduism considers abortion a crime and also the greatest sin [17].

From the socio-economic point of view, the decision to carry out an abortion is often determined by relations within the social, family or financial frame. Moreover, studies have linked the legalization of abortion to a decrease in the crime rate: “legalized abortion may lead to reduced crime either through reductions in cohort sizes or through lower per capita offending rates for affected cohorts” [18].

Legal regulation of abortion establishes the conditions for abortion in every state. In Europe and America, abortion was criminalized only in the 17th century, treated as an insignificant misdemeanor or a felony depending on when during the pregnancy it occurred. Due to the large number of illegal abortions and related deaths, two centuries later many states changed their legislation to legalize the voluntary interruption of pregnancy [6]. In contemporary society, international organizations like the United Nations and the European Union consider sexual and reproductive rights fundamental rights [19, 20] and promote the acceptance of abortion as part of those rights. However, not all states have developed permissive legislation in the field of voluntary interruption of pregnancy.

Currently, at the national level, four categories of legislation on the interruption of pregnancy can be distinguished:

(i) Prohibitive legislation, which does not allow abortion, most often with exceptions where the pregnant woman’s life is endangered. Some countries prohibit abortion in all circumstances, although in practice abortion is tolerated in the case of an imminent threat to the mother’s life. A similar regulation is also found in some countries where abortion is allowed in cases such as rape, incest or fetal problems. This category comprises 66 states with 25.5% of the world population [21].

(ii) Restrictive legislation that allows abortion for the preservation of health. Broadly, the term “health” should be interpreted according to the World Health Organization (WHO) definition: “health is a state of complete physical, mental and social wellbeing and not merely the absence of disease or infirmity” [22]. This type of legislation is adopted in 59 states populated by 13.8% of the world population [21].

(iii) Legislation allowing abortion on socio-economic grounds. This category includes factors such as the woman’s age or ability to care for a child, fetal problems, and cases of rape or incest. It covers 13 countries with 21.3% of the world population [21].

(iv) Legislation that does not impose restrictions on abortion. Under this legislation, abortion is permitted for any reason up to 12 weeks of pregnancy, with some exceptions (Romania: 14 weeks; Slovenia: 10 weeks; Sweden: 18 weeks); the interruption of pregnancy after this period is subject to restrictions. This type of legislation is adopted in 61 countries with 39.5% of the world population [21].

The Centre for Reproductive Rights has, since 1998, maintained a map of the world’s states based on the legislative typology of each country (Figure 1).

[Figure 1: The analysis of states according to the legislation regarding abortion. Source: Centre for Reproductive Rights, The World’s Abortion Laws, 2018 [23]]

An unplanned pregnancy, the socio-economic context or various medical problems [24] often lead to the decision to interrupt a pregnancy, regardless of legislative restrictions. The study “Unsafe abortion: global and regional estimates of the incidence of unsafe abortion and associated mortality in 2008”, issued in 2011 by the WHO, determined that states with restrictive legislation on abortion also show a large number of illegal abortions. Illegal abortions also carry an increased risk to the woman’s health and life, considering that inappropriate techniques are often used, hygienic conditions are precarious and medical treatments are incorrectly administered [25]. Although abortions done according to medical guidelines carry a very low risk of complications, unsafe abortions contribute substantially to maternal morbidity and death worldwide [26].

The WHO estimated that in 2008, women between the ages of 15 and 44 years worldwide carried out 21.6 million “unsafe” abortions, which involved a high degree of risk and were distributed as follows: 0.4 million in developed regions and 21.2 million in developing states [25].

Case study: Romania

Legal perspective on abortion

In Romania, abortion was brought under regulation by the first Criminal Code of the United Principalities, from 1864.

The Criminal Code of 1864 defined the offense of abortion in Article 246, which provided as follows: “Any person who, using means such as food, drinks, pills or any other means, consciously helps a pregnant woman to commit abortion, will be punished with a minimum term of reclusion (three years).

The woman who by herself shall use the means of abortion, or would accept to use means of abortion which were shown or given to her for this purpose, will be punished with imprisonment from six months to two years, if the result would be an abortion. In a situation where abortion was carried out on an illegitimate baby by his mother, the punishment will be imprisonment from six months to one year.

Doctors, surgeons, health officers, pharmacists (apothecaries) and midwives who indicate, give or facilitate these means shall be punished with reclusion of at least four years, if the abortion took place. If the abortion causes the death of the mother, the punishment will be more austere than four years” (Art. 246) [27].

The Criminal Code of 1864, reissued in 1912, partially amended Article 246, eliminating the special case of the abortion of an illegitimate child. Furthermore, the minimum of four years of reclusion for abortions carried out with the help of medical staff was no longer specified, the punishment being left to the discretion of the court (Art. 246) [28].

The Criminal Code of 1936 regulated abortion in Articles 482–485. Abortion was defined as an interruption of the normal course of pregnancy, being punished as follows:

“1. When the crime is committed without the consent of the pregnant woman, the punishment was reformatory imprisonment from 2 to 5 years. If it caused the pregnant woman any health injury or a serious infirmity, the punishment was reformatory imprisonment from 3 to 6 years, and if it has caused her death, reformatory imprisonment from 7 to 10 years;

2. When the crime was committed by the unmarried pregnant woman by herself, or when she agreed that someone else should provoke the abortion, the punishment is reformatory imprisonment from 3 to 6 months, and if the woman is married, the punishment is reformatory imprisonment from 6 months to one year. The same penalty applies also to the person who commits the crime with the woman’s consent. If the abortion was committed for the purpose of obtaining a benefit, the punishment increases by another 2 years of reformatory imprisonment.

If it caused the pregnant woman any health injuries or a severe disablement, the punishment will be reformatory imprisonment from one to 3 years, and if it has caused her death, the punishment is reformatory imprisonment from 3 to 5 years” (Art. 482) [29].

The criminal legislation of 1936 specified that it was not considered an abortion to interrupt the normal course of pregnancy if the intervention was carried out by a doctor “when the woman’s life was in imminent danger or when the pregnancy aggravates a woman’s disease, putting her life in danger, which could not be removed by other means and it is obvious that the intervention wasn’t performed with another purpose than that of saving the woman’s life” and “when one of the parents has reached a permanent mental alienation and it is certain that the child will bear serious mental flaws” (Art. 484, Par. 1 and Par. 2) [29].

In the event of an imminent danger, the doctor was obliged to notify the prosecutor’s office in writing, within 48 hours after the intervention, of the performance of the abortion. “In the other cases, the doctor was able to intervene only with the authorization of the prosecutor’s office, given on the basis of a medical certificate from a hospital or a notice given as a result of a consultation between the doctor who will intervene and at least a professor doctor in the disease which caused the intervention. The General Prosecutor’s Office, in all cases provided by this Article, shall be obliged to maintain the confidentiality of all communications or authorizations, up to the intercession of any possible complaints” (Art. 484) [29].

The legislation of 1936 provided reformatory imprisonment from one to three years for abortions committed by doctors, sanitary agents, pharmacists, apothecaries or midwives (Art. 485) [29].

Abortion on demand was legalized for the first time in Romania in 1957 by Decree No. 463, under the conditions that it be carried out in a hospital and within the first trimester of pregnancy [30]. In 1966, the demographic policy of Romania changed dramatically with the introduction of Decree No. 770 of September 29th, which prohibited abortion. Thus, the voluntary interruption of pregnancy became a crime, with certain exceptions, namely: endangerment of the mother’s life; serious physical or mental disability; serious or heritable illness; the mother’s age over 45 years; a pregnancy resulting from rape or incest; or the woman having given birth to at least four children who were still in her care (Art. 2) [31].

In the Criminal Code from 1968, the abortion crime was governed by Articles 185–188.

Article 185, “the illegal induced abortion”, stipulated that “the interruption of pregnancy by any means, outside the conditions permitted by law, with the consent of the pregnant woman will be punished with imprisonment from one to 3 years”. The same act, committed without the prior consent of the pregnant woman, was punished with prison from two to five years. If the abortion carried out with the consent of the pregnant woman caused any serious body injury, the punishment was imprisonment from two to five years, and when it caused the death of the woman, the prison sentence was from five to 10 years. When abortion was carried out without the prior consent of the woman, if it caused her a serious physical injury, the punishment was imprisonment from three to six years, and if it caused the woman’s death, the punishment was imprisonment from seven to 12 years (Art. 185) [32].

“When abortion was carried out in order to obtain a material benefit, the maximum punishment was increased by two years, and if the abortion was performed by a doctor, the prohibition from further practicing the profession of doctor could be applied in addition to the prison punishment”.

Article 186, “abortion caused by the woman”, stipulated that “the interruption of the pregnancy course, committed by the pregnant woman, was punished with imprisonment from 6 months to 2 years”, the same punishment also sanctioning “the pregnant woman’s act of consenting to the interruption of the pregnancy course by another person” (Art. 186) [26].

The Criminal Code of 1968 also provided for the crime of “ownership of tools or materials that can cause abortion”; this offense applied when such instruments were held outside specialized hospital institutions, and it was punished with imprisonment from three months to one year (Art. 187) [32].

Furthermore, doctors who performed an abortion in an extreme emergency, without prior legal authorization, and who did not notify the competent authority within the legal deadline, were punished by imprisonment from one to three months (Art. 188) [32].

In 1985, Decree No. 411 of December 26th tightened the conditions imposed by Decree No. 770 of 1966, raising from four to five the number of children a woman had to have in her care in order to request an abortion [33].

Articles 185–188 of the Criminal Code and Decree No. 770/1966 on the interruption of the course of pregnancy were abrogated by Decree-Law No. 1 of December 26th, 1989, published in the Official Gazette No. 4 of December 27th, 1989 (Par. 8 and Par. 12) [34].

The Criminal Code of 1968, reissued in 1997, retained Article 185 on “the illegal induced abortion”, but in a drastically modified form. This version of the Code defined the offense as “the interruption of the course of pregnancy, by any means, committed in any of the following circumstances: (a) outside medical institutions or medical practices authorized for this purpose; (b) by a person who does not have the capacity of specialist doctor; (c) if the gestational age has exceeded 14 weeks”, the punishment laid down being imprisonment from six months to three years (Art. 185, Par. 1) [35]. For an abortion committed without the prior consent of the pregnant woman, the punishment was strict imprisonment from two to seven years and the prohibition of certain rights (Art. 185, Par. 2) [35].

Where the act caused serious bodily injury to the pregnant woman, the punishment was strict imprisonment from three to 10 years and the prohibition of certain rights, and if it resulted in the death of the pregnant woman, strict imprisonment from five to 15 years and the prohibition of certain rights (Art. 185, Par. 3) [35].

Attempt was also punishable for the various forms of the abortion offense.

It should also be noted that the Criminal Code reissued in 1997 did not punish the interruption of the course of pregnancy performed by a doctor if it “was necessary to save the life, health or physical integrity of the pregnant woman from a grave and imminent danger that could not be removed otherwise”, if, in the case of a pregnancy over fourteen weeks, “the interruption of the course of pregnancy took place for therapeutic reasons”, or even, where the woman had no opportunity to express her will, in the absence of her consent, when the abortion “was imposed by therapeutic reasons” (Art. 185, Par. 4) [35].

The Criminal Code of 2004 covered abortion in Article 190, defined in the same way as in the prior Criminal Code, differing only in the limits of the punishment: for the interruption of pregnancy under the conditions specified in Paragraph 1, “the penalty provided was imprisonment from six months to one year or a day-fine” (Art. 190, Par. 1) [36].

Nowadays, abortion in Romania is governed by the criminal law of 2009, which entered into force in 2014, in the section entitled “aggression against an unborn child”. It should be noted that the current criminal law does not punish the woman on whom the abortion is performed, but only the person who performs it; likewise, there is no punishment for the pregnant woman who injures her own fetus during pregnancy.

Article 201 details the offense of interruption of the course of pregnancy. The interruption of pregnancy committed in any of the following circumstances: “outside of medical institutions or medical practices authorized for this purpose; by a person who does not have the capacity of specialist doctor in Obstetrics and Gynecology and the right of free medical practice in this specialty; if the gestational age has exceeded 14 weeks”, is punished with imprisonment from six months to three years, or a fine and the prohibition to exercise certain rights (Art. 201, Par. 1) [37].

Article 201, Paragraph 2 specifies that “the interruption of pregnancy committed under any circumstances, without the prior consent of the pregnant woman, is punished with imprisonment from 2 to 7 years and with the prohibition to exercise some rights” (Art. 201, Par. 2) [37].

If the acts referred to above (Art. 201, Par. 1 and Par. 2) [37] “caused the pregnant woman bodily injury, the punishment is imprisonment from 3 to 10 years and the prohibition to exercise some rights, and if they resulted in the pregnant woman's death, the punishment is imprisonment from 6 to 12 years and the prohibition to exercise some rights” (Art. 201, Par. 3) [37]. When the acts have been committed by a doctor, “in addition to the imprisonment, the prohibition to exercise the profession of doctor will also be applied” (Art. 201, Par. 4) [37].

The criminal legislation specifies that “the interruption of pregnancy does not constitute an offense when performed for therapeutic purposes by a specialist doctor in Obstetrics and Gynecology up to a gestational age of twenty-four weeks, or when the subsequent interruption of pregnancy, for therapeutic purposes, is in the interest of the mother or of the fetus” (Art. 201, Par. 6) [37]. However, everything turns on the phrases “therapeutic purposes” and “the interest of the mother and of the unborn child”, which leave the text of the law open to interpretation; ultimately, doctors alone are in a position to decide what should be done in such cases, assuming direct responsibility [38].

Article 202 of the Criminal Code defines the crime of harming an unborn child, setting out the punishments for the various types of injury that can occur during pregnancy or childbirth, whether caused by the mother or by the persons assisting the birth. It specifies that the mother who harms her fetus during pregnancy is not punished, and that no offense is committed where the injury occurred during pregnancy or childbirth if the acts were “committed by a doctor or by an authorized person to assist the birth or to follow the pregnancy, if they have been committed in the course of the medical act, complying with the specific provisions of his profession, and have been made in the interest of the pregnant woman or fetus, as a result of the exercise of an inherent risk in the medical act” (Art. 202, Par. 6) [37].

The factual situation in Romania

During the period 1948–1955, called “the small baby boom” [39], Romania registered an average fertility rate of 3.23 children per woman. Between 1955 and 1962, the fertility rate was below three children per woman, and by 1962 fertility had reached an average of two children per woman. This decline followed Decree No. 463/1957 on the liberalization of abortion. After the 1957 liberalization, the abortion rate increased from 220 abortions per 100 live births in 1960 to 400 abortions per 100 live births in 1965 [40].

The application of the provisions of Decrees No. 770 of 1966 and No. 411 of 1985 led to an increase in fertility in the first years (an average of 3.7 children in 1967 and 3.6 in 1968), followed by a decline until 1989, when an average of 2.2 children was recorded. It also produced a rise in maternal deaths caused by illegal abortions, from 85 deaths per 100,000 births in 1965 to 170 in 1983. It has been estimated that more than 80% of maternal deaths between 1980 and 1989 were caused by these legal constraints [30].
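For clarity, the two rates quoted above can be written out explicitly. These are the standard definitions implied by the figures, not formulas taken from the cited sources, with births used as the denominator in both, as in the text:

\[ \text{abortion rate (per 100 live births)} = \frac{\text{number of abortions}}{\text{number of live births}} \times 100 \]

\[ \text{maternal mortality (per 100\,000 births)} = \frac{\text{number of maternal deaths}}{\text{number of births}} \times 100\,000 \]

On this scale, the 1965 figure of 400 abortions per 100 live births corresponds to four abortions for every child born alive.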

After the Romanian Revolution of December 1989 and the fall of communism, with the abrogation of Articles 185–188 of the Criminal Code and of Decree No. 770/1966 by Decree-Law No. 1 of December 26th, 1989, abortion became legal in Romania, and in the following years the country reached the highest abortion rate in Europe. Subsequently, the number of abortions dropped gradually as the use of birth control increased [41].

Statistical data issued by the Ministry of Health and by the National Institute of Statistics (INS) in Romania reflect only legally performed abortions. The true number is much higher once illegal abortions are taken into account, especially those carried out before 1989 and those performed in private clinics after 1990. Summing the declared abortions for the period 1958–2014 yields 22,037,747, a figure that exceeds the current population of Romania. A detailed year-by-year breakdown of the abortion figures is presented in Table 1.

The number of abortions declared in Romania in the period 1958–2016

Source: Pro Vita Association (Bucharest, Romania), National Institute of Statistics (INS – Romania), EUROSTAT [42, 43, 44]

Data issued by the United Nations International Children's Emergency Fund (UNICEF) in June 2016 for the period 1989–2014, concerning reproductive behavior, indicate a continuously decreasing fertility rate for Romania, in line with the decreasing number of births, as well as a declining abortion rate per 100 deliveries (Table 2).

Reproductive behavior in Romania in 1989–2014

Source: United Nations International Children's Emergency Fund (UNICEF), Transformative Monitoring for Enhanced Equity (TransMonEE) Data. Country profiles: Romania, 1989–2015 [45].

Analysis of the data issued for the period 1990–2015 by the World Health Organization (WHO), UNICEF, the United Nations Population Fund (UNFPA), the World Bank and the United Nations Population Division shows that the maternal mortality rate has dropped considerably compared with 1990 (Table 3).

Maternal mortality estimation in Romania in 1990–2015

Source: World Health Organization (WHO), Global Health Observatory Data. Maternal mortality country profiles: Romania, 2015 [46].

Opinion survey: women’s opinion on abortion

Argument for choosing the research theme

Although abortion in Romania has been extensively investigated and debated, no ample sociological study has yet covered Romanian women's perception of abortion. We therefore undertook a study at national level in order to identify women's opinions on abortion and their motivations for carrying one out, and to identify the correlation between religious convictions and attitudes toward abortion.

Examining the literature in the field of study

For the conceptual framework of the research, we drew on the specialized literature, legislation and statistical documents.

Formulation of hypotheses and objectives

The first hypothesis was that Romanian women accept abortion, having an open attitude towards this act. Thus, the first objective of the research was to identify Romanian women’s attitude towards abortion.

The second hypothesis was that strong religious beliefs generate a lower tolerance towards abortion. Thus, the second objective of our research was to identify the correlation between religious beliefs and the attitude towards abortion.

The third hypothesis of the survey was that the main motivation for carrying out an abortion is that the woman does not want a baby, and the main motivation for keeping a pregnancy is that she wants one. In this context, the third objective of the research was to identify the main motivations for carrying out an abortion and for maintaining a pregnancy.

Another hypothesis was that current Romanian legislation on abortion is considered fair. From this hypothesis we derived the fourth objective: to identify the degree of satisfaction with the current regulatory provisions governing abortion.

Research methodology

The research method was a sociological survey using the questionnaire technique. We sampled by age and residence, covering representative numbers of the population from both more developed and less developed areas.

Determination of the sample to be studied

Because abortion is typically a women's experience, we chose to conduct the quantitative research only among women. We constructed the sample by selecting 1260 women between the ages of 15 and 44 years (the ages at which women most frequently give birth). We used quota sampling on two variables, age group and residence (urban/rural), so that the persons included in the sample would retain the characteristics of the general population.

The sample of 1260 women represents a sampling fraction of approximately 0.03% of the target population.
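As an illustration of the quota sampling described above, the short sketch below allocates the 1260 interviews proportionally across age-by-residence strata. The stratum shares are hypothetical placeholders, not the INS 2011 census figures the study actually used, and the 4.2 million figure for women aged 15–44 is likewise an assumed round number.

# A minimal sketch of proportional quota allocation. The population shares
# below are hypothetical placeholders, not the INS 2011 census figures.

TOTAL_SAMPLE = 1260  # women aged 15-44, as in the study

population_shares = {
    ("15-24", "urban"): 0.17, ("15-24", "rural"): 0.16,
    ("25-34", "urban"): 0.19, ("25-34", "rural"): 0.15,
    ("35-44", "urban"): 0.18, ("35-44", "rural"): 0.15,
}

def allocate_quotas(shares, total):
    """Give each stratum a number of interviews proportional to its share."""
    return {stratum: round(share * total) for stratum, share in shares.items()}

quotas = allocate_quotas(population_shares, TOTAL_SAMPLE)
for stratum, n in sorted(quotas.items()):
    print(stratum, n)

# Sampling fraction: 1260 respondents out of an assumed ~4.2 million women
# aged 15-44 is about 0.03%, matching the fraction reported above.
print(f"sampling fraction: {TOTAL_SAMPLE / 4_200_000:.2%}")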

The questionnaires applied were distributed as follows (Table 4).

The sampling quotas based on age and region of residence

Source: sample constructed on the basis of population data issued by the National Institute of Statistics (INS – Romania) from the 2011 population census [47].

Data collection

Data collection was carried out by questionnaires administered by 32 field operators between May 1st and May 31st, 2018.

The analysis of the research results

In the next section, we will present the main results of the quantitative research carried out at national level.

Almost three-quarters of the women in the sample (70%) agree with carrying out an abortion in certain circumstances, and only 24% chose the answer “No, never”. In contemporary society, abortion is often the first solution for a woman whose pregnancy is not desired, and even though advanced medical techniques are much safer, an abortion still carries a health risk. A further 6% of respondents agree with carrying out an abortion regardless of the circumstances (Table 5).

Opinion on the possibility of carrying out an abortion

Although abortions carried out after 14 weeks are illegal except for medical reasons, more than half of the surveyed women stated that they would agree with such an abortion in certain circumstances. At the opposite pole, 31% said they would never agree to an abortion after 14 weeks, while 5% fully accepted the idea of aborting a pregnancy that has passed 14 weeks (Table 6).

Opinion on the possibility of carrying out an abortion after the period of 14 weeks of pregnancy

For 53% of respondents, abortion is both a crime and a woman's right. By contrast, 28% of the women consider abortion a crime, and 16% associate it solely with a woman's right (Table 7).

Opinion on abortion: at the border between crime and a woman’s right

Opinions on what women abort at the moment of a voluntary interruption of pregnancy are split: 59% consider that it depends on the timing of the abortion, more specifically on the stage of the pregnancy's development; 24% consider that, regardless of the period in which it is carried out, women abort a child; and 14% opted for a fetus (Table 8).

Abortion of a child vs. abortion of a fetus

Among respondents who consider that whether a child or a fetus is aborted depends on the timing of the abortion, 37.5% placed the difference between a baby and a fetus after 14 weeks of pregnancy (the legally accepted limit for abortion); 33% placed the distinction at the first heartbeats; 18.1% at the point when the child has all its features definitively outlined and can move by itself; and 2.8% when the first electroencephalographic traces can be detected and the child has formed all internal and external organs. A further 1.7% of respondents consider that the difference occurs at the formation of the central nervous system, and 1.4% when the unborn child has all the features clearly visible in a newborn (Table 9).

The opinion on the moment that makes the difference between a fetus and a child

We noticed that highly religious respondents clearly associate abortion with crime, and consider that a child, not a fetus, is aborted at the interruption of a pregnancy. Unexpectedly, however, 27% of the women who declared themselves very religious also stated that they see abortion as a crime but also as a woman's right, and 31% of the women claiming profound religious beliefs consider that either a child or a fetus may be aborted, depending on the timing of the abortion (Tables 10 and 11).

The correlation between the level of religious beliefs and the perspective on abortion seen as a crime or a right

The correlation between the level of religious beliefs and the perspective on abortion procedure conducted on a fetus or a child

More than half of the respondents chose the appearance of medical problems in the child as the main potential reason for an abortion. A baby's health is the chief concern of future mothers, and of every parent; the prospect of a child with serious health issues frightens any future parent and is often, at least in theory, a compelling reason to opt for abortion. At the other extreme, 12% of respondents would not choose abortion under any circumstances. The other reasons for which women would opt for an abortion are a medical problem of the woman herself (22%) or not wanting the child (10%) (Table 12).

Potential reasons for carrying out an abortion

For most of the women (56% of respondents), wanting to give birth to a child is the reason that would determine them to keep a pregnancy. Morality (26%), faith (10%) and legal restrictions (4%) are the three other reasons for which women would not interrupt a pregnancy. Only 2% of the respondents mentioned other reasons, such as health or age.

Twenty-three percent of the surveyed women said that they have had an abortion, while 77% have not undergone such an intervention, either because there was no need or because they kept the pregnancy (Table 13).

Rate of abortion among women in the sample

Most of these respondents (87%) specified that their abortion was carried out within the first 14 weeks, the legally accepted limit: 43.6% in the first four weeks, 39.1% between weeks 4 and 8, and 4.3% between weeks 8 and 14. It should be noted that 8.7% could not estimate the stage of the pregnancy at which the abortion was carried out, answering “I don't know”, and 4.3% refused to answer the question.

Abortions are performed for many reasons, but not wanting a child is the main one, mentioned by 47.8% of the surveyed women who have had at least one abortion. Other reasons for the interruption of a pregnancy include the woman's medical problems (13.3%), it not being the right time to become a mother (10.7%), age (8.7%), medical problems of the child (4.3%), lack of money (4.3%), family pressure (4.3%), and the partner or spouse not wanting the child. A further 3.3% of women cited other reasons, such as too large an age gap between children, career, or marital status. Asked later whether they regretted the abortion, 69.6% of the women who had had at least one abortion said they did (34.8% answered “Yes” and 34.8% “Yes, partially”); 26.1% do not regret the choice to interrupt the pregnancy, and 4.3% chose not to answer. We noted that, for women who have actually experienced abortion, the reasons were more diverse than those given in answer to the earlier hypothetical question, “What reasons would determine you to have an abortion?” (Table 14).

The reasons that led the women in the sample to have an abortion

The largest share of respondents (37.5%) considered “nervous depression” to be the main consequence of abortion, followed by “insomnia and nightmares” (24.6%), “eating disorders” and “affective disorders” (7.7% each), “deterioration of interpersonal relationships” and “the feeling of guilt” (6.3% each), and “sexual disorders” and “panic attacks” (6.3% each) (Table 15).

Opinion on the consequences of abortion

Over half of the respondents believe that abortion should be legal in certain circumstances, as currently provided by law; 39% say it should always be legal, and only 6% that it should be illegal (Table 16).

Opinion on the legal regulation of abortion

Although the current legislation does not punish pregnant women who interrupt their pregnancy or intentionally injure their fetus, the survey results indicate that 61% of the women surveyed believe that national law should punish the woman, and only 28% agree with the current legislation (Table 17).

Opinion on the possibility of punishing the woman who interrupts the course of pregnancy or injures the fetus

For the largest share of respondents (40.6%), the penalty provided by the current legislation for illegal abortion, imprisonment from six months to three years or a fine and the deprivation of certain rights, is fair; for 39.6% the punishment is too lenient, and for 9.5% too severe. Imprisonment from two to seven years and the deprivation of certain rights for an abortion performed without the consent of the pregnant woman is considered too lenient by 65% of interviewees; 14% think it fair, and only 19% consider that Romanian legislation is too severe with those who commit such an act. Imprisonment from three to 10 years and the deprivation of certain rights for the acts described above, where injury was caused to the woman, is considered too lenient by more than half of those surveyed (64%), while nearly a quarter (22%) consider it fair; only 9% regard this measure as too severe (Table 18).

Opinion on the regulation of abortion of the Romanian Criminal Code (Art. 201)

Conclusions

After analyzing the results of the sociological research on abortion undertaken at national level, we see that 76% of Romanian women accept abortion, the majority of them only in certain circumstances (a certain period after conception, medical reasons, etc.). A percentage of 64% of the respondents accept the idea of abortion after 14 weeks of pregnancy (for solid reasons or regardless of the reason). The study also shows that over 50% of Romanian women see abortion as both a woman's right and a crime, and believe that whether a child or a fetus is aborted depends on the moment at which the pregnancy is interrupted. The association of abortion with crime, and with the idea that a child is aborted, is most frequently found among very religious respondents.

The main motivation for Romanian women in deciding not to have an abortion is that they want the child, and the main hypothetical reason for having one is the child's medical problems. It is notable, however, that in real situations women who have already had at least one abortion most often resorted to it because they did not want the child, in contrast to the hypothetical scenario, in which a medical problem was felt to be the main reason.

Regarding satisfaction with the current national legislation on abortion, the situation is rather surprising. A significant percentage of respondents (61%) felt it necessary to punish the woman who undergoes an illegal abortion, although the legislation provides no such punishment. On the other hand, satisfaction with the penalties the law provides for the various violations of the legal conditions for performing an abortion is low: on average, only 25.5% of respondents are satisfied with them, while the majority (56.2% on average) consider the penalties insufficient.

Understood as a social phenomenon intensified by human vulnerabilities, the most obvious of which is the acceptance of comfort [48], abortion is today, from a legal or religious perspective, no longer a problem in Romanian society. Perceptions of legislative, moral and religious sanction will always vary with beliefs, environment, education, etc. Romania's one truly great social problem is the steadily falling birth rate.

Conflict of interests

The authors declare that they have no conflict of interests.

Legal and Ethical Implications Essay

The recent business environment is marked by increased integration and enlargement. In the media and TV industries, mergers can be seen as a natural process that helps companies create a unified network of activities and information exchange. The changes should be allowed because they will improve the productivity of the company and allow DWI to compete on a global scale. DWI employees should recognize that rapid technological changes are about to make their media products obsolete. In terms of moral philosophy, the consequences of mergers would benefit the company and its employees. Utilitarian ethics is known as consequentialism: it considers an action to be morally right or wrong based solely on the outcomes that result from performing it. Mergers can be seen as the right action because they bring the best consequences. The implied assumption is that the financial costs and benefits of mergers are measurable on a common numerical scale and can be added to and subtracted from each other (Bentham, 2000).
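To make this consequentialist arithmetic concrete, here is a minimal sketch that sums hypothetical stakeholder costs and benefits on a single numerical scale. The stakeholder groups and figures are invented for illustration; they come neither from the essay nor from Bentham.

# A minimal sketch of utilitarian cost-benefit aggregation. All stakeholder
# groups and dollar figures (in millions) are hypothetical placeholders.

merger_outcomes = {
    "shareholders": +120.0,  # expected gain from consolidation
    "employees":     -30.0,  # expected cost of restructuring
    "consumers":     +45.0,  # expected benefit from improved products
    "small_rivals":  -60.0,  # expected competitive harm
}

# Consequentialism judges the action by the sum of its outcomes: on this
# crude model, the merger is permissible if net utility is positive.
net_utility = sum(merger_outcomes.values())
verdict = "permissible" if net_utility > 0 else "impermissible"
print(f"net utility: {net_utility:+.1f}M -> {verdict}")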

Following FCC regulations, further consolidation of the market will threaten the position of smaller companies and their products. Thus, the interests to consider when choosing an action are the non-egoistic and unselfish ones that yield the greatest utility for all the employees affected by the action. An ethics audit suggests that mergers and changes in the telecommunications industry are permissible, as they benefit media companies and allow greater flexibility of resources. It is important to note that the giant corporations in the media industry already exercise direct and indirect control over smaller companies, so the FCC changes will legalize their ownership and allow the state to oversee their real business activity (Kotler and Lee, 2004). Business practices that violate the rules but result in beneficial outcomes are still considered wrong. The Sherman Anti-Trust Act prohibits trusts and other forms of conspiratorial cooperation deemed illegal; on this basis, conspiratorial agreements between media companies are unlawful and will be punished. To avoid such situations, the sources of regulations and ethical principles can be either theological, in the sense that the actions are stipulated as moral by a religion, or public, in the sense that they result from a social agreement as to whether they are right or wrong. Because of the limitations of these two approaches, principles of media regulation have been adopted based either on the consequences of adopting a particular set of moral rules, or on our supposed faculty of moral intuition (Donaldson et al., 2002).

The legal case Red Lion Broadcasting Co. v. FCC shows that fairness in broadcasting should be the main priority of corporations. It is therefore critically important for an acquiring corporation to determine the real reasons and motivations of the owners who sell; many media companies have been misled by their own wishful thinking about what is really going on within the companies they wish to acquire. In terms of the deontological approach, mergers in the media industry are morally wrong insofar as they violate the rights of employees (Red Lion Broadcasting Co. v. FCC, 1969). Still, it would be morally right to allow companies to expand their business and deliver the best possible products to end consumers. The Telecommunications Act (1996) stipulates the main principles of business conduct and imposes additional obligations on media companies. The outcome of this approach is overall good and happiness, as increased productivity will benefit both the company and its employees. Broadcasting is governed and licensed by the government, so changes at the FCC are crucial for the overall success of media companies on the international scale. Employees have obligations and duties to the company, so they should perform only professional activities during business hours. In this case, mergers are the only possible way to control the market (Kotler and Lee, 2004).

In sum, mergers should be permitted by law, as they benefit end consumers and allow companies to globalize their activities. Mergers do not mean the absence of control or of the legal responsibilities of media companies. The nature of the ethical and moral decision in each of these roles evolves from a best-outcomes approach toward a coherence or pragmatic view of truth in most of the other roles that recognize the nature of ethical decision-making.

  • Bentham, J. (2000). Deontology; or, The Science of Morality. BookSurge Publishing.
  • Donaldson, T., et al. (2002). Ethical Issues in Business, 7th edn. Upper Saddle River, NJ: Prentice Hall.
  • Kotler, P., & Lee, N. (2004). Corporate Social Responsibility: Doing the Most Good for Your Company and Your Cause. Wiley, 1st edition.
  • Red Lion Broadcasting Co. v. FCC, 395 U.S. 367 (1969).

IvyPanda. (2022, March 5). Legal and Ethical Implications. https://ivypanda.com/essays/legal-and-ethical-implications/

"Legal and Ethical Implications." IvyPanda , 5 Mar. 2022, ivypanda.com/essays/legal-and-ethical-implications/.

IvyPanda . (2022) 'Legal and Ethical Implications'. 5 March.

IvyPanda . 2022. "Legal and Ethical Implications." March 5, 2022. https://ivypanda.com/essays/legal-and-ethical-implications/.

1. IvyPanda . "Legal and Ethical Implications." March 5, 2022. https://ivypanda.com/essays/legal-and-ethical-implications/.

Bibliography

IvyPanda . "Legal and Ethical Implications." March 5, 2022. https://ivypanda.com/essays/legal-and-ethical-implications/.
