
Ethical Principles for Medical Research and Practice

The Nuremberg Code, Declarations of Helsinki and Geneva, and the International Code of Medical Ethics provide a set of ethical principles regarding human experimentation and clinical care.

  • Nuremberg Code
  • Declaration of Helsinki
  • Geneva Declaration
  • International Code of Medical Ethics

The Nuremberg Code

The Nuremberg Code is a 10-point set of rules for the conduct of human experiments articulated in 1947 in the trials of Nazi doctors and bureaucrats convicted of crimes against humanity for their roles in concentration camp experiments.

The Nuremberg Code 70 Years Later

Jonathan D. Moreno

JAMA | Viewpoint, August 17, 2017

Related Material

Medicine Against Society: Lessons From the Third Reich

Jeremiah A. Barondess

JAMA | November 27, 1996

The Nuremberg Code and the Nuremberg Trial: A Reappraisal

US Medical Researchers, the Nuremberg Doctors Trial, and the Nuremberg Code: A Review of Findings of the Advisory Committee on Human Radiation Experiments

Ruth R. Faden and Coauthors

Nuremberg and the Issue of Wartime Experiments on US Prisoners: The Green Committee

Jon M. Harkness

Teaching of Human Rights in US Medical Schools

Jeffrey Sonis

The Social Responsibilities of Health Professionals: Lessons From Their Role in Nazi Germany

Victor W. Sidel

Legacies of Nuremberg: Medical Ethics and Human Rights

Michael A. Grodin and George J. Annas

The Foundations of Bioethics

James M. Humber

The Path to Nuremberg in the Pages of JAMA, 1933-1939

William E. Seidelman

Institutional Review Boards Under Stress: Will They Explode or Change?

Donald F. Phillips

Bioethics Advisory Commission Holds First Meeting to Define Governing Principles of Ethical Research

Charles Marwick

Nazi Origins of an Anatomy Text: The Pernkopf Atlas

Howard A. Israel and William E. Seidelman

Richard S. Panush

Nazi Origins of an Anatomy Text: The Pernkopf Atlas-Reply

Edward B. Hutton Jr

The Declaration of Helsinki

The World Medical Association’s Declaration of Helsinki provides principles for medical researchers to guide the ethical conduct of research involving human participants.

World Medical Association Declaration of Helsinki

World Medical Association

JAMA | Special Communication, November 27, 2013

Previous Versions

Fifth Revision

JAMA | Special Communication, December 20, 2000

Perspectives on the Fifth Revision of the Declaration of Helsinki

JAMA | Commentary, December 20, 2000

Fourth Revision

JAMA | March 19, 1997

Second Revision

JAMA | September 12, 1966

JAMA | Medical News, September 28, 1964

Editorials and Correspondence

The Declaration of Helsinki, 50 Years Later

Paul Ndebele

JAMA | Viewpoint, November 27, 2013

The 50th Anniversary of the Declaration of Helsinki: Progress but Many Remaining Challenges

Joseph Millum and Coauthors

Declaration of Helsinki and Protection for Vulnerable Research Participants

Samia A. Hurst

JAMA | Comment & Response, March 26, 2014

Related Material

International Research Ethics Education

JAMA | Viewpoint, February 3, 2015

How to Enroll Participants in Research Ethically

David Wendler

JAMA | Commentary, April 20, 2011

Legal Issues in Scientific Research

Paul E. Kalb and Kristin Graham Koehler

JAMA | Health Law and Ethics, January 2, 2002

Updating Protections for Human Subjects Involved in Research

Jonathan Moreno and Coauthors

JAMA | Policy Perspectives, December 9, 1998

The Declaration of Geneva

The World Medical Association Declaration of Geneva outlines the professional duties of physicians and affirms the ethical principles of the global medical profession.

The Revised Declaration of Geneva: A Modern-Day Physician’s Pledge

Ramin Walter Parsa-Parsi

JAMA | Viewpoint, August 14, 2017

WMA Declaration of Geneva Revisions

The International Code of Medical Ethics

The World Medical Association’s International Code of Medical Ethics defines and elucidates the professional duties of physicians toward their patients, other physicians and health professionals, themselves, and society as a whole, in concordance with the WMA’s Declaration of Geneva: The Physician’s Pledge, and the WMA’s entire body of policies.

The International Code of Medical Ethics of the World Medical Association

JAMA | Editorial, October 13, 2022

External Links

  • Wikipedia: Declaration of Helsinki
  • Wikipedia: Nuremberg Code
  • Wikipedia: Declaration of Geneva




NIH Clinical Research Trials and You: Guiding Principles for Ethical Research

Pursuing Potential Research Participants Protections


“When people are invited to participate in research, there is a strong belief that it should be their choice based on their understanding of what the study is about, and what the risks and benefits of the study are,” said Dr. Christine Grady, chief of the NIH Clinical Center Department of Bioethics, to Clinical Center Radio in a podcast.

Clinical research advances the understanding of science and promotes human health. However, it is important to remember the individuals who volunteer to participate in research. There are precautions researchers can take – in the planning, implementation and follow-up of studies – to protect these participants in research. Ethical guidelines are established for clinical research to protect patient volunteers and to preserve the integrity of the science.

NIH Clinical Center researchers published seven main principles to guide the conduct of ethical research:

  • Social and clinical value
  • Scientific validity
  • Fair subject selection
  • Favorable risk-benefit ratio
  • Independent review
  • Informed consent
  • Respect for potential and enrolled subjects

Social and clinical value

Every research study is designed to answer a specific question. The answer should be important enough to justify asking people to accept some risk or inconvenience for others. In other words, answers to the research question should contribute to scientific understanding of health or improve our ways of preventing, treating, or caring for people with a given disease to justify exposing participants to the risk and burden of research.

Scientific validity

A study should be designed in a way that will get an understandable answer to the important research question. This includes considering whether the question asked is answerable, whether the research methods are valid and feasible, and whether the study is designed with accepted principles, clear methods, and reliable practices. Invalid research is unethical because it wastes resources and exposes people to risk for no purpose.

Fair subject selection

The primary basis for recruiting participants should be the scientific goals of the study — not vulnerability, privilege, or other unrelated factors. Participants who accept the risks of research should be in a position to enjoy its benefits. Specific groups of participants (for example, women or children) should not be excluded from research opportunities without a good scientific reason or a particular susceptibility to risk.

Favorable risk-benefit ratio

Uncertainty about the degree of risks and benefits associated with a clinical research study is inherent. Research risks may be trivial or serious, transient or long-term. Risks can be physical, psychological, economic, or social. Everything should be done to minimize the risks and inconvenience to research participants, to maximize the potential benefits, and to determine that the potential benefits are proportionate to, or outweigh, the risks.

Independent review

To minimize potential conflicts of interest and make sure a study is ethically acceptable before it starts, an independent review panel should review the proposal and ask important questions, including: Are those conducting the trial sufficiently free of bias? Is the study doing all it can to protect research participants? Has the trial been ethically designed and is the risk–benefit ratio favorable? The panel also monitors a study while it is ongoing.

Informed consent

Potential participants should make their own decision about whether they want to participate or continue participating in research. This is done through a process of informed consent in which individuals (1) are accurately informed of the purpose, methods, risks, benefits, and alternatives to the research, (2) understand this information and how it relates to their own clinical situation or interests, and (3) make a voluntary decision about whether to participate.

Respect for potential and enrolled participants

Individuals should be treated with respect from the time they are approached for possible participation — even if they refuse enrollment in a study — throughout their participation and after their participation ends. This includes:

  • respecting their privacy and keeping their private information confidential
  • respecting their right to change their mind, to decide that the research does not match their interests, and to withdraw without a penalty
  • informing them of new information that might emerge in the course of research, which might change their assessment of the risks and benefits of participating
  • monitoring their welfare and, if they experience adverse reactions, unexpected effects, or changes in clinical status, ensuring appropriate treatment and, when necessary, removal from the study
  • informing them about what was learned from the research


This page last reviewed on March 16, 2016


BMC Medical Ethics

Latest Collections Open to Submissions

Racial disparities in healthcare

Guest Edited by Shervin Assari and Jordan M. Sang

Islamic medical ethics

Guest Edited by Rosie Duivenbode, Aasim I. Padela and Inass Shaltout

Ethical considerations in decision-making for gender-affirming care

Guest Edited by Asa Radix, Arjee Javellana Restar, Megan E. Sutter and Ariella R. Tabaac

Most recent articles

Who to engage in HIV vaccine trial benefit-sharing negotiations? An empirical proposition of a framework

Authors: Godwin Pancras, Mangi Ezekiel, Erasto Mbugi and Jon F. Merz

Navigating the ethical landscape of artificial intelligence in radiography: a cross-sectional study of radiographers’ perspectives

Authors: Faten Mane Aldhafeeri

A qualitative interview study to determine barriers and facilitators of implementing automated decision support tools for genomic data access

Authors: Vasiliki Rahimzadeh, Jinyoung Baek, Jonathan Lawson and Edward S. Dove

Facing a request for assisted death - views of Finnish physicians, a mixed method study

Authors: Reetta P. Piili, Minna Hökkä, Jukka Vänskä, Elina Tolvanen, Pekka Louhiala and Juho T. Lehto

Ethics support for ethics support: the development of the Confidentiality Compass for dealing with moral challenges concerning (breaching) confidentiality in moral case deliberation

Authors: Wieke Ligtenberg, Margreet Stolper and Bert Molewijk

Most accessed articles

Controversies and considerations regarding the termination of pregnancy for Foetal Anomalies in Islam

Authors: Abdulrahman Al-Matary and Jaffar Ali

Implicit bias in healthcare professionals: a systematic review

Authors: Chloë FitzGerald and Samia Hurst

Top 10 health care ethics challenges facing the public: views of Toronto bioethicists

Authors: Jonathan M Breslin, Susan K MacRae, Jennifer Bell and Peter A Singer

The four principles: Can they be measured and do they predict ethical decision making?

Authors: Katie Page

Human cloning laws, human dignity and the poverty of the policy making dialogue

Authors: Timothy Caulfield


Black Lives Matter

A collection of books, journal articles and magazine content that amplifies Black voices and the issues raised by the Black Lives Matter movement.

BMC Series Blog

Highlights of the BMC series – April 2024

15 May 2024

Cleft Lip and Palate Awareness Week: Highlights from the BMC Series

14 May 2024

Welcoming BMC Agriculture: A New Journal for Sustainable Agricultural Science

08 May 2024


Annual Journal Metrics

2022 Citation Impact
  • 2.7 - 2-year Impact Factor
  • 3.5 - 5-year Impact Factor
  • 1.410 - SNIP (Source Normalized Impact per Paper)
  • 0.809 - SJR (SCImago Journal Rank)

2023 Speed
  • 30 days from submission to first editorial decision for all manuscripts (median)
  • 200 days from submission to acceptance (median)

2023 Usage
  • 1,830,857 downloads
  • 1,282 Altmetric mentions

Peer-review Terminology

The following summary describes the peer review process for this journal:

Identity transparency: Single anonymized

Reviewer interacts with: Editor

Review information published: review reports; reviewer identities (reviewer opt-in); author/reviewer communication


ISSN: 1472-6939


The Ethics of Clinical Research

Clinical research attempts to address a relatively straightforward and important challenge: how do we determine whether one medical intervention is better than another, whether it offers greater clinical benefit and/or poses fewer risks? Clinicians may one day be able to answer these questions by relying on computer models, thereby avoiding reliance on clinical research and the ethical concerns it raises. Until that day, clinical researchers begin by testing potential new medical interventions in the laboratory, and often in animals. While these methods can provide valuable information and, in the case of animal research, raise important ethical issues of their own, potential new interventions eventually must be tested in human beings. Interventions which work miracles in test tubes and in mice often leave humans untouched, or worse off.

Testing medical interventions in humans typically poses some risks to the participants, no matter how many laboratory and animal tests precede it. In this way, the process of collecting data through clinical trials to improve health and well-being inevitably exposes research participants to some risks for the benefit of future patients. This brings us to the central ethical challenge posed by clinical research: When is it ethically permissible to expose research participants to risks of harm for the benefit of others? The present entry focuses on this concern, and canvasses the most prominent attempts to address it. The present entry largely brackets the range of interesting and important ethical challenges that arise in the course of conducting clinical research: How should it be reviewed? Who may conduct it? What must potential participants understand to give valid consent? May it be conducted in countries that will not be able to afford the intervention being tested? Do investigators have any obligations to treat unrelated medical conditions in participants that they uncover during the course of their research?

One might attempt to address the central ethical challenge by limiting clinical research to the medical setting, offering experimental interventions to patients who want to try them. This approach, which has the virtue of evaluating interventions in the process of trying to help individual patients, makes sense for comparisons of two or more interventions that are widely accepted and already in clinical use. In contrast, this approach poses enormous scientific and practical problems with respect to testing new interventions. On the practical side, who would be willing to manufacture a new intervention without knowing whether it works? What dose should be used? How often should the new drug be taken? More importantly, this approach might not yield reliable information as to whether the new treatment is useful or harmful until hundreds, perhaps thousands of people have received it. Clinical research is designed to address these concerns by systematically assessing potential new treatments in a small number of individuals, including very sick ones, before offering them more widely.

As we go about our daily lives, driving cars, flushing our waste down the drain, exhaling, getting a dog, we inevitably expose others to risks of harm. Despite the fact that these practices pervade our lives, there has been surprisingly little philosophical analysis of the conditions under which they are acceptable (Hayenhjelm and Wolff 2012). In addition to being of value in its own right, evaluation of the central ethical challenge posed by clinical research thus provides an opportunity to consider one of the more fundamental concerns in moral theory: when is it acceptable to expose some individuals to risks of harm?

1. What is Clinical Research?
2. Early Clinical Research
3. Abuses and Guidelines
4. Clinical Research and Clinical Care
5. A Libertarian Analysis
6. Contract Theory
7. Minimal Risks
8. Goals and Interests
9. Industry Sponsored Research
10. Learning Health Care
Other Internet Resources
Related Entries

1. What is Clinical Research?

Human subjects research is research which studies humans, as opposed to animals, atoms, or asteroids. Assessment of whether humans prefer 100 dollars or a 1% chance of 10,000 dollars constitutes human subjects research. Clinical research refers to the subset of human subjects research that evaluates the impact interventions have on human beings with the goal of assessing whether they might help to improve human health and well-being. The present analysis focuses on research that is designed to improve human health and well-being by identifying better methods to treat, cure or prevent illness or disease. This focus is intended to bracket the question of whether research on enhancements that might increase our well-being by making us better than normal, allowing us to remember more or worry less, without treating, curing, or preventing illness or disease qualifies as clinical research.

We shall also bracket the question of whether quality improvement and quality assurance projects qualify as clinical research. To briefly consider the type of research at the heart of this debate, consider a hospital which proposes to evaluate the impact of checklists on the quality of patient care. Half the nurses in the hospital are told to continue to provide care as usual; the other half are provided with a checklist and instructed to mechanically check off each item as they complete it when caring for their patients. The question of whether this activity constitutes clinical research is of theoretical interest as a means to clarifying the precise boundaries of the concept. Should we say that this is not clinical research because the checklist is used by the nurses, not administered to the patients? Or should we say this is clinical research because it involves the systematic testing of a hypothesis which is answered by collecting data on patient outcomes? The answer has significant practical implications, determining whether these activities must satisfy existing regulations for clinical research, including whether the clinicians need to obtain patients’ informed consent before the nurses taking care of them can use the checklist.

While clinical medicine is enormously better than it was 100 or even 50 years ago, there remain many diseases against which current clinical medicine offers an inadequate response. To name just a few, malaria kills over a million people, mostly children, every year; chronic diseases, chief among them heart disease and stroke, kill millions each year, and there still are no effective treatments for Alzheimer disease. The social value of clinical research lies in its ability to collect information that might be useful for identifying improved methods to address these conditions. Yet, it is the rare study which definitively establishes that a particular method is effective and safe for treating, curing or preventing some illness. The success of specific research studies more commonly lies in the gathering of information that, together with the results of many other studies, may yield these improvements. For example, prior to establishing the efficacy of a new treatment for a given condition, researchers typically need to identify the cause of the condition, possible mechanisms for treating it, a safe and effective dose, and ways of testing whether the drug alters the course of the disease.

The process of testing potential new treatments can take 10–15 years, and is standardly divided into phases. Formalized phase 0 studies are a relatively recent phenomenon involving the testing of interventions and methods which might be used in later phase studies. A phase 0 study might be designed to determine the mechanism of action of a particular drug and evaluate different ways to administer it. Phase 1 studies are the earliest tests of a new intervention and are conducted in small numbers of individuals. Phase 1 studies are designed to evaluate the pharmacokinetics and pharmacodynamics of new treatments, essentially evaluating how the drug influences the human body and how the human body influences the drug. Phase 1 studies also evaluate the risks of the treatment and attempt to identify an appropriate dose to be used in subsequent phase 2 studies. Phase 1 studies pose risks and frequently offer little if any potential for clinical benefit to participants. As a result, a significant amount of the ethical concern over clinical research focuses on phase 1 studies.

If phase 1 testing is successful, potential new treatments go on to larger phase 2 studies which are designed to further assess risks and also to evaluate whether there is any evidence that the treatment might be beneficial. Successful phase 2 studies are followed by phase 3 studies which involve hundreds, sometimes thousands of patients. Phase 3 studies are designed to provide a rigorous test of the efficacy of a treatment and frequently involve randomization of participants to the new treatment or a control, which might be the current treatment or a placebo. Finally, post-marketing or phase 4 studies evaluate the use of interventions in clinical practice.

The first three phases of clinical trials typically include purely research procedures, such as blood draws, imaging scans, or biopsies, that are included in the study to collect data regarding the treatment under study. Analysis of the central ethical challenge posed by clinical research thus focuses on three related risk-benefit profiles: (a) the risk-benefit profile of the intervention(s) under study; (b) the risk-benefit profile of the included research procedures; and (c) the risk-benefit profile of the study as a whole.

In some cases, receipt of potential new treatments is in the ex ante interests of research participants. For example, the risks posed by an experimental cancer treatment might be justified by the possibility that it will extend participants’ lives. Moreover, the risk-benefit profile of the treatment might be as favorable for participants as the risk-benefit profile of the available alternatives. In these cases, receipt of the experimental intervention ex ante promotes participants’ interests. In other cases, participation in research poses ‘net’ risks, that is, risks of harm which are not, or not entirely, justified by potential clinical benefits to individual participants. A first-in-human trial of an experimental treatment might involve a single dose to see whether humans can tolerate it. And it might occur in healthy individuals who have no need of treatment. These studies pose risks to participants and offer essentially no chance for clinical benefit. The qualifier ‘essentially’ is intended to capture the fact that the research procedures included in clinical trials may inadvertently end up providing some clinical benefit to some participants. For example, a biopsy that is used to collect research data may disclose a previously unidentified and treatable condition. The chance for such benefit, albeit real, is typically so remote that it is not sufficient to compensate for the risks of the procedure. Whether a study as a whole poses net risks depends on whether the potential benefits of the experimental intervention compensate for its risks plus the net risks of the research procedures included in the study.

Clinical research which poses net risks raises important ethical concern. Net-risk studies raise concern that participants are being used as mere means to collect information to benefit future patients. Research procedures that pose net risks may seem to raise less concern when they are embedded within a study which offers a favorable risk-benefit profile overall. Yet, since these procedures pose net risks, and since the investigators could provide participants with the new potential treatment alone, they require justification. An investigator who is about to insert a needle into a research participant to obtain some blood purely for laboratory purposes faces the question of whether doing so is ethically justified, even when the procedure is included in a study that offers participants the potential for important medical benefit. The goal of ethical analyses of clinical research is to provide an answer.

Clinical research poses three types of net risks: absolute, relative, and indirect (Rid and Wendler 2011). Absolute net risks arise when the risks of an intervention or procedure are not justified by its potential clinical benefits. Most commentators focus on this possibility with respect to research procedures which pose some risks and offer (essentially) no chance of clinical benefit, such as blood draws to obtain cells for laboratory studies. Research with healthy volunteers is another example which frequently offers no chance for clinical benefit. Clinical research also poses absolute net risks when it offers a chance for clinical benefit which is not sufficient to justify the risks participants face. A kidney biopsy to obtain tissue from presumed healthy volunteers may offer some very low chance of identifying an unrecognized and treatable pathology. This intervention nonetheless poses net risks if the chance for clinical benefit for the participants is not sufficient to justify the risks of their undergoing the biopsy.

Relative net risks arise when the risks of a research intervention are justified by its potential clinical benefits, but the intervention’s risk-benefit profile is less favorable than the risk-benefit profile of one or more available alternatives. Imagine that investigators propose a randomized, controlled trial to compare an inexpensive drug against an expensive and somewhat more effective drug. Such trials make sense when, in the absence of a direct comparison, it is unclear whether the increased effectiveness of the more expensive drug justifies its costs. In this case, receipt of the cheaper drug would be contrary to participants’ interest in comparison to receiving the more expensive drug. The trial thus poses relative net risks to participants.

Indirect net risks arise when a research intervention has a favorable risk-benefit profile, but the intervention diminishes the risk-benefit profile of other interventions provided as part of or in parallel to the study. For example, an experimental drug for cancer might undermine the effectiveness of other drugs individuals are taking for their condition. The risks of research participation can be compounded if the indicated response to the harm in question poses additional risks. Kidney damage suffered as the result of research participation might lead to the need for short-term dialysis, which poses additional risks to the individual; a participant who experiences a post-lumbar puncture headache might need a ‘blood patch’, which poses some risk of blood entering the epidural space, which would in turn call for a further response carrying additional risks. While commentators tend to focus on the risks of physical harm, participation in clinical research can pose other types of risks as well, including psychological, economic, and social risks. Depending on the study and the circumstances, individuals who are injured as the result of participating in research might incur significant expenses. Most guidelines and regulations stipulate that evaluation of the acceptability of clinical research studies should take into account all the different risks to which participants are exposed.

To assess the ethics of exposing research participants to risks, one needs an account of why exposing others to risks raises ethical concern in the first place. Being exposed to risks obviously raises concern to the extent that the potential harm to which the risk refers is realized: the chance of a headache turns into an actual headache. Being exposed to risks also can lead to negative consequences as a result of the recognition that one is at risk of harm. Individuals who recognize that they face a risk may become frightened; they also may take costly or burdensome measures to protect themselves. In contrast, the literature on the ethics of clinical research implicitly assumes that being exposed to risks is not per se harmful. The mere fact that participants are exposed to risks is not regarded as necessarily contrary to their interests. It depends on whether the risk is realized in an actual harm.

It is one thing to expose a consenting adult to risks to save the health or life of an identified and present other, particularly when the two individuals are first degree relatives. It is another thing, or seems to many to be another thing, to expose consenting individuals to risks to help unknown and unidentified, and possibly future others. Almost no one objects to operating on a healthy, consenting adult to obtain a kidney that might save a present and ailing sibling, even though the operation poses some risk of serious harm and offers the donor no potential for clinical benefit. Attempts to obtain a kidney from a healthy, consenting adult and give it to an unidentified individual are met with greater ethical concern. The extent of the concern increases as the path from risk exposure to benefit becomes longer and more tenuous. Many clinical research studies expose participants to risks in order to collect generalizable information which, if combined with the results of other, as yet non-existent studies, may eventually benefit future patients through the identification of a new intervention, assuming the appropriate regulatory authorities approve it, some company or group chooses to manufacture it, and patients can afford to purchase it. The potential benefits of clinical research may thus be realized someday, but the risks and burdens are clear and present.

Increasingly, researchers obtain and store human biological samples for use in future research studies. These studies raise important questions regarding what might be called ‘contribution’ and ‘information’ risks. The question of contribution risks concerns the conditions under which it is acceptable to ask individuals to contribute to answering the scientific question posed by a given study (Jonas 1969). The fact that this question has been ignored by many commentators and regulations may trace to an overly narrow understanding of individuals’ interests. Individuals undoubtedly have an interest in avoiding the kinds of physical harms (pain, infection, loss of organ function) that they face in clinical research. This leaves the question of whether individuals’ interests are implicated, and whether they can be set back, by contributing to particular projects, activities, and goals.

Consider the routine practice of storing leftover samples of participants’ blood for use in future research projects. For the purposes of protecting participants’ interests, does it matter what goals these future studies attempt to advance? Can individuals be harmed if their samples are used to promote goals which conflict with their fundamental values? For example, are the interests of individuals who fundamentally oppose cloning set back if their samples are used in a study that successfully identifies improved methods to clone human beings?

The possibility of information risks garnered significant attention when investigators used DNA samples obtained from members of the Havasupai tribe in Arizona to study “theories of the tribe’s geographical origins.” The study’s conclusion that early members of the tribe had migrated from Asia across the Bering Strait contradicted the tribe’s own views that they originated in the Grand Canyon (Harmon 2010). Can learning the truth, in this case the origins of one’s tribal group, harm research participants?

Attempts to determine when it is acceptable to expose participants to risks for the benefit of others, including future others who do not yet exist, have been significantly influenced by the history of clinical research, by how it has been conducted and, in particular, by how it has been misconducted (Lederer 1995; Beecher 1966). Thus, to understand the current state of the ethics of clinical research, it is useful to know something of its past.

Modern clinical research may have begun on 20 May 1747, aboard the HMS Salisbury. James Lind, the ship’s surgeon, was concerned by the toll scurvy was taking on British sailors, and was skeptical of some of the interventions (cider, elixir of vitriol, vinegar, sea-water) being used at the time to treat it. Unlike other clinicians of his day, Lind did not simply assume that he was correct and treat his patients accordingly. He designed a study. He chose 12 sailors from among the 30 or so members of the Salisbury’s crew who were suffering from scurvy, and divided them into six groups of 2 sailors each. Lind assigned a different intervention to each of the groups, including two sailors turned research participants who received 2 oranges and 1 lemon each day. Within a week these two were sailors again; the others remained patients, and several were dying.

The ethics of clinical research begins by asking how we should think about the fate of these latter sailors. Did Lind treat them appropriately? Do they have a moral claim against Lind? It is widely assumed that physicians should do what they think is best for the patient in front of them. Lind, despite being a physician, did not follow this maxim. He felt strongly that giving sea water to individuals with scurvy was a bad idea, but he gave sea water to 2 of the sailors in his study to test whether he, or others, were right. To put the fundamental concern raised by clinical research in its simplest form: Did Lind sacrifice these two sailors, patients under his care, for the benefit of others?

Lind’s study was perhaps the first modern clinical trial because it attempted to address one of the primary challenges facing those who set out to evaluate medical treatments: how does one show that any differences in the outcomes of the treatments under study are a result of the treatments themselves, and not a result of the patients who received them, or of other differences in the patients’ environment or diet? How could Lind be confident that the improvements in the two sailors were the result of the oranges and lemons, and not a result of the fact that he happened to give them to the two patients who occupied the most salutary rooms on the ship? Lind tried to address the challenge of confounding variables by beginning with patients who were as similar as possible. He carefully chose the 12 subjects for his experiment from a much larger pool of ailing sailors; he also tried to ensure that all 12 received the same rations each day, apart from the treatments provided as part of his study. It is also worth noting that Lind’s dramatic results were largely ignored for decades, leading to uncounted and unnecessary deaths, and highlighting the importance of combining clinical research with effective promulgation and implementation. The Royal Navy did not adopt citrus rations for another 50 years (Sutton 2003), at which point scurvy essentially disappeared from its ships.

Lind’s experiments, despite controlling for a number of factors, did not exclude the possibility that his own choices of which sailors got which treatment influenced the results. More recent experiments, including the first modern randomized controlled trial, of streptomycin for tuberculosis in 1948 (D’Arcy Hart 1999), address this concern by assigning treatments to patients using a random selection process. By randomly assigning patients to treatment groups these studies ushered in the modern era of controlled clinical trials. And, by taking the choice of which treatment a given patient receives out of the hands of the treating clinician, these trials underscore and, some argue, exacerbate the ethical concerns raised by clinical research (Hellman and Hellman 1991). A foundational principle of clinical medicine is the importance of individual judgment. A physician who decides which treatments her patients receive by flipping a coin is guilty of malpractice. A clinical investigator who relies on the same methods receives awards and gets published in elite journals. One might conclude that the sacrifice of the interests of some, often sick, patients for the benefit of future patients is essentially mandated by the scientific method (Miller & Weijer 2006; Rothman 2000). The history of clinical research seems to provide tragic support for this view.
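The logic of taking treatment assignment out of the clinician’s hands can be made concrete with a short sketch. The code below is purely illustrative (the function name, the block scheme, and the arm labels are our own, not drawn from any cited trial): it assigns participants to study arms in shuffled blocks, so that group sizes stay balanced while no investigator chooses who receives what.

```python
import random

def block_randomize(participant_ids, arms, seed=None):
    """Assign each participant to an arm using block randomization:
    each block contains every arm exactly once, in random order,
    so allocation is balanced but removed from clinical judgment."""
    rng = random.Random(seed)  # seed is optional; fixed here only for a reproducible sketch
    assignments = {}
    block = []
    for pid in participant_ids:
        if not block:
            block = list(arms)
            rng.shuffle(block)  # new randomly ordered block
        assignments[pid] = block.pop()
    return assignments

# Example: twelve participants and six arms, echoing the scale of Lind's
# comparison, but with assignment by chance rather than by the investigator.
arms = ("cider", "elixir of vitriol", "vinegar", "sea-water", "citrus", "paste")
groups = block_randomize(range(12), arms, seed=42)
```

Because the blocks are consumed whole, twelve participants across six arms yields exactly two per arm, mirroring Lind’s balanced groups while replacing his personal choices with a random process.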

The history of clinical research is littered with abuses. Indeed, one account maintains that the history of pediatric research is “largely one of child abuse” (Lederer and Grodin 1994, 19; also see Lederer 2003). This history has had a significant influence on how research ethicists understand the concerns raised by clinical research and on how policy makers attempt to address them. In particular, policy makers have responded to previous abuses by developing guidelines intended to prevent their recurrence.

The most influential abuses in this regard were the horrific experiments conducted by Nazi physicians during World War II (abuses perpetrated by Japanese physicians were also horrific, but have received significantly less attention). The response to these abuses led to the Nuremberg Code (Grodin & Annas 1996; Shuster 1997), which is frequently regarded as the first set of formal guidelines for clinical research, an ironic claim on two counts. First, there is some debate over whether the Nuremberg Code was intended to apply generally to clinical research or whether, as a legal ruling in a specific trial, it was intended to address only the cases before the court (Katz 1996). Second, the Germans themselves had developed systematic research guidelines as early as 1931 (Vollmann & Winau 1996). These guidelines were still legally in force at the time of the Nazi atrocities and clearly prohibited a great deal of what the Nazi doctors did.

Wide consensus had developed by the end of the 1950s that the Nuremberg Code was inadequate as a framework for the ethics of clinical research. Specifically, the Nuremberg Code did not include a requirement that clinical research receive independent ethics review and approval. In addition, the first and longest principle in the Nuremberg Code states that informed consent is “essential” to ethical clinical research (Nuremberg Military Tribunal 1947). This requirement provides a powerful safeguard against the abuse of research participants. It also appears to preclude clinical research with individuals who cannot consent.

One could simply insist that the informed consent of participants is necessary to ethical clinical research and accept the opportunity costs thus incurred. Representatives of the World Medical Association, who hoped to avoid these costs, began meeting in the early 1960s to develop guidelines, which would become the Declaration of Helsinki, to address the perceived shortcomings of the Nuremberg Code (Goodyear, Krleza-Jeric, and Lemmens 2007). They recognized that insisting on informed consent as a necessary condition for clinical research would preclude a good deal of research designed to find better ways to treat dementia and conditions affecting children, as well as research in emergency situations. Regarding consent as necessary precludes such research even when it poses only minimal risks or offers participants a compensating potential for important clinical benefit. The challenge, still facing us today, is to identify protections for research participants that are sufficient without being so strict as to preclude appropriate research designed to benefit the groups to which they belong.

The Declaration of Helsinki (World Medical Organization 1996) allows individuals who cannot consent to be enrolled in clinical research based on the permission of the participant’s representative. The U.S. federal regulations governing clinical research take a similar approach. These regulations are not laws in the strict sense of being passed by Congress and applying to all research conducted on U.S. soil. Instead, the regulations are administrative laws which effectively attach to clinical research at its beginning and its end: its funding and its use to support applications for regulatory approval. Research conducted using U.S. federal monies, for instance, research funded by the NIH, or research involving NIH researchers, must follow the U.S. regulations (Department of Health and Human Services 2005). Research that is included as part of an application for approval from the U.S. FDA also must have been conducted according to FDA regulations which, with a few exceptions, are essentially the same. Although many countries now have their own national regulations (Brody 1998), the U.S. regulations continue to exert enormous influence around the world because so much clinical research is conducted using U.S. federal money and U.S. federal investigators, and the developers of medical treatments often want to obtain approval for the U.S. market.

The abuses perpetrated as part of the infamous Tuskegee syphilis study were made public in 1972, 40 years after the study was initiated. The resulting outcry led to the formation of the U.S. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which was charged with evaluating the ethics of clinical research with humans and developing recommendations for appropriate safeguards. These deliberations resulted in a series of recommendations for the conduct of clinical research, which became the framework for the existing U.S. regulations. The U.S. regulations, like many regulations, place no clear limits on the risks to which competent and consenting adults may be exposed. In contrast, strict limits are placed on the level of research risks to which those unable to consent may be exposed, particularly children. In the case of pediatric research, the standard process for review and approval is limited to studies that offer a ‘prospect of direct benefit’ and research that poses minimal risk or a minor increase over minimal risk. Studies that cannot be approved in one of these categories must be reviewed by an expert panel and approved by a high government official. While this fourth category offers important flexibility, it implies that, at least in principle, the U.S. regulations do not mandate a ceiling on the risks to which pediatric research participants may be exposed for the benefit of others. This reinforces the importance of considering how we might justify exposing participants to research risks, both minimal and greater than minimal, for the benefit of others.

Lind’s experiments on scurvy exemplify the fact that clinical research is often conducted by clinicians and often is conducted on patients. Many commentators have thus assumed that clinical research should be governed by the ethics of clinical care, and that the methods of research should not diverge from the methods that are acceptable in clinical care. On this approach, participants should not be denied any beneficial treatments available in the clinical setting, and they should not be exposed to any risks not present in the clinical setting.

Some proponents (Rothman 2000) argue that this approach is implied by the kind of treatment that patients, understood as individuals who have a condition or illness needing treatment, are owed. Such individuals are owed treatment that promotes, or at least is consistent with, their medical interests. Others (Miller & Weijer 2006) argue that the norms of clinical research derive largely from the obligations that bear on clinicians. These commentators argue that it is unacceptable for a physician to participate in, or even support the participation of her patients in, a clinical trial unless that trial is consistent with the patients’ medical interests. To do less is to provide substandard medical treatment and to violate one’s obligations as a clinician.

The claim that the treatment of research participants should be consistent with the norms which govern clinical care has been applied most prominently to the ethics of randomized clinical trials (Hellman & Hellman 1991). Randomized trials determine which treatment a given research participant receives based on a random process, not based on clinical judgment of which treatment would be best for that patient. Lind assigned the different existing treatments for scurvy to the sailors in his study based not on what he thought was best for them, but based on what he thought would yield an effective comparative test. Lind gave each intervention to the same number of sailors, not because he thought that all the interventions had an equal chance of being effective; to the contrary, he was confident that several of the interventions were harmful, and this design was the best way to show it. Contemporary clinical researchers go even further, assigning participants to treatments randomly. Because this aspect of clinical research represents a clear departure from the practice of clinical medicine, it appears to sacrifice the interests of participants in order to collect valid data.

One of the most influential responses to this concern (Freedman 1987) argues that randomization is acceptable when the study in question satisfies what has come to be known as ‘clinical equipoise.’ Clinical equipoise obtains when, for the population of patients from which participants will be selected, the available clinical evidence does not favor one of the treatments being used over the others. In addition, it must be the case that there are no treatments available outside the trial that are better than those used in the trial. Satisfaction of these conditions seems to imply that the interests of research participants will not be undermined in the service of collecting scientific information. If the available data do not favor any of the treatments being used, randomizing participants seems as good a process as any other for choosing which treatment they receive.

Proponents of clinical equipoise as an ethical requirement for clinical research determine whether equipoise obtains not by appeal to the belief states of individual clinicians, but based on whether there is consensus among the community of experts regarding which treatment is best. Lind believed that sea water was ineffective for the treatment of scurvy. Yet, in the absence of agreement among the community of experts, this view essentially constituted an individual preference rather than a clinical norm. This suggests that it was acceptable for Lind to randomly assign sailors under his care to the prevailing treatments in order to test, in essence, whose preferred treatment was the best. In this way, the existence of uncertainty within the community of experts seems to offer a way to reconcile the methods of clinical research with the norms of clinical medicine.

Critics respond that even when clinical equipoise obtains for the population of patients, the specific circumstances of individual patients within that population might imply that one of the treatments under investigation is better for them (Gifford 2007). A specific patient may have reduced liver function which places her at greater risk of harm if she receives a treatment metabolized by the liver. And some patients may have personal preferences which incline them toward one treatment over another (e.g., they may prefer a one-time riskier procedure to multiple lower-risk procedures which pose the same collective risk). Current debate focuses on whether randomized clinical trials can take these possibilities into account in a way that is consistent with the norms of clinical medicine.

Even if the existence of clinical equipoise can justify some randomized trials, a significant problem remains: many studies and procedures which are crucial to the identification and development of improved methods for protecting and advancing health and well-being are inconsistent with participants’ medical interests. This concern arises for many phase 1 studies, which offer essentially no chance for medical benefit and pose at least some risks, and to that extent are inconsistent with the participants’ medical interests.

Phase 3 studies which randomize participants to a potential new treatment or existing standard treatment, and satisfy clinical equipoise, typically include purely research procedures, such as additional blood draws, to evaluate the drugs being tested. Enrollment in these studies may be consistent with participants’ medical interests in the sense that the overall risk-benefit ratio is at least as favorable as the available alternatives. Yet, evaluation of the overall risk-benefit profile of the study masks the fact that it includes individual procedures which are contrary to participants’ medical interests, and contrary to the norms of clinical medicine.

The attempt to protect research participants by appeal to the obligations clinicians have to promote the medical interests of their patients also seems to leave healthy volunteers unprotected. Alternatively, proponents might characterize this position in terms of clinicians’ obligations to others in general: clinicians should not perform procedures on others unless doing so promotes the individual’s clinical interests. This approach seems to preclude essentially all research with healthy volunteers. For example, many phase 1 studies are conducted in healthy volunteers to determine a safe dose of the drug under study. These studies, vital to drug development, are inconsistent with the principle that clinicians should expose individuals to risks only when doing so is consistent with their clinical interests. It follows that appeal to clinical equipoise alone cannot render clinical research consistent with the norms of clinical practice.

Commentators sometimes attempt to justify net-risk procedures included within studies, and studies that pose net risks overall, by distinguishing between ‘therapeutic’ and ‘non-therapeutic’ research (Miller and Weijer 2006). The claim here is that the demand of consistency with participants’ medical interests applies only to therapeutic research; non-therapeutic research studies and procedures may diverge from these norms to a certain extent, provided participants’ medical interests are not significantly compromised. The distinction between therapeutic and non-therapeutic research is sometimes based on the design of the studies in question, and sometimes based on the intentions of the investigators. Studies designed to benefit participants, or conducted by investigators who intend to benefit them, count as therapeutic research. Studies designed to collect generalizable knowledge, or in which the investigators intend to do so, constitute non-therapeutic research.

The problem with the distinction between therapeutic and non-therapeutic research so defined is that research itself often is defined as a practice designed to collect generalizable knowledge and conducted by investigators who intend to achieve this end (Levine 1988). On this definition, all research qualifies as non-therapeutic. Conversely, most investigators intend to benefit their participants in some way or other. Perhaps they design the study in a way that provides participants with clinically useful findings, or they provide minor care not required for research purposes, or referrals to colleagues. Even if proponents can make good on the distinction between therapeutic and non-therapeutic research in theory, these practices appear to render it irrelevant to the practice of clinical research. More importantly, it is not clear why investigators’ responsibilities to patients, or patients’ claims on investigators, should vary as a function of this distinction. Why think that investigators are allowed to expose patients to some risks for the benefit of others, but only in the context of research that is not designed to benefit the participants? To apply this proposed resolution to pediatric research, why might it be acceptable to expose infants to risks for the benefit of others, but only in the context of studies which offer the infants no chance for personal benefit?

To take one possibility, it is not clear that this view can be defended by appeal to physicians’ role responsibilities. A prima facie plausible view holds that physicians’ role responsibilities apply to all encounters between physicians and patients who need medical treatment. This view would imply that physicians may not compromise patients’ medical interests when conducting therapeutic studies, but also seems to prohibit non-therapeutic research procedures with patients. Alternatively, one might argue that physicians’ role responsibilities apply only in the context of clinical care and so do not apply in the context of clinical research at all. This articulation yields a more plausible view, but does not support the use of the therapeutic/ non-therapeutic distinction. It provides no reason to think that physicians’ obligations differ based on the type of research in question.

Critics argue that these problems highlight the fundamental confusion that results when one attempts to evaluate clinical research based on norms appropriate to clinical medicine. They instead distinguish between the ethics of clinical research and the ethics of clinical care, arguing that it is inappropriate to assume that investigators are subject to the claims and obligations which apply to physicians, despite the fact that the individuals who conduct clinical research often are physicians (Miller and Brody 2007).

The claim that clinical research should satisfy the norms of clinical medicine has this strong virtue: it provides a clear method to protect individual research participants and reassure the public that they are being so protected. If research participants are treated consistent with their medical interests, we can be reasonably confident that improvements in clinical medicine will not be won at the expense of exploiting them. Most accounts of the ethics of clinical research now recognize the limitations of this approach and search for ways to ensure that research participants are not exposed to excessive risks without assuming that the claims of clinical medicine apply to clinical researchers (Emanuel, Wendler, and Grady 2000; CIOMS 2002). Dismissal of the distinction between therapeutic and non-therapeutic research thus yields an increase in both conceptual clarity and concern regarding the potential for abuse of research participants.

Clinical investigators, first trained as physicians taught to act in the best interests of the patient in front of them, often struggle with the process of exposing some patients to risky procedures for the benefit of others. It is one thing for philosophers to insist, no matter how accurately, that research participants are not patients and need not be treated according to the norms of clinical medicine. It is another thing for clinical researchers to regard research participants who are suffering from disease and illness as anything other than patients. These clinical instincts, while understandable and laudable, have the potential to obscure the true nature of clinical research, as investigators and participants alike try to convince themselves that clinical research involves nothing more than the provision of clinical care. One way to address this collective and often willful confusion would be to identify a justification for exposing research participants to net risks for the benefit of others.

It is often said that those working in bioethics are obsessed with the principle of respect for individual autonomy. Advocates of this view of bioethicists cite the high esteem accorded in the field to the requirement of obtaining individual informed consent and the frequent attempts to resolve bioethical challenges by citing its satisfaction. One might assume that this view within bioethics traces to implicit endorsement of a libertarian analysis according to which it is permissible for competent and informed individuals to do whatever they prefer, provided those with whom they interact are competent, informed and in agreement. In the words of Mill, investigators should be permitted to conduct research and expose participants to risks provided they obtain their “free, voluntary, and undeceived consent and participation” (On Liberty, page 11). Setting aside the question of whether this view accurately characterizes bioethics and bioethicists generally, it does not apply to the vast majority of work done on the ethics of clinical research. Almost no one in the field argues that it is permissible for investigators to conduct any research they want provided they obtain the free and informed consent of those they enroll.

Current research ethics does place significant weight on informed consent and many regulations and guidelines devote much of their length to articulating the requirement for informed consent. Yet, as exemplified by the response to the Nuremberg Code, almost no one regards informed consent as necessary and sufficient for ethical research. Most regulations and guidelines, beginning with the Declaration of Helsinki, first adopted in 1964 (World Medical Organization 1996), allow investigators to conduct research on humans only when it has been approved by an independent group charged with ensuring that the study is ethically acceptable. Most regulations further place limits on the types of research that independent ethics committees may approve. They must find that the research has important social value and the risks have been minimized, thereby restricting the types of research to which even competent adults may consent. Are these requirements justified, or are they inappropriate infringements on the free actions of competent individuals? The importance of answering this question goes beyond its relevance to debates over Libertarianism. Presumably, the requirements placed on clinical research have the effect of reducing to some extent the number of research studies that get conducted. The fact that at least some of the prohibited studies likely would have important social value, helping to identify better ways to promote health and well-being, provides a normative reason to eliminate the restrictions, unless there is some compelling reason to retain them.

The libertarian claim that valid informed consent is necessary and sufficient to justify exposing research participants to risks for the benefit of others seems to imply, consistent with the first principle of the Nuremberg Code, that research with individuals who cannot consent is unethical. This plausible and tempting claim commits one to the view that research with children, research in many emergency situations, and research with the demented elderly all are ethically impermissible. One could consistently maintain such a view but the social costs of adopting it would be great. It is estimated, for example, that approximately 70% of medications provided to children have not been tested in children, even for basic safety and efficacy (Roberts, Rodriquez, Murphy, Crescenzi 2003; Field & Behrman 2004; Caldwell, Murphy, Butow, and Craig 2004). Absent clinical research with children, pediatricians will be forced to continue to provide frequently inappropriate treatment, leading to significant harms that could have been avoided by pursuing clinical research to identify better approaches.

One response would be to argue that the libertarian analysis is not intended as an analysis of the conditions under which clinical research in general is acceptable. Instead, the claim might be that it provides an analysis of the conditions under which it is acceptable to conduct clinical research with competent adults: informed consent is necessary and sufficient for enrolling competent adults in research. While this view does not imply that research with participants who cannot consent is impermissible, it faces the not insignificant challenge of providing an account of why such research might be acceptable.

Bracketing the question of individuals who cannot consent, many of the limitations on clinical research apply to research with competent adults. How might these limitations be justified? One approach would be to essentially grant the libertarian analysis on theoretical grounds, but then argue that the conditions for its implementation are rarely realized in practice. In particular, there are good reasons, and significant empirical data, to question how often clinical research actually involves participants who are sufficiently informed to provide valid consent. Even otherwise competent adults often fail to understand clinical research sufficiently to make their own informed decisions regarding whether to enroll (Flory and Emanuel 2004).

To consider an example which is much discussed in the research ethics literature, it is commonly assumed that valid consent for randomized clinical trials requires individuals to understand randomization. It requires individuals to understand that the treatment they will receive, if they enroll in the study, will be determined by a process which does not take into account which of the treatments is better for them (Kupst 2003). There is an impressive wealth of data suggesting that many, perhaps most, individuals who participate in clinical research do not understand this (Snowden 1997; Featherstone and Donovan 2002; Appelbaum 2004). The data also suggest that these failures of understanding often are resistant to educational interventions.

With this in mind, one might regard the limitations on research with competent adults as betraying the paternalism embedded in most approaches to the ethics of clinical research (Miller and Wertheimer 2007). Although the charge of paternalism often carries with it some degree of condemnation, there is a strong history of what is regarded as appropriate paternalism in the context of clinical research. This too may have evolved from clinical medicine. Clinicians are charged with protecting and promoting the interests of the patient “in front of them”. Clinician researchers, who frequently begin their careers as clinicians, may regard themselves as similarly charged. Paternalism involves interfering with the liberty of agents for their own benefit (Feinberg 1986; see also the entry on paternalism). As the terms are used in the present debate, ‘soft’ paternalism involves interfering with the liberty of an individual in order to promote their interests on the grounds that the action being interfered with is the result of impaired decision-making: “A freedom-restricting intervention is based on soft paternalism only when the target’s decision-making is substantially impaired, when the agent lacks (or we have reason to suspect that he lacks) the information or capacity to protect his own interests—as when A prevents B from drinking the liquid in a glass because A knows it contains poison but B does not” (Miller & Wertheimer 2007). ‘Hard’ paternalism, in contrast, involves interfering with the liberty of an individual in order to promote their interests, despite the fact that the action being interfered with is the result of an informed and voluntary choice by a competent individual.

If the myriad restrictions on clinical research were justified on the basis of hard paternalism they would represent restrictions on individuals’ autonomous actions. However, the data on the extent to which otherwise competent adults fail to understand what they need to understand to provide valid consent suggests that the limitations can instead be justified on the grounds of soft paternalism. This suggests that while the restrictions may limit the liberty of adult research participants, they do not limit their autonomy. In this way, one may regard many of the regulations on clinical research not as inconsistent with the libertarian ideal, but instead as starting from that ideal and recognizing that otherwise competent adults often fail to attain it.

Even if most research participants had sufficient understanding to provide valid consent, it would not follow that there should be no limitations on research with competent adults. The conditions on what one individual may do to another are not exhausted by what the second individual consents to. Perhaps some individuals may choose for themselves to be treated with a lack of respect, even tortured. It does not follow that it is acceptable for me or you to treat them accordingly. As independent moral agents we need sufficient reason to believe that our actions, especially the ways in which we treat others, are appropriate, and this evaluation concerns, in typical cases, more than just the fact that the affected individuals consented to them.

Understood in this way, many of the limitations on the kinds of research to which competent adults may consent are not justified, or at least not solely justified, on paternalistic grounds. Instead, these limitations point to a crucial and often overlooked concern in research ethics. The regulations for clinical research often are characterized as protecting the participants of research from harm. Although this undoubtedly is an important and perhaps primary function of the regulations, they also have an important role in limiting the extent to which investigators harm research participants, and limiting the extent to which society supports and benefits from a process which inappropriately harms others. It is not just that research participants should not be exposed to risk of harm without compelling reason. Investigators should not expose them to such risks without compelling reason, and society should not support and benefit from their doing so.

This aspect of the ethics of clinical research has strong connections with the view that the obligations of clinicians restrict what sort of clinical research they may conduct. On that view, it is the fact that one is a physician and is obligated to promote the best interests of those with whom one interacts professionally which determines what one is allowed to do to participants. This connection highlights the pressing questions that arise once we attempt to move beyond the view that clinical research is subject to the norms of clinical medicine. There is a certain plausibility to the claim that a researcher is not acting as a clinician and so may not be subject to the obligations that bear on clinicians. Or perhaps we might say that the researcher/subject dyad is distinct from the physician/patient dyad and is not necessarily subject to the same norms. But, once we conclude that we need an account of the ethics of clinical research, distinct from the ethics of clinical care, we are left with the question of which limitations apply to what researchers may do to research participants.

It seems clear that researchers may not expose research participants to risks without sufficient justification, and also clear that this claim applies even to those who provide free and informed consent. The current challenge then is to develop an analysis of the conditions under which it is acceptable for investigators to expose participants to risks and determine to what extent current regulations need to be modified to reflect this analysis. To consider briefly the extent of this challenge, and to underscore and clarify the claim that the ethics of clinical research go beyond the protection of research participants to include the independent consideration of what constitutes appropriate behavior on the part of investigators, consider an example.

Physical and emotional abuse cause enormous suffering, and a good deal of research is designed to study various methods to reduce instances of abuse and also to help victims recover from being abused. Imagine that a team of investigators establishes a laboratory to promote the latter line of research. The investigators will enroll consenting adults and, to mimic the experience of extended periods of abuse in real life, they will abuse their participants emotionally and physically for a week. The abused participants will then be used in studies to evaluate the efficacy of different methods for helping victims to cope with the effects of abuse.

The proper response to this proposal is not to claim that the study is acceptable because the participants are competent and they gave informed consent. The proper response is to point out that, while these considerations are undoubtedly important, they do not establish that the study is ethically acceptable. One needs to consider many other things. Is the experiment sufficiently similar to real life abuse that its results will have external validity? Are there less risky ways to obtain the same results? Finally, even if these questions are answered in a way that supports the research, the question remains whether investigators may ethically treat their participants in this way. The fact that essentially everyone working in research ethics would hold that this study is unethical—investigators are not permitted to treat participants in this way—suggests that research ethics, both in terms of how it is practiced and possibly how it should be practiced, goes beyond respect for individual autonomy to include independent standards on investigator behavior. Defining those standards represents one of the more important challenges for research ethics.

As exemplified by Lind’s experiments on treatments for scurvy, clinical research studies were first conducted by clinicians wondering whether the methods they were using were effective. To answer this question, the clinicians altered the ways in which they treated their patients in order to yield information that would allow them to assess their methods. In this way, clinical research studies initially were part of, but an exception to, standard clinical practice. As a result, clinical research came to be seen as an essentially unique activity. And widespread recognition of clinical research’s scandals and abuses led to the view that this activity needed its own extensive regulations.

More recently, some commentators have come to question the view that clinical research is a unique human activity, as well as the regulations and guidelines which result from this view. In particular, it has been argued that this view has led to overly restrictive requirements on clinical research, requirements that hinder scientists’ ability to improve medical care for future patients, and also fail to respect the liberty of potential research participants. This view is often described in terms of the claim that many regulations and guidelines for clinical research are based on an unjustified ‘research exceptionalism’ (Wertheimer 2010).

The central ethical concern raised by clinical research involves the practice of exposing participants to risks for the benefit of others. Yet, as noted previously, we are constantly exposing individuals to risks for our benefit and the benefit of others. When you drive to the store, you expose your neighbors to some increased risk of pollution for the benefits you derive from shopping; speeding ambulances expose pedestrians to risks for the benefit of the patients they carry; factories expose their workers to risks for the benefit of their customers; charities expose volunteers to risks for the benefit of recipients. Despite this similarity, clinical research is widely regarded as ethically problematic and is subject to significantly greater regulation, review, and oversight (Wilson and Hunter 2010). Almost no one regards driving, ambulances, charities, or factories as inherently problematic. Even those who are not great supporters of a given charity do not argue that it treats its volunteers as guinea pigs, despite exposing them to risks for the benefit of others. And no one argues that charitable activities should satisfy the requirements that are routinely applied to clinical research, such as the requirements for independent review and written consent based on an exhaustive description of the risks and potential benefits of the activity, its purpose, duration, scope, and procedures.

Given that many activities expose some to risks for the benefit of others, yet are not subject to such extensive regulation, some commentators conclude that many of the requirements for clinical research are unjustified (Sachs 2010; Stewart et al. 2008; Sullivan 2008). This work is based on the assumption that, when it comes to regulation and ethical analysis, we should treat clinical research as we treat other activities in daily life which involve exposing some to risks for the benefit of others. This assumption leads to a straightforward solution to the central ethical problem posed by clinical research.

Exposing factory workers to risks for the benefit of others is deemed ethically acceptable when they agree to do the work and are paid a fair wage. The solution suggested for the ethical concern of non-beneficial research is to obtain consent and pay research participants a fair wage for their efforts. This view is much less restrictive than current regulations for clinical research, but seems to be less permissive than a libertarian analysis. The latter difference is evident in claims that research studies should treat participants fairly and not exploit them, even if individuals consent to being so treated.

The gap between this approach and the traditional view of research ethics is evident in the fact that advocates of the traditional view tend to regard payment of research participants as exacerbating rather than resolving its ethical concerns, raising, among others, worries of undue inducement and commodification. Those who are concerned about research exceptionalism, in contrast, tend to regard payment as it is regarded in most other contexts in daily life: some is good and more is better.

The claims of research exceptionalism have led to valuable discussion of the extent to which clinical research differs from other activities which pose risks to participants for the benefit of others and whether any of the differences justify the extensive regulations and guidelines standardly applied to clinical research. Proponents of research exceptionalism who regard many of the existing regulations as unjustified face the challenge of articulating an appropriate set of regulations for clinical research. While comparisons to factory work provide a useful lens for thinking about the ethics of clinical research, it is not immediately obvious what positive recommendations follow from this perspective. After all, it is not as if there is general consensus regarding the regulations to which industry should be subject. Some endorse minimum wage laws; others oppose them. There are further arguments over whether workers should be able to unionize; whether governments should set safety standards for industry; whether there should be rules protecting workers against discrimination.

A few commentators (Caplan 1984; Harris 2005; Heyd 1996) try to justify exposing research participants to risks for the benefit of others by citing an obligation to participate in clinical research. At the least, all individuals who have access to medical care have benefited from the efforts of previous research participants in the form of effective vaccines and better medical treatments. One might try to argue that these benefits obligate us to participate in clinical research when it is our turn.

Current participation in clinical research typically benefits future patients. However, if we incur an obligation for the benefits that are due to previous research studies, we presumably are obligated to the patients who participated in those studies, an obligation we cannot discharge by participating in current studies. This approach also does not provide a way to justify the very first clinical trials, such as Lind’s, which of necessity enrolled participants who had never benefited from previous clinical research.

Alternatively, one might argue that the obligation to participate does not trace to the benefits we receive from the efforts of previous research participants. Rather, the obligation is to the overall social system of which clinical research is a part (Brock 1994). For example, one might argue that individuals acquire this obligation as the result of being raised in the context of a cooperative scheme or society. We are obligated to do our part because of the many benefits we have enjoyed as a result of living within such a scheme.

The first challenge for this view is to explain why the mere enjoyment of benefits, without some prospective agreement to respond in kind, obligates individuals to help others. Presumably, your doing a nice thing for me yesterday, without my knowledge or invitation, does not obligate me to do you a good turn today. This concern seems even greater with respect to pediatric research. Children certainly benefit from previous research studies, but typically do so unknowingly and often over vigorous opposition. The example of pediatric research makes the further point that justification of clinical research on straightforward contractualist grounds will be difficult at best. Contract theories have difficulties with those groups, such as children, who do not accept in any meaningful way the benefits of the social system under which they live (Gauthier 1990).

In a Rawlsian vein, one might try to establish an obligation to participate in clinical research based on the choices individuals would make regarding the structure of society from a position of ignorance regarding their own place within that society, from behind a veil of ignorance (Rawls 1999). To make this argument, one would have to modify the Rawlsian argument in several respects. The knowledge that one is currently living could well bias one’s decision against the conduct of clinical research. Those who know they are alive at the time the decision is being made have already reaped many of the benefits they will receive from the conduct of clinical research.

To avoid these biases, we might stretch the veil of ignorance to obscure the generation to which one belongs—past, present or future (Brock 1994). Under a veil of ignorance so stretched, individuals might choose to participate in clinical research as long as the benefits of the practice exceed its overall burdens. One could then argue that justice as fairness gives all individuals an obligation to participate in clinical research when their turn comes. This approach seems to have the advantage of explaining why we can expose even children to some risks for the benefit of others, and why parents can give permission for their children to participate in such research. This argument also seems to imply not simply that clinical research is acceptable, but that, in a range of cases, individuals have an obligation to participate in it. It implies that adults whose turn has come are obligated to participate in clinical research, although for practical reasons we might refrain from forcing them to do so.

This justification for clinical research faces several challenges. First, Rawlsian arguments typically are used to determine the basic structure of society, that is, to determine a fair arrangement of the basic institutions within the society (Rawls 1999). If the structure of society meets these basic conditions, members of the society cannot argue that the resulting distribution of benefits and burdens is unfair. Yet, even when the structure of society meets the conditions for fairness, it does not follow that individuals are obligated to participate in the society so structured. Competent adults can decide to leave a society that meets these conditions rather than enjoy its benefits (whether they have any better places to go is another question). The right of exit suggests that the fairness of the system does not generate an obligation to participate, but rather defends the system against those who would argue that it is unfair to some participants relative to others. At most, then, the present argument can show that it is not unfair to enroll a given individual in a research study, and that enrollment is a reasonable arrangement for all individuals, including those who are unable to consent.

Second, it is important to ask on what grounds individuals behind the veil of ignorance make their decisions. In particular, are these decisions constrained or guided by moral considerations (Dworkin 1989; Stark 2000)? It seems plausible to think that they would be. After all, we are seeking the ethical approach or policy with respect to clinical research. The problem, then, is that the answer we get in this case may depend significantly on which ethical constraints are built into the system, rendering the approach question-begging. Most importantly, we are considering whether it is ethical to expose individuals who cannot consent to risks for the benefit of others. If it is not, then it seems that this should be a limitation on the choices individuals can make from behind the veil of ignorance, in which case appeal to those choices will not be able to justify pediatric research or research with incompetent adults. And if this research is ethical, it is unclear why we need this mechanism to justify it.

Proponents might avoid this dilemma by assuming that individuals behind the veil of ignorance will make decisions based purely on self-interest, unconstrained by moral limits or considerations. Presumably, purely self-interested deliberators could endorse many different systems. In particular, the system that produces the greatest amount of benefits overall may well be one that we regard as unethical. Many endorse the view that clinical research studies which offer no potential benefit to participants and pose a high chance of serious risk, such as death, are unethical, independent of the magnitude of the social value to be gained. For example, almost all research ethicists would regard as unethical a study which intentionally infects a few participants with HIV, even when the study offers the potential to identify a cure for AIDS. Yet, individuals behind the veil of ignorance who make decisions based solely on self-interest might well allow this study on the grounds that it offers a positive cost-benefit ratio overall: the high risks to a few participants are clearly outweighed by the potential to save the lives of millions.
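To see why purely self-interested deliberators might permit such a study, consider a back-of-the-envelope calculation. Every figure below is hypothetical, chosen only to illustrate the structure of the reasoning, not to describe any actual disease or study:

```python
# All numbers are hypothetical, for illustration only.
population = 100_000_000                  # people choosing behind the veil
study_participants = 10                   # participants deliberately infected
infections_prevented_by_cure = 1_000_000  # future cases a cure would avert

# Chance, for any given chooser, of turning out to be one of the
# deliberately infected participants if the study is run:
p_harmed_if_study = study_participants / population

# Chance of contracting the disease (no cure ever found) if the
# study is never run:
p_harmed_if_no_study = infections_prevented_by_cure / population

# Self-interest alone favors whichever option minimizes one's own risk.
print(p_harmed_if_study < p_harmed_if_no_study)  # True
```

On these numbers a self-interested chooser prefers the study by a factor of 100,000, which is precisely the worry: the calculation is indifferent to how the few participants come to be harmed.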

The question here is not whether a reasonable person would choose to make those who are badly off even worse off in order to elevate the status of those more privileged. Rather, both options involve some individuals being in unfortunate circumstances, namely, infected with HIV. The difference is that the one option (not conducting the study) involves many more individuals becoming infected over time, whereas the other option involves significantly fewer individuals being infected, but some as the result of being deliberately infected in the process of identifying a cure. Since the least desirable circumstances (being infected with HIV) are the same in both cases, the reasonable choice, even if one endorses the maximin strategy, seems to be whichever option reduces the total number of individuals who are in those circumstances. This reveals that, in the present case at least, the Rawlsian approach seems not to take into account the way in which individuals end up in the positions they occupy.

Limits on risks are a central part of almost all current research regulations and guidelines. With respect to those who can consent, there is broad, if largely implicit, agreement that the risks should not be too high (as noted earlier, some argue that there should not be any net risks to even competent adults in the context of so-called therapeutic research). However, there is no consensus regarding how to determine which risks are acceptable in this context. With respect to those who cannot consent, many commentators argue that clinical research is acceptable when the net risks are very low. The challenge, currently faced by many in clinical research, is to identify a standard for what constitutes a sufficiently low risk in this context, and to find a reliable way to implement it. An interesting and important question in this regard is whether the level of acceptable risks varies depending on the particular class of individuals who cannot consent. Is the level of acceptable risks the same for individuals who were once competent, such as previously competent adults with Alzheimer disease, individuals who are not now but are expected to become competent, such as healthy children, and individuals who are not now and likely never will be competent, such as individuals born with severe cognitive disabilities?

Some argue that the risks of clinical research qualify as sufficiently low when they are ‘negligible’, understood as risks that do not pose any chance of serious harm (Nicholson 1986). Researchers who ask children a few questions for research purposes may expose them to risks no more worrisome than the chance of being mildly upset for a few minutes. Exposing participants to a risk of minor harm for the benefit of others does not seem to raise ethical concern. Or one might argue that whatever concerns it raises are too minor to merit serious attention. Despite the plausibility of these views, very few studies satisfy the negligible risk standard. Even routine procedures that are widely accepted in pediatric research, such as single blood draws, pose some, typically very low, risk of more than negligible harm.

Others (Kopelman 2000; Resnik 2005) define risks as sufficiently low or ‘minimal’ when they do not exceed the risks individuals face during routine medical examinations. This standard provides a clear and quantifiable threshold for acceptable risks. Yet, the risks of routine medical procedures for healthy individuals are so low that this standard seems to prohibit intuitively acceptable research. This approach faces the additional problem that, as the techniques of clinical medicine become safer and less invasive, increasing numbers of procedures used in clinical research would move from acceptable to unacceptable. And, at a theoretical level, one might wonder why we should think that the risks we currently happen to accept in the context of clinical care for healthy children should define the level of risk that is acceptable in clinical research. Why think that the ethical acceptability of a blood draw in pediatric research depends on whether clinicians still use blood draws as part of clinical screening for healthy children?

Many guidelines (U.S. Department of Health and Human Services 2005; Australian National Health and Medical Research Council 1999) and commentators take the view that clinical research is ethically acceptable as long as the net risks do not exceed the risks individuals face in daily life. Many of those involved in clinical research implicitly assume that this minimal risk standard is essentially equivalent to the negligible risk standard. If the risks of research are no greater than the risks individuals face in daily life, then the research does not pose risk of any serious harm. As an attitude toward many of the risks we face in daily life, this view makes sense. We could not get through the day if we were conscious of all the risks we face. Crossing the street poses more risks than one can catalog, much less process readily. When these risks are sufficiently low, psychologically healthy individuals place them in the cognitive background, ignoring them unless the circumstances provide reason for special concern (e.g. one hears a siren, or sees a large gap in the sidewalk).

Paul Ramsey reports that members of the US National Commission often used the terms ‘minimal’ and ‘negligible’ in a way which seemed to imply that they were willing to allow minimal risk research, even with children, on the grounds that it poses no chance of serious harm (Ramsey 1978). The members then went on to argue that an additional ethical requirement for such research is a guarantee of compensation for any serious research injuries. This approach to minimal risk pediatric research highlights nicely the somewhat confused attitudes we often have toward risks, especially those of daily life.

We go about our daily lives as though harms with very low probability are not going to occur, effectively treating low-probability harms as zero-probability events. To this extent, we are not Bayesians about the risks of daily life. We treat some possible harms as impossible for the purposes of getting through the day. This attitude, crucial to living our lives, does not imply that there are no serious risks in daily life. The fact that our attitude toward the risks of everyday life is justified by its ability to help us to get through the day undermines its ability to provide an ethical justification for exposing research participants to the same risks in the context of non-beneficial research (Ross and Nelson 2006).

First, the extent to which we ignore the risks of daily life is not a fully rational process. In many cases, our attitude regarding risks is a function of features of the situation that are not correlated directly with the risk level, such as our perceived level of control and our familiarity with the activity (Tversky and Kahneman 1974, 1981; Slovic 1987; Weinstein 1989). Second, to the extent that the process of ignoring some risks is rational, we are involved in a process of determining which risks are worth paying attention to. Some risks are so low that they are not worth paying attention to: considering them would cost us more than the expected value of being aware of them in the first place.

To some extent, then, our attitudes in this regard are based on a rational cost/benefit analysis. To that extent, these attitudes do not provide an ethical argument for exposing research participants to risks for the benefit of others. The fact that the costs to an individual of paying attention to a given risk in daily life are greater than the benefits to that individual does not seem to have any relevance for what risks we may expose them to for the benefit of others. Finally, there is a chance of serious harm from many of the activities of daily life. This reveals that the ‘risks of daily life’ standard does not preclude the chance of some participants experiencing serious harm. Indeed, one could put the point in a much stronger way. Probabilities being what they are, the risks of daily life standard implies that if we conduct enough minimal risk research eventually a few participants will die and scores will suffer permanent disability.
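The force of this last point can be sketched with a toy calculation. The per-enrollment risk figure below is invented purely for illustration; nothing turns on its exact value:

```python
# Hypothetical: suppose a 'risks of daily life' standard permits a
# 1-in-100,000 chance of serious harm per research enrollment.
p_serious_harm = 1 / 100_000
enrollments = 2_000_000  # enrollments across many minimal-risk studies

# Expected number of participants seriously harmed:
expected_harms = p_serious_harm * enrollments

# Probability that at least one participant is seriously harmed:
p_at_least_one = 1 - (1 - p_serious_harm) ** enrollments

print(f"{expected_harms:.1f}")   # 20.0
print(p_at_least_one > 0.999)    # True
```

A risk each of us rightly ignores as an individual becomes a near-certainty of some serious harms once it is imposed across millions of enrollments, which is the sense in which the standard does not preclude serious harm.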

As suggested above, a more plausible line of argument would be to defend clinical research that poses minimal risk on the grounds that it does not increase the risks to which participants are exposed. It seems plausible to assume that at any given time an individual will either be participating in research or involved in the activities of daily life. But, by assumption, the risks of the two activities are essentially equivalent, implying that enrollment in the study, as opposed to allowing the participant to continue to participate in the activities of daily life, does not increase the risks to which he is exposed.

The problem with this argument is that the risks of research often are additive rather than substitutive. For example, participation in a study might require the participant to drive to the clinic for a research visit. The present defense succeeds to the extent that this trip replaces another trip in the car, or some similarly risky activity in which the participant would have been otherwise involved. In practice, this often is not the case. The participant instead may simply put off the trip to the mall until after the research visit. In that case, the participant’s risk of serious injury from a car trip may be doubled as a result of her participation in research. Moreover, we accept many risks in daily life because the relevant activities offer those who pursue them a chance of personal benefit. We allow children to take the bus because we assume that the benefits of receiving an education justify the risks. The fact that we accept these risks given the potential benefits provides no reason to think that the same risks or even the same level of risk would be acceptable in the context of an activity which offers no chance of medical benefit. Finally, applied strictly, this justification seems to imply that investigators should evaluate what risks individuals would face if they did not enroll in the research, and enroll only those who would otherwise face similar or greater levels of risk.
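The difference between substituted and added risk is simple arithmetic. The per-trip risk figure below is hypothetical, used only to show why the two cases come apart:

```python
# Hypothetical per-trip probability of serious injury from driving.
p_injury_per_trip = 1e-7

# Baseline: the trip the participant would have made anyway (e.g. to
# the mall) had she not enrolled.
baseline = 1 * p_injury_per_trip

# Substitutive case: the research visit replaces the mall trip.
risk_substitutive = 1 * p_injury_per_trip

# Additive case: the mall trip is deferred, not replaced, so the
# participant makes two trips instead of one.
risk_additive = 2 * p_injury_per_trip

print(risk_substitutive == baseline)   # True: no added risk
print(risk_additive == 2 * baseline)   # True: driving risk doubles
```

The defense from equivalent risk succeeds only in the substitutive case; whenever research activities are layered on top of, rather than in place of, ordinary activities, the participant's total risk rises.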

In one of the most influential papers in the history of research ethics, Hans Jonas (1969) argues that the progress clinical research offers is normatively optional, whereas the need to protect individuals from the harms to which clinical research exposes them is mandatory. He writes:

… unless the present state is intolerable, the melioristic goal [of biomedical research] is in a sense gratuitous, and this is not only from the vantage point of the present. Our descendants have a right to be left an unplundered planet; they do not have a right to new miracle cures. We have sinned against them if, by our doing, we have destroyed their inheritance, not if by the time they come around arthritis has not yet been conquered (unless by sheer neglect). (Jonas 1969, 230–231)

Jonas’s view does not imply that clinical research is necessarily unethical, but the conditions on when it may be conducted are very strict. This argument may seem plausible to the extent that one regards, as Jonas does, the benefits of clinical research to be ones that make an acceptable state in life even better. The example of arthritis cited by Jonas illustrates this view. Curing arthritis, like curing dyspepsia, baldness, and the minor aches and pains of living and aging, may be nice, but may be thought to address no profound problem in our lives. If this were all that clinical research had to offer, we might be reluctant to accept many risks in order to achieve its goals. We should not, in particular, take much chance of wronging or exploiting individuals to realize these goals.

This argument makes sense to the extent that one regards the status quo as acceptable. Yet, without further argument, it is not clear why one should accept this view; it seems almost certain that those suffering from serious illness that might be addressed by future research will not accept it. Judgments regarding the present state of society concern very general considerations, and a determination that society overall is doing fairly well is consistent with many individuals suffering terrible diseases. Presumably, the suffering of these individuals provides some reason to conduct clinical research. In response, one might understand Jonas to be arguing that the present state of affairs involves sufficiently good medicine and adequately flourishing lives such that the needs which could now be addressed by additional clinical research are not of sufficient importance to justify the risks raised by conducting it. It might have been the case, at some point in the past, that life was sufficiently nasty, brutish and short to justify running the risk of exploiting research participants in the process of identifying ways to improve the human lot. But we have advanced, in part thanks to clinical research, well beyond that point. This reading need not interpret Jonas as ignoring the fact that there remain serious ills to be cured. Instead, he might be arguing that these ills, while real and unfortunate, are not of sufficient gravity, or perhaps prevalence, to justify the risks of conducting clinical research.

This view implicitly expands the ethical concerns raised by clinical research. We have been focusing on the importance of protecting individual research participants. However, Jonas assumes that clinical research also threatens society in some sense. There are at least two possibilities here. First, it might be thought that the conduct of unethical research reaches beyond individual investigators to taint society as a whole. This does not seem unreasonable given that clinical research typically is conducted in the name of and often for the benefit of society. Second, one might be concerned that allowing investigators to expose research participants to some risks for the benefit of others might put us on a slippery slope that ends with serious abuses throughout society.

An alternative reading would be to interpret Jonas as arguing from a version of the active-passive distinction. It is often claimed that there is a profound moral difference between actively causing harm versus merely allowing harm to occur, between killing someone versus allowing them to die, for example. Jonas often seems to appeal to this distinction when evaluating the ethics of clinical research. The idea is that conducting clinical research involves investigators actively exposing individuals to risks of harm and, when those harms are realized, it involves investigators actively harming them. The investigator who injects a participant with an experimental medication actively exposes the individual to risks for the benefit of others and actively harms, perhaps even kills, those who suffer harm as a result. And, to the extent that clinical research is conducted in the name of and for the benefit of society in general, one can say without too much difficulty that society is complicit in these harms. Not conducting clinical research, in contrast, involves our allowing individuals to be subject to diseases that we might otherwise have been able to avoid or cure. And this situation, albeit tragic and unfortunate, has the virtue of not involving clear moral wrongdoing.

The problem with at least this version of the argument is that the benefits of clinical research often involve finding safer ways to treat disease. The benefits of this type of clinical research, to the extent they are realized, involve clinicians being able to provide less harmful, less toxic medications to patients. Put differently, many types of clinical research offer the potential to identify medical treatments which harm patients less than current ones. This is not an idle goal. One study found that the incidence of serious adverse events from the appropriate use of clinical medications (i.e., excluding such things as errors in drug administration, noncompliance, overdose, and drug abuse) in hospitalized patients was 6.7%. The same study, using data from 1994, concludes that the approved and properly prescribed use of medications is likely the fifth leading cause of death in the US (Lazarou, Pomeranz, and Corey 1998).

These data suggest that the normative calculus is significantly more complicated than the present reading of Jonas suggests. The question is not whether it is permissible to risk harming some individuals in order to make other individuals slightly better off. Instead, we have to decide how to trade off the possibility of clinicians exposing patients to greater risks of harm (albeit with a still favorable risk-benefit ratio) in the process of treating them versus clinical researchers exposing participants to risks of harm in the process of trying to identify improved methods to treat others. This is not to say that there is no normative difference between these two activities, only that the difference is not accurately described as the difference between harming individuals versus improving their lot beyond some already acceptable status quo. It is not even a difference between harming some individuals versus allowing other individuals to suffer harms. The argument that needs to be made is that harming individuals in the process of conducting clinical research potentially involves a significant moral wrong not present when clinicians harm patients in the process of treating them.

The primary concern here is that, by exposing participants to risks of harm, the process of conducting clinical research involves the threat of exploitation of a particular kind. It runs the risk of investigators treating persons as things, devoid of any interests of their own. The worry here is not so much that investigators and participants enter together into the shared activity of clinical research with different, perhaps even conflicting goals. The concern is rather that, in the process of conducting clinical research, investigators treat participants as if they had no goals at all or, perhaps, that any goals they might have are normatively irrelevant.

Jonas argues that this concern can be addressed, and the process of experimenting on some to benefit others made ethically acceptable, only when the research participants share the goals of the research study. Ethically appropriate research, on Jonas’s view, is marked by “appropriation of the research purpose into the person’s own scheme of ends” (Jonas 1969, 236). Assuming that it is in one’s interests to achieve one’s goals, or at least one’s proper goals, it follows that, by participating in research, participants will be acting in their own interests, despite the fact that they are thereby being exposed to risky procedures performed to collect information to benefit others.

Jonas claims in some passages that research participants, at least those with an illness, can share the goals of a clinical research study only when they have the condition or illness under study (Jonas 1969). These passages reveal something of the account of human interests on which Jonas’s arguments rely. On standard preference-satisfaction accounts of human interests, what is in a given individual’s interests depends on what the individual happens to want or prefer, the goals the individual happens to endorse, or the goals the individual would endorse in some idealized state scrubbed clean of the delusions, misconceptions, and confusion which inform their actual preferences (Griffin 1986). On this view, participation in clinical research would promote an individual’s interests as long as she was well informed and wanted to participate. This would be so whether or not she had the condition being studied. Jonas’s view, in contrast, seems to be that there are objective conditions under which individuals can share the goals of a given research study. They can endorse the cause of curing, or at least finding treatments for, Alzheimer disease only if they suffer from the disease themselves.

One possible objection would be to argue that there are many reasons why an individual might endorse the goals of a given study, apart from having the disease themselves. One might have family members with the disease, or co-religionists, or have adopted improved treatment of the disease as an important personal goal. The larger question is whether participants endorsing the goals of a clinical research study is a necessary condition on its acceptability. Recent commentators and guidelines rarely, if ever, endorse this condition, although at least some of them might be assuming that the requirement to obtain free and informed consent will ensure its satisfaction. It might be assumed, that is, that competent, informed, and free individuals will enroll in research only when they share the goals of the study in question.

Jonas was cognizant of the extent to which the normative concerns raised by clinical research are not exhausted by the risks to which participants are exposed, but also include the extent to which investigators, and by implication society, are the agents of their exposure to risks. For this reason, he recognized that the libertarian response is inadequate, even with respect to competent adults who truly understand. Finally, to the extent that Jonas’s claims rely on an objective account of human interests, one may wonder whether he adopts an overly restrictive one. Why should we think, on an objective account, that individuals will have an interest in contributing to the goals of a given study only when they have the disease it addresses? Moreover, although we will not pursue the point here, appeal to an objective account of human interests raises the possibility of justifying the process of exposing research participants to risks for the benefit of others on the grounds that contributing to valuable projects, including presumably some clinical research studies, promotes (most) individuals’ interests (Wendler 2010).

The fundamental ethical challenge posed by clinical research is whether it is acceptable to expose some to research risks for the benefit of others. In the standard formulation, the one we have been considering to this point, the benefits that others enjoy as the result of participants’ involvement in clinical research are medical and health benefits: better treatments for disease, better methods to prevent disease.

Industry-funded research introduces the potential for a very different sort of benefit and thereby potentially alters, in a fundamental way, the moral concerns raised by clinical research. Pharmaceutical companies typically focus on generating profit and increasing stock price and market share. Indeed, it is sometimes argued that corporations have an obligation to their shareholders to pursue increased market share and share price (Friedman 1970). This approach may well lead companies to pursue new medical treatments which have little or no potential to improve overall health and well-being (Huskamp 2006; Croghan and Pittman 2004). “Me-too” drugs are the classic example here. These are drugs identical in all clinically relevant respects to approved drugs already in use. The development of a me-too drug offers the potential to redistribute market share without increasing overall health and well-being.

There is considerable debate regarding how many me-too drugs there really are and what is required for a drug to qualify as effectively identical (Garattini 1997). For example, if the existing treatment needs to be taken with meals, but a new treatment need not, is that a clinically relevant advance? Bracketing these questions, a drug company may well be interested in a drug which clearly qualifies as a me-too drug. The company may be able, by relying on a savvy marketing department, to convince physicians to prescribe, and consumers to request, the new drug, thus increasing profit for the company without advancing health and well-being.

The majority of clinical research was once conducted by governmental agencies. For example, the US NIH is likely the largest governmental sponsor of clinical research in the world. However, its research budget has declined over the past 20 years (Mervis 2004, 2008), and it is estimated that a majority, perhaps a significant majority, of clinical research studies are now conducted by industry: “as recently as 1991 eighty per cent of industry-sponsored trials were conducted in academic health centers…Impatient with the slow pace of academic bureaucracies, pharmaceutical companies have moved trials to the private sector, where more than seventy per cent of them are now conducted” (Elliott 2008; Angell 2008; Miller and Brody 2005).

In addition to transforming the fundamental ethical challenge posed by clinical research, industry-sponsored research has the potential to transform the way that many of the specific ethical concerns are addressed within that context. For example, the possibility that investigators and funders may earn significant amounts of money from their participation in clinical research might, it is thought, warp their judgment in ways that conflict with appropriate protection of research participants (Fontanarosa, Flanagin, and DeAngelis 2005). When applied to investigators and funders, this concern calls into question the very significant percentage of research funded by, and often conducted by, for-profit organizations. Skeptics might wonder whether the goal of making money has any greater potential to influence judgment inappropriately than many other motivations that are widely accepted, even esteemed, in the context of clinical research, such as gaining tenure and fame, impressing one’s colleagues, or winning the Nobel Prize.

Financial conflicts of interest in clinical research point to a tension between relying on profits to motivate research versus insulating drug development and testing from the profit motive as a way of protecting research participants and future patients (Psaty and Kronmal 2008). Finally, if a company can make billions of dollars a year from a single drug, one wonders what constitutes an appropriate response to the participants who were vital to its development. On a standard definition, whether a given transaction is fair depends on the risks and burdens that each party to the transaction bears and the extent to which others benefit from the party’s participation in the transaction (see the entry on exploitation). A series of clinical research studies can result in a company earning tens of billions of dollars in profits. Recognizing that a fair level of benefit is a complex function of participants’ inputs compared to the inputs of others, and the extent to which third parties benefit from those inputs, it is difficult to see how one might fill in the details of this scenario to show that the typically minimal, or non-existent, compensation offered to research participants is fair.

At the same time, addressing this potential for exploitation by offering substantial payments to research participants who contribute to especially lucrative studies would introduce its own set of ethical concerns: is payment an appropriate response to the kind of contribution made by research participants; might payment constitute an undue inducement to participate; will payment undermine other participants’ altruistic motivations; to what extent does payment encourage research participants to provide misleading or false information to investigators in order to enroll and remain in research studies?

Like most early clinical research, James Lind’s experiments on treatments for scurvy took place in the clinical setting, with a physician evaluating different possible treatments in his patients. This practice eventually led to concern that it did not offer sufficient protection for patients, who were frequently unaware that they were involved in research. And this concern led to separating clinical research from clinical care and subjecting it to significantly more extensive regulation, including independent review and extensive informed consent requirements. This approach offered greater protections for research participants and likely led to more sophisticated clinical trials, as studies came to be conducted by dedicated researchers rather than by clinicians in their spare time.

This segregation of clinical research from clinical care has also had significant drawbacks. It makes clinical research much more expensive and much more difficult to conduct. Because studies are conducted in specialized environments, this approach has also undermined to some extent the relevance that the findings of clinical trials have for clinical care. For example, clinical trials assessing possible new treatments for hypertension are conducted in individuals who are aware that the trials exist and take the time to find out what is required to enroll. These studies frequently have trouble enrolling sufficient numbers of participants and take years to complete. At the same time, on the other side of the clinical research/clinical care divide, millions of patients with hypertension who might be eligible for a clinical trial are obtaining care from their doctors. Moreover, the countless data points generated by these encounters (what dose did the patient receive, did they experience any side effects, how long did the side effects last) end up gathering dust in patients’ medical records rather than being systematically collected and used to inform future practice.

In response, commentators have called for developing learning health care systems. According to one definition, a learning health care system is one in which “science, informatics, incentives, and culture are aligned for continuous improvement and innovation, with best practices seamlessly embedded in the care process, patients and families active participants in all elements, and new knowledge captured as an integral by-product of the care experience” (Committee on the Learning Health Care System in America 2013). The important point for present purposes is that these commentators are calling for the desegregation of clinical research and clinical care, with clinical research once again being conducted in the context of clinical care. This approach offers what seems a potentially appealing response to the central challenge of justifying the risks to which research participants are exposed. Put simply, this practice would be justified by the fact that the practice of exposing individuals to research risks is an essential component of a learning health care system which continuously evaluates methods of providing clinical care and passes the benefits of the improvements on to its members (Faden et al. 2013).

The segregated model of clinical research was designed to protect research participants from exploitation. The primary drawbacks are that it is inefficient and it raises concerns over free riders, allowing patients to realize the benefits of improved clinical care without having to accept any of the risks associated with its generation. Learning health care systems are intended to address the problem of inefficiency. Given that all the members of learning health care systems are possible research participants and also beneficiaries of research, they may, in the process, address concerns over free riders.

The ethical challenge learning health care systems face is whether it is possible to do away with the segregated model, along with its regulations and practices, without reintroducing the potential for participant exploitation. For example, should individuals in a learning health care system be told that their data might be used for research purposes? Should they be notified when it is? To what extent can patients in learning health care systems be exposed to added risks for the purposes of research? Should they be permitted to decline being so exposed? In the end, then, ongoing attempts to address the concerns raised by clinical research raise new ethical concerns and, thereby, offer opportunities for philosophers and others looking for interesting, not to mention practically important, issues in need of analysis and resolution.

  • Angell, M., 2008. “Industry sponsored clinical research: a broken system,” Journal of the American Medical Association , 80: 899–904.
  • Appelbaum, P.S., with C.W. Lidz and T. Grisso, 2004. “Therapeutic misconception in clinical research: frequency and risk factors,” IRB: Ethics and Human Research , 26: 1–8.
  • Australian Government, National Health and Medical Research Council, 1999. National statement on ethical conduct in research involving humans . Ch 2.1: 92. Commonwealth of Australia, 1999.
  • Beecher, H. K., 1966. “Ethics and clinical research,” N Engl J Med , 274: 1354–60.
  • Brock, D. W., 1994. “Ethical issues in exposing children to risks in research,” Chapter 3 (pp. 81–101) of Grodin and Glantz (eds.), Children as Research Subjects , New York: Oxford University Press.
  • Brody, B.A., 1998. The Ethics of Biomedical Research: An International Perspective , Oxford: Oxford University Press.
  • Caldwell, P.H.Y., with S.B. Murphy, P.H. Butow, and J.C. Craig, 2004. “Clinical trials in children,” Lancet , 364: 803–11.
  • Caplan, A., 1984. “Is there a duty to serve as a subject in biomedical research?” IRB: Ethics and Human Research , 6: 1–5.
  • Committee on the Learning Health Care System in America; Institute of Medicine; Smith M, Saunders R, Stuckhardt L, et al. (eds.), Best Care at Lower Cost: The Path to Continuously Learning Health Care in America , Washington, D.C.: National Academies Press; 2013 May 10.
  • Council for International Organizations of Medical Sciences, 2002. International ethical guidelines for biomedical research involving human subjects . Geneva: CIOMS.
  • Croghan, T.W., and P.M. Pittman, 2004. “The medicine cabinet: What’s in it, why, and can we change the contents?” Health Affairs , 23: 23–33.
  • D’Arcy Hart, P., 1999. “A change in scientific approach: From alternation to randomised allocation in clinical trials in the 1940s,” BMJ , 319: 572–3.
  • Department of Health and Human Services, 2005. Code of Federal Regulations : Title 45 (Public Welfare). Part 46: protection of human subjects (45 CFR 46). US Government Printing Office.
  • Dworkin, R., 1989, “The original position,” in Reading Rawls , Norman Daniels (ed.), Stanford: Stanford University Press, 16–53.
  • Dworkin, G., 2005. “Paternalism,” in Stanford Encyclopedia of Philosophy (Winter 2005 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/win2005/entries/paternalism/ >.
  • Elliott, C., 2008, “Guinea-pigging,” The New Yorker , January 7, 2008, p. 36.
  • Emanuel, E.J., with D. Wendler and C. Grady 2000. “What makes clinical research ethical?,” Journal of the American Medical Association , 283: 2701–11.
  • Faden, R.R., and T.L. Beauchamp, 1986. A History and Theory of Informed Consent , New York: Oxford University Press, pp. 200–232.
  • Faden, R.R., N.E. Kass, S.N. Goodman, P. Pronovost, S. Tunis, and T.L Beauchamp. An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics , Hastings Center Report 2013; Spec No: S16-27.
  • Featherstone, K., and J.L. Donovan, 2002. “Why don’t they just tell me straight, why allocate it? The struggle to make sense of participating in a randomised controlled trial,” Social Science and Medicine , 55: 709–19.
  • Feinberg, J., 1986. Harm to Self , Oxford: Oxford University Press, pp. 3–26.
  • Field, M.J., and R.E. Behrman, 2004. The Ethical Conduct of Clinical Research Involving Children , Washington DC: National Academies Press, Ch. 2.
  • Flory, J., and E. Emanuel, 2004. “Interventions to improve research participants’ understanding in informed consent for research: A systematic review,” Journal of the American Medical Association , 292: 1593–1601.
  • Fontanarosa, P.B., with A. Flanagin and C.D. DeAngelis, 2005. “Reporting conflicts of interest, financial aspects of research, and role of sponsors in funded studies,” Journal of the American Medical Association , 294: 110–11.
  • Freedman, B., 1987. “Equipoise and the ethics of clinical research,” The New England Journal of Medicine , 317: 141–45.
  • Friedman M., 1970, “The social responsibility of business is to increase its profits,” The New York Times Magazine , September 13, 1970.
  • Garattini S., 1997. “Are me-too drugs justified?,” J Nephrol , 10: 283–94.
  • Gauthier, D., 1990. Morals by Agreement , Oxford: Clarendon Press.
  • Gifford, F., 2007. “Pulling the plug on clinical equipoise: A critique of Miller and Weijer,” Kennedy Institute of Ethics Journal , 17: 203–26.
  • Goodyear, M.D., with K. Krleza-Jeric and T. Lemmens, 2007. “The Declaration of Helsinki,” BMJ , 335: 624–5.
  • Grady, C., 2005. “Payment of clinical research subjects,” J Clin Invest , 115: 1681–7.
  • Griffin, J., 1986. Well-being: Its Meaning, Measurement and Moral Importance , Oxford: Clarendon.
  • Grodin, M.A., and G.J. Annas, 1996. “Legacies of Nuremberg: Medical ethics and human rights,” Journal of the American Medical Association , 276: 1682–83.
  • Harmon, A., 2010. “Indian Tribe Wins Fight to Limit Research of Its DNA,” New York Times , 21 April 2010.
  • Harris, J., 2005. “Scientific research is a moral duty,” Journal of Medical Ethics , 31: 242–48.
  • Hayenhjelm, M., and J. Wolff, 2012. “The moral problem of risk impositions: A survey of the literature,” European Journal of Philosophy , 20 (Supplement S1): E26–E51.
  • Hellman, S., and D.S. Hellman, 1991. “Of mice but not men: Problems of the randomized clinical trial,” The New England Journal of Medicine , 324: 1585–89.
  • Heyd, D., 1996. “Experimentation on trial: Why should one take part in medical research?”, Jahrbuch fur Recht und Ethik [Annual Review of Law and Ethics] , 4: 189–204.
  • Huskamp, H.A., 2006. “Prices, profits, and innovation: Examining criticisms of new psychotropic drugs’ value,” Health Affairs , 25: 635–46.
  • Jonas, H., 1969. “Philosophical reflections on experimenting with human subjects”, Daedalus , 98: 219–247.
  • Katz, J., 1996. “The Nuremberg Code and the Nuremberg trial. A reappraisal,” Journal of the American Medical Association 276: 1662–6.
  • Kopelman, L.M., 2000. “Children as research subjects: A dilemma,” Journal of Medicine and Philosophy , 25: 745–64.
  • Kupst, M.J., with A.F. Patenaude, G.A. Walco, and C. Sterling, 2003. “Clinical trials in pediatric cancer: Parental perspectives on informed consent,” Journal of Pediatric Hematology and Oncology , 25: 787–90.
  • Lazarou, J., with B.H. Pomeranz and P.N. Corey, 1998. “Incidence of adverse drug reactions in hospitalized patients: A meta-analysis of prospective studies,” Journal of the American Medical Association ; 279: 1200–05.
  • Lederer, S.E., 1995. Subjected to Science: Human Experimentation in America before the Second World War . Baltimore: Johns Hopkins University Press.
  • –––, 2003, “Children as guinea pigs: Historical perspective,” Accountability in Research , 10(1): 1–16.
  • Lederer, S.E., and M.A. Grodin, “Historical overview: Pediatric experimentation,” in M.A. Grodin and L.H. Glantz (eds.), Children as Research Subjects: Science, Ethics and Law , New York: Oxford University Press, 1994.
  • Levine, R.J., 1988. Ethics and Regulation of Clinical Research . 2nd ed. New Haven, Conn: Yale University Press.
  • Macklin, R., 1981. “Due and undue inducements: On paying money to research subjects,” IRB: A Review of Human Subjects Research , 3: 1–6.
  • Mervis, J., 2004. “U.S. Science budget: Caught in a squeeze between tax cuts and military spending,” Science , 30: 587.
  • –––, 2008. “U.S. Budget: Promising year ends badly after fiscal showdown squeezes science,” Science , 319: 18–9.
  • Mill, John Stuart, 1869, On Liberty . Page reference to On Liberty and Other Writings , Stefan Collini (ed.), Cambridge, Cambridge University Press, 2005, 12th edition.
  • Miller, F.G., and H. Brody, 2007. “Clinical equipoise and the incoherence of research ethics,” Journal of Medicine and Philosophy , 32: 151–65.
  • Miller, F.G., and A. Wertheimer, 2007. “Facing up to paternalism in research ethics,” Hastings Center Report , 37: 24–34.
  • Miller, P.B., and C. Weijer, 2006. “Trust based obligations of the state and physician-researchers to patient-subjects,” Journal of Medical Ethics , 32: 542–47.
  • National Bioethics Advisory Commission (NBAC), 2001. Ethical and Policy Issues in Research Involving Human Participants . Washington, DC: NBAC.
  • Nicholson, R.H., 1986. Medical Research with Children: Ethics, Law, Practice . Oxford: Oxford University Press. Pages 87–100.
  • Nuremberg Code, 1947, in Trials of war criminals before the Nuremberg Military Tribunals under Control Council Law No. 10 , Vol. 2, Washington, D.C.: U.S. Government Printing Office, 1949, pp. 181–182. Reprinted in Journal of the American Medical Association , 276: 1961.
  • Psaty, B.M., and R.A. Kronmal, 2008. “Reporting mortality findings in trials of rofecoxib for Alzheimer disease or cognitive impairment: A case study based on documents from rofecoxib litigation,” Journal of the American Medical Association , 299: 1813–7.
  • Ramsey, P., 1978. “Ethical dimensions of experimental research on children”, in Research on Children: Medical Imperatives, Ethical Quandaries, and Legal Constraints , J. van Eys (ed.), Baltimore: University Park Press, p. 61.
  • Rawls, J., 1999. A Theory of Justice . Cambridge, Mass: Belknap Press of Harvard University Press.
  • Resnik, D.B., 2005. “Eliminating the daily life risks standard from the definition of minimal risk,” Journal of Medical Ethics , 31: 35–8.
  • Rid, A., and D. Wendler, 2011. “A framework for risk-benefit evaluations in biomedical research,” Kennedy Institute of Ethics Journal , 21(2): 141–179.
  • Roberts, R., with W. Rodriquez, D. Murphy, and T. Crescenzi, 2003. “Pediatric drug labeling: Improving the safety and efficacy of pediatric therapies,” Journal of the American Medical Association , 290: 905–11.
  • Ross, L.F., and R.M. Nelson, 2006. “Pediatric research and the federal minimal risk standard,” Journal of the American Medical Association , 295: 759.
  • Rothman, D.J., 2000. “The shame of medical research”, The New York Review of Books , 47 (19): 60–64.
  • Sachs, Ben, 2010. “The exceptional ethics of the investigator-subject relationship,” Journal of Medicine and Philosophy , 35: 64–80.
  • Shuster, E., 1997. “Fifty years later: The significance of the Nuremberg Code,” The New England Journal of Medicine , 337: 1436–40.
  • Slovic, P., 1987. “Perception of risk,” Science , 236: 280–85.
  • Snowdon, C., with J. Garcia, and D. Elbourne, 1997. “Making sense of randomization: Responses of parents of critically ill babies to random allocation of treatment in a clinical trial,” Social Science and Medicine , 45: 1337–55.
  • Spilker, B., 1991. Guide to clinical trials , Philadelphia: Lippincott, Williams and Wilkins.
  • Stark, C., 2000. “Hypothetical consent and justification,” Journal of Philosophy , 97: 313–34.
  • Stewart, Paul M., with Anna Stears, Jeremy W. Tomlinson, and Morris J. Brown, 2008. “Regulation—the real threat to clinical research,” British Medical Journal , 337: 1085–1087.
  • Sullivan, Richard, 2008. “The good, the bad and the ugly: Effect of regulation on cancer research,” Lancet Oncology , 9: 2–3.
  • Sutton, G., 2003. “Putrid gums and ‘dead men’s cloaths’: James Lind aboard the Salisbury,” Journal of the Royal Society of Medicine , 96: 605–8.
  • Tversky, A., and D. Kahneman, 1974. “Judgments under uncertainty: Heuristics and biases,” Science , 185: 1124–31.
  • –––, 1981, “The framing of decisions and the rationality of choice,” Science , 211: 453–8.
  • Vollmann J., and R. Winau, 1996. “Informed consent in human experimentation before the Nuremberg code,” British Medical Journal , 313: 1445–7.
  • Weinstein, N., 1989. “Optimistic biases about personal risks,” Science , 246: 1232–3.
  • Wenar, L., 2008. “John Rawls”, The Stanford Encyclopedia of Philosophy (Summer 2008 Edition) , Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/sum2008/entries/rawls/ >.
  • Wendler, D., 2010, The Ethics of Pediatric Research , Oxford: Oxford University Press.
  • Wertheimer, A., 2008. “Exploitation”, The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta (ed.), URL = < https://plato.stanford.edu/archives/fall2008/entries/exploitation/ >.
  • –––, 2010, Rethinking the Ethics of Clinical Research: Widening the Lens , Oxford: Oxford University Press.
  • Wilson, James, and David Hunter, 2010. “Research exceptionalism,” American Journal of Bioethics , 10: 45–54.
  • World Medical Association, 1996. Declaration of Helsinki , British Medical Journal , 313 (7070): 1448–1449.

cloning | contract law, philosophy of | decision-making capacity | exploitation | health | informed consent | original position | paternalism | Rawls, John | risk

Copyright © 2021 by David Wendler < dwendler @ nih . gov >


The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab , Department of Philosophy, Stanford University

Library of Congress Catalog Data: ISSN 1095-5054

Journal of Medical Ethics

The Journal of Medical Ethics is a leading journal covering the whole field of medical ethics, promoting ethical reflection and conduct in scientific research and medical practice.

Impact Factor: 4.2 | CiteScore: 6.2

Journal of Medical Ethics is an official journal of the Institute of Medical Ethics and a Plan S-compliant Transformative Journal.

JME features articles on ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers, and patients. It also publishes an associated forum.

The Editor-in-Chief is Professor John McMillan (University of Otago), who leads an international editorial team.

The journal welcomes contributions to its Feature Article discussions.


The Journal of Medical Ethics accepts submissions of a wide range of article types, including original research, reviews and feature articles.

The Author Information section provides specific article requirements to help you turn your research into an article suitable for JME.

Information is also provided on editorial policies and open access.




Medical Research Ethics: Challenges in the 21st Century

  • © 2023
  • Editors: Tomas Zima (Charles University, Prague, Czech Republic) and David N. Weisstub (International Academy of Law and Mental Health, Montréal, Canada)

  • Provides a current review of medical research ethics on a global basis
  • Aims to promote a discussion about controversial and foundational aspects in the field
  • Concentrates on key areas of medical research where there are core ethical issues

Part of the book series: Philosophy and Medicine (PHME, volume 132)


Table of contents (26 chapters)

Front Matter

Philosophical Foundations

Embryo Research Ethics

  • Robert George, Christopher Tollefsen

The Ethics of Medical Research

  • David Novak

Genopolitics: Biotechnology Norms and the Liberal International Order

  • Jonathan Moreno

Vulnerability

Persons and Groups: Protection of Research Participants with Vulnerabilities as a Process

  • Paweł Łuków

Centring the Human Subject: Catalyzing Change in Ethics and Dementia Research

  • Gloria Puurveen, Jim Mann, Susan Cox

Unproven Stem Cell-Based Interventions: Addressing Patients’ Unmet Needs or Causing Patient Harms?

  • Kirstin R. W. Matthews

Genetic Privacy in the Age of Consumer and Forensic DNA Applications

  • Sheldon Krimsky

Neuroscience

Ethical Issues in Neuroscience Research

  • Walter Glannon

Should Prisoners’ Participation in Neuroscientific Research Always Be Disregarded When Making Decisions About Early Release?

  • Elizabeth Shaw

Applying Neuroscience Research: The Bioethical Problems of Predicting and Explaining Behavior

  • David Freedman

Direct Benefit, Equipoise, and Research on the Non-consenting

  • Stephen Napier

The Ethics of Surgical Research and Innovation

  • Wendy A. Rogers, Katrina Hutchison

Palliative Care

Opening Death’s Door: Psilocybin and Existential Suffering in Palliative Care

  • Duff R. Waring

About this book

This book provides a current review of medical research ethics on a global basis. It contains chapters that are historically and philosophically reflective and that aim to promote discussion of controversial and foundational aspects of the field. An elaborate group of chapters concentrates on key areas of medical research where core ethical issues arise in both theory and practice: genetics, neuroscience, surgery, palliative care, diagnostics, risk and prediction, security, pandemic threats, finances, technology, and public policy. The book is suitable for use from the most basic introductory courses to the highest levels of expertise in multidisciplinary contexts. The insights and research of this group of top scholars in bioethics are an indispensable read for medical students in bioethics seminars and courses, for philosophy of bioethics classes in departments of philosophy, nursing faculties, and law schools where bioethics is linked to medical law, and for experts in comparative law, public health, and international human rights; the book is equally useful for policy planning in pharmaceutical companies.

Editors and Affiliations

Tomas Zima, Charles University, Prague, Czech Republic

David N. Weisstub, International Academy of Law and Mental Health, Montréal, Canada

About the editors

Professor Tomas Zima, Head of the Institute of Medical Biochemistry and Laboratory Medicine at the First Faculty of Medicine, Charles University in Prague, served as Dean of the First Faculty of Medicine and as Rector Magnificus of Charles University. He is the author of 470 articles, which have been cited over 4,500 times in the Science Citation Index, as well as nine books and more than 75 additional chapters. He has lectured at 160 universities worldwide and has served in many leadership roles, including as a member of the European Commission’s Scientific Panel for Health (SPH). He chairs the Advisory Board of the International Academy of Medical Ethics and Public Health.

Bibliographic Information

Book Title : Medical Research Ethics: Challenges in the 21st Century

Editors : Tomas Zima, David N. Weisstub

Series Title : Philosophy and Medicine

DOI : https://doi.org/10.1007/978-3-031-12692-5

Publisher : Springer Cham

eBook Packages : Religion and Philosophy , Philosophy and Religion (R0)

Copyright Information : The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

Hardcover ISBN : 978-3-031-12691-8 Published: 03 January 2023

Softcover ISBN : 978-3-031-12694-9 Published: 04 January 2024

eBook ISBN : 978-3-031-12692-5 Published: 01 January 2023

Series ISSN : 0376-7418

Series E-ISSN : 2215-0080

Edition Number : 1

Number of Pages : XVI, 499

Number of Illustrations : 3 b/w illustrations, 3 illustrations in colour

Topics : Bioethics


Stony Brook University


Institutional Ethics Committee Members:

Maria Basile, MD, MBA Phyllis Migdal, MD, MA Stephen G. Post, PhD

The antiquity of the Hippocratic Oath—which enjoins the practitioners of medicine to “never do harm” and to act “for the good of the patient”—makes clear that the professional practice of medicine has always been an ethically motivated endeavor. The great strides in science and technology since this oath was first sworn, along with the ever-changing legal environment in which medicine is practiced, have only increased the ethical complexity of the medical situations that our patients (and their families), physicians, and other health care staff face daily. In response to the increasing complexity of these situations (for example, surrounding the end of life), as well as to clear abuses of medical authority in the past (such as the experiments performed by Dr. Mengele on behalf of the Nazi regime), bioethicists, philosophers, theologians, and others have developed a number of ethical principles (such as respect for patient autonomy, nonmaleficence, beneficence, and justice), while lawyers, advocates, and legislatures (at both the state and national levels) have developed a number of laws to ensure appropriate adherence to these principles. The ethical and legal complexity of even the most routine medical care can be staggering for a patient and his family confronting these situations for the first time, as well as for the health care provider who wants nothing more than to care for her patient. The Institutional Ethics Committee (IEC) at Stony Brook University Hospital aims to help both health care providers and their patients (and family and friends) work through these complexities and to ensure that legal and ethical conflicts are addressed proficiently and to everyone’s satisfaction.

The Institutional Ethics Committee has a tripartite mission in serving Stony Brook University Hospital and the community through Education, Policy advisement and Consultative Service. The IEC provides prospective study and education as well as timely ethics consultation to ensure that the Stony Brook University Hospital is able to meet all of the ethical and legal questions that its staff faces each day with compassion and beneficence in order to ensure the best experience possible for all of our patients.

The IEC is on call 24 hours a day, 7 days a week for consultation. A consultation may be requested by anyone involved in the case: the patient, a family member or friend, or any health care provider or staff member, including physicians and nurses. Consultations are handled by a multidisciplinary team that includes physicians, nurses, legal advisors, social workers, clergy and religious advisors, ethicists, and others. The consultation service both advises those involved in the case and helps build consensus among the affected parties, so that all feel satisfied and comfortable with the outcome. The IEC keeps careful records of consultations and offers follow-up on a regular basis.

The IEC also maintains an Ethics Review Committee that regularly reviews the current laws, attends to changes as they are made, and revisits hospital policies and forms to ensure they are up to date and comply with all new laws. Educational opportunities are then provided to all hospital staff including orientations, ethics sessions, conferences, and Ethics Rounds.

May 13, 2024

Ethics—Both Human and Animal—Demand We Move Away From Animal Use in Biomedical Research Urgently and Focus Entirely on Human Biology Instead


New paper by Jarrod Bailey, PhD, Director of Medical Research at the Physicians Committee, and co-author Professor Michael Balls, summarizes the case for a paradigm shift in medical research that will help animals and humans.

In their invited review in Anesthesiology Clinics, the authors discuss ethical issues and considerations concerning the use of animals in science, and how these can be used by some to justify animal experimentation. They also make the case that human ethics must take a much more prominent role. If biomedical science is not sufficiently human-relevant, it is letting down the billions of people who rely on science to increase understanding of human diseases and to provide new drugs to ease human suffering, while perpetuating harm to the tens of millions of animals in laboratories every year.

One reason for the continuation of large-scale animal research and testing is that there has been far too little reflection on, and critical questioning of, animal models of human disease and of drug efficacy and safety. Critical evaluation of what is being done, and of the models used to generate data and test hypotheses, is often absent; this amounts to a widespread failure to follow the scientific method in the face of unprecedented and substantial evidence against animal models.

The paper makes the case that animal ‘models’ of human biology and diseases are poor, due to intractable species differences that cannot be overcome or sufficiently taken into account. These differences are amplified by considerable variability within species: Individual animals and humans often differ significantly from one another, making the relevance of experimental data even to other members of the same species problematic. Such differences between individual human beings must be factored into research, and this can only be achieved through human-specific research approaches. This includes so-called New Approach Methodologies (NAMs)—advanced methods of growing and maintaining human cells, tissues, and miniature organs, often derived from patient skin and blood samples, which reflect the variable biology of specific, different patients. Huge strides in our understanding of human diseases, and in what we can do to develop new, safe, and effective therapies for them, are being made in this way, including for diseases which have been poorly served by the use of animals.

The article concludes that a rapid phase-out of animal use in science is essential and must be replaced by a focus on human biology at all stages—for the sake of animals and humans.

Dr Bailey’s new paper can be found here, free, until June 22:

https://authors.elsevier.com/a/1j19V6tLd7CNeS



Ethical Lapses in the Medical Profession

More From Our Inbox: Don’t Cave, Columbia; A Florida Book Oasis; Balloon Release Ban


To the Editor:

Re “Moral Dilemmas in Medical Care” (Opinion guest essay, May 8):

It is unsettling, and dismaying, to read Dr. Carl Elliott’s account of moral lapses continuing to exist, if not thrive, in medical education. As a neurology resident in the early 1970s, I was assigned a patient who was scheduled to have psychosurgery.

He was a prisoner who had murdered a nurse in a hospital basement, and the surgery to remove part of his brain was considered by the department to be a therapeutic and even forward-looking procedure. This was despite its being widely discredited, and involving a prisoner who could not provide truly informed consent.

A fellow resident and I knew that refusing would almost certainly result in suspension or dismissal from the residency, so we anonymously contacted our local newspapers, whose reporting resulted in an overflow protest meeting, cancellation of the psychosurgery and legislative action placing conditions on the acceptance of informed consent by prisoners.

It is lamentable that even though bioethics programs are widely incorporated into medical education, moral and ethical transgressions remain a stubborn problem as part of medical structures’ groupthink.

As Richard Feynman emphasized, doubt, uncertainty and continued questioning are the hallmarks of scientific endeavor. They need to be an integral element of medical education to better prepare young doctors for the inevitable moral challenges that lie ahead.

Robert Hausner Mill Valley, Calif.

I would like to thank Carl Elliott for exposing the “Moral Dilemmas in Medical Care.” There is a medical school culture that favors doctors as privileged persons over patients.

I can remember multiple patient interactions in medical school in which I thanked a patient for allowing me to examine them and apologized for hurting them during my exam of their painful conditions.

I was then criticized by attending physicians for apologizing to the patients. I was told, on multiple occasions, that the patient should be thanking me for the privilege of assisting in my education.

Medical training, in a medical school culture that favors the privilege of the medical staff over the rights and feelings of patients, needs to be exposed and changed.

Doug Pasto-Crosby Nashville The writer is a retired emergency room physician.

As a psychiatrist and medical ethicist, I commend Dr. Carl Elliott for calling attention to several egregious violations of medical ethics, including failure to obtain the patient’s informed consent. Dr. Elliott could have included a discussion of physician-assisted suicide and the slippery slope of eligibility for this procedure, as my colleagues and I recently discussed in Psychiatric Times.

For example, as reported in The Journal of Eating Disorders, three patients with the eating disorder anorexia nervosa were prescribed lethal medication under Colorado’s End-of-Life Options Act. Because of the near-delusional cognitive distortions present in severe anorexia nervosa, it is extremely doubtful that afflicted patients can give truly informed consent to physician-assisted suicide. Worse still, under Colorado law, such patients are not required to avail themselves of accepted treatments for anorexia nervosa before prescription of the lethal drugs.

Tragically, what Dr. Elliott calls “the culture of medicine” has become increasingly desensitized to physician-assisted suicide, nowadays touted as just another form of medical care. In the anorexia cases cited, informed consent may have been one casualty of this cultural shift.

Ronald W. Pies Lexington, Mass. The writer is on the faculty of SUNY Upstate Medical University and Tufts University School of Medicine, but the views expressed are his own.

Carl Elliott’s article on medical ethics was excellent. But it is not just in the medical profession that there exists the “subtle danger” that assimilation into an organization will teach you to no longer recognize what is horrible.

Businesses too have a culture that can “transform your sensibility.” In many industries executives check their consciences at the office door each morning. For example, they promote cigarettes; they forget they too breathe the air as they lobby against clean-air policies; they forget they too have children or grandchildren as they fight climate-friendly policies or resist gun-control measures. The list could go on.

In every organization, we need individuals to say no to policies and actions that may benefit the organization but are harmful, even destructive, to broader society.

Colin Day Ann Arbor, Mich.

Re “Columbia’s Protests Also Bring Pressure From a Private Donor” (front page, May 11):

Universities are meant to be institutions of higher learning, research and service to the community. They are not items on an auction block to be sold to the highest bidder.

Universities that sell off their policy platform to spoiled one-issue donors who threaten to throw a tantrum no longer deserve our respect. Grant-making foundations should not be grandstanding online. Give money, or don’t, but don’t call a news conference about it.

If Columbia caves, why should prospective students trust it as a place where they can go to become freethinkers and explore their own political conscience as they begin to contemplate the wider world and issues of social justice?

This is a real test of Columbia and its leadership. I do not envy its president, Nemat Shafik, who has few good choices and no way to make everyone happy. What she should not sell is her integrity, or the university’s. She should stand up to these selfish donors. Learn to say, “Thanks, but no thanks.”

Carl Henn Marathon, Texas

Re “Book Bans? So Open a Bookstore” (Arts, May 13):

Deep respect for the American novelist Lauren Groff and her husband, Clay Kallman, for opening the Lynx, their new bookstore in Gainesville, Fla. The store focuses on offering titles among the more than 5,100 books that were banned in Florida schools from July 2021 through December 2023.

To all the book clubbers and haters of bans: Order straight from the Lynx.

Fight evil. Read books.

Ted Gallagher New York

Re “Keep a Firm Grip on Those Mickey Mouse Balloons. It’s the Law” (front page, May 9):

Balloons are some of the deadliest ocean trash for wildlife, as mentioned in your article about Florida’s expected balloon release ban.

Plastic balloon debris poses a significant threat to marine life, often mistaken for food or becoming entangled in marine habitats, leading to devastating consequences for our fragile ocean ecosystems.

As the founder of Clean Miami Beach, an environmental conservation organization, I’m concerned about the impact of plastic pollution on Florida’s wildlife and coastal areas. Florida’s stunning beaches and diverse marine life are not only treasures to us locals but also draw millions of tourists each year.

Because of the dangers, intentional balloon releases have been banned in many cities and counties across the state. A poll released by Oceana showed that 87 percent of Florida voters support local, state and national policies that reduce single-use plastic. Gov. Ron DeSantis must waste no time in signing this important piece of legislation into law.

Our elected officials should continue to work together to address environmental issues so Floridians and tourists can enjoy our beautiful state without its being marred by plastic pollution.

Sophie Ringel Miami Beach


After Her Death, Scientists Made This Woman Immortal. But She Never Agreed to That.

Her genetic material rewrote the future of medicine—all without her consent.


In March 2024, Tuskegee University unveiled a monument commemorating Henrietta Lacks and her contributions to the fight against polio. Just days later, submissions closed for the Henrietta Lacks Hometown Initiative’s contest to design a memorial in her native Halifax County, Virginia.

It’s not surprising that Lacks, whose “immortal” HeLa cells were pivotal in developing treatments for diseases such as polio, HIV/AIDS, and COVID-19, continues to be honored more than 70 years after her death. But Lacks’ legacy is complicated by the ethical concerns surrounding the use of her special cells.

Lacks, who died of cancer at age 31 in 1951, was never aware that her cells led to significant medical advancements—or that they were taken without her consent. And even now, her strange case raises questions about the morally dubious methods through which we achieved unquestionably positive breakthroughs in medicine.

The Immortal Life of Henrietta Lacks

Though her cells live on in labs across the world, Henrietta Lacks’ life was short and full of challenges, and she remained largely unrecognized during her lifetime. It wasn’t until author Rebecca Skloot investigated the origins of the famous HeLa cells that Lacks’ story gained widespread attention with the bestselling 2010 book, The Immortal Life of Henrietta Lacks.


Thanks to Skloot and the descendants of Lacks, who have worked to retell her story through initiatives like HeLa100, we now know more about the life of the woman often referred to as “the mother of modern medicine.”

On August 1, 1920, Lacks was born in Roanoke, Virginia. According to Biography, following her mother’s death in 1924, Henrietta moved to her grandfather’s log cabin, a former slave quarters on a plantation. There, she lived with her cousin, David “Day” Lacks. At 14, she gave birth to their first child, Lawrence, and the couple married in 1941. Before moving to Maryland, they had a second child, Elsie, in 1939, and later expanded their family with three more children: David Jr., Deborah, and Joe.

Henrietta had been experiencing “abnormal pain and bleeding in her abdomen” when she visited The Johns Hopkins Hospital in Baltimore on February 1, 1951. Lacks had to travel to Hopkins for treatment because, as Hopkins Medicine’s own website notes, at the time the hospital “was one of only a few hospitals to treat poor African-Americans.”

Lacks was quickly diagnosed with cervical cancer by physician Howard Jones. She received radium treatments there, and without her consent, doctors extracted two cervical samples. Despite treatment, her condition worsened, leading to her readmission to the hospital on August 8. Lacks passed away on October 4, 1951, and was laid to rest in an unmarked grave in Clover, Virginia, where she spent her early years.

But unbeknownst to Lacks, or her family, the cervical samples taken without her consent revealed something extraordinary.

Lacks’ tumor cells were sent to the lab of Dr. George Otto Gey, a prominent cancer researcher. Gey collected cells “from all patients—regardless of their race or socioeconomic status—who came to The Johns Hopkins Hospital with cervical cancer,” according to the hospital’s website. But unlike previous samples that died within a day or two, Lacks’ cells amazingly doubled in number every 20 to 24 hours.

That discovery led to a medical revolution. As Biography summarizes:

“Gey isolated and multiplied a specific cell, creating a cell line. He dubbed the resulting sample HeLa, derived from the name Henrietta Lacks. The HeLa strain revolutionized medical research. Jonas Salk used the HeLa strain to develop the polio vaccine, sparking mass interest in the cells. As demand grew, scientists cloned the cells in 1955. Since that time, over 10,000 patents involving HeLa cells have been registered. Researchers have used the cells to study disease and to test human sensitivity to new products and substances. More recently, the cells enabled the development of COVID-19 vaccines.”

It took nearly two decades for the Lacks family to learn about their relation to the immortal HeLa cells. During that time, the cells’ origin was mistakenly attributed in the media to fictitious names such as “Helen Lane” or “Helen Larson.”


In February 2010, Johns Hopkins addressed ethical concerns about the acquisition of the initial HeLa cells in a statement:

“It’s important to note that at the time the cells were taken from Mrs. Lacks’ tissue, the practice of obtaining informed consent from cell or tissue donors was essentially unknown among academic medical centers. Sixty years ago, there was no established practice of seeking permission to take tissue for scientific research purposes.”

But that’s not quite the full story. Skloot’s research in The Immortal Life of Henrietta Lacks revealed that additional cell samples were taken from Lacks beyond the initial two.

Lacks’ cells were special, but by the time Gey hoped to collect more samples, she had died. “Though no law or code of ethics required doctors to ask permission before taking tissue from a living patient,” Skloot wrote, “the law made it very clear that performing an autopsy or removing tissue from the dead without permission was illegal.”

After Lacks’ death, doctors sought consent from her husband, Day, to perform an autopsy, but he initially refused. When they approached Day a second time, suggesting that the autopsy could yield test results that might benefit their children in the future, Day relented and gave his permission. Unfortunately, the promised test results were never provided to the Lacks family.

Johns Hopkins maintains that the obfuscation wasn’t for financial gain. The university says, “Johns Hopkins has never sold or profited from the discovery or distribution of HeLa cells and does not own the rights to the HeLa cell line. Rather, Johns Hopkins offered HeLa cells freely and widely for scientific research.”

But other companies have financially benefited from the HeLa cells. Notably, Thermo Fisher Scientific reached a settlement with the Lacks family in 2023 over products related to the HeLa cell line, as reported by NPR. This settlement acknowledges the commercial use and financial gains derived from Lacks’ cells.

Dr. Gey’s actions, which were typical of an era when the potential benefits to many patients took precedence over the rights of individual medical research subjects, could have been driven by the belief that prioritizing consent might impede scientific progress. It wasn’t until 15 years after Lacks’ cells were harvested that a landmark research paper changed that belief.

The Evolution of Medical Ethics in America

Ethical standards in American medical science have dramatically evolved over the past century, notably in the attitude toward eugenics, which is the science of promoting desirable qualities in the human race, usually through some kind of controlled breeding.


“When most people think of eugenics, they think of the unspeakable acts of Adolf Hitler and Dr. Josef Mengele,” Dr. Marilyn M. Singleton wrote in an article for the Journal of American Physicians and Surgeons. “But history tells us that some of America’s best and brightest promoted eugenics as settled science and necessary for the preservation of society.” (For example, Harvard professor W.E.B. DuBois supported selective breeding within the Black community, birth control pioneer Margaret Sanger advocated for “negative eugenics,” and even the NAACP ran “Better Baby” contests to fund its anti-lynching efforts.)

After World War II, the revelation of Nazi medical crimes during the Doctors’ Trial at Nuremberg, including the experiments by Mengele, radically changed American views on eugenics and led to the Nuremberg Code, which prioritized voluntary consent in medical research. But the U.S. did not legally adopt the Nuremberg Code, nor did it immediately recognize the similarities between Nazi human experiments and its own practices, such as the collection of Lacks’ cells without her consent.

[Image: Defendants at the Doctors’ Trial]

In 1966, two decades after the start of the Doctors’ Trial and 15 years after the death of Henrietta Lacks, Dr. Henry K. Beecher published a paper in The New England Journal of Medicine titled “Ethics and Clinical Research.” Within medical circles today, it’s known colloquially as “Beecher’s Bombshell.”

The paper criticized the use of unwitting patients in experiments that offered them no benefit and often harmed them, detailing 22 shocking cases, including soldiers given placebos for rheumatic fever based solely on their serial numbers and patients injected with live cancer cells without their informed consent.

The experiments chronicled in Beecher’s report rocked the medical world, since these were conducted not by German men on trial, but by respected institutions, prestigious scientists, and the U.S. military. Years later, the Office of Human Research Protections stated that this paper contributed to “the impetus for the first [National Institutes of Health] and [Food and Drug Administration] regulations.”

Beecher’s paper strongly advocated for informed consent, challenging the prevailing notion among some circles that ethical scrutiny could hinder scientific progress.

How We Remember Henrietta Lacks

The story of Henrietta Lacks is often framed by the medical breakthroughs her immortal cells made possible, focusing on the collective benefit rather than on the scientists’ ethical missteps or her family’s lack of knowledge and compensation. But decades before Lacks’ story became widely known, Beecher had already argued against this idea:

“An experiment is ethical or not at its inception,” Beecher concluded; “it does not become ethical post hoc—ends do not justify means. There is no ethical distinction between ends and means.”

[Image: A framed photo of Henrietta Lacks in the living room of her grandson, Ron Lacks, in Baltimore, MD, on March 22, 2017]

It’s impossible to say whether Henrietta Lacks would have consented to the use of her cells, especially with the knowledge of the remarkable medical achievements they’d unlock—achievements even the doctors who collected the cells couldn’t have predicted.

We know now that those HeLa cells changed medicine forever, and though some tried to obscure their origins, Henrietta Lacks is immortalized in medical history. But we also know now that, regardless of the laws at the time, it should have been Lacks’ choice whether she was immortalized at all. If only someone had thought to ask.


Michael Natale is the news editor for Best Products, covering a wide range of topics like gifting, lifestyle, pop culture, and more. He has covered pop culture and commerce professionally for over a decade. His past journalistic writing can be found on sites such as Yahoo! and Comic Book Resources, his podcast appearances can be found wherever you get your podcasts, and his fiction can’t be found anywhere, because it’s not particularly good.



30 May 2024. Women and Surgical Ethics (LAB)


CHMH Lab: Women and Surgical Ethics

30th of May, 2-4 pm

Grimond Seminar Room 3

Teams link: Join the meeting now

Meeting ID: 379 646 856 861

Passcode: 6yVTGf

How have women made a difference to surgical ethics, as patients, and as surgeons? Can we judge the history of surgery ethically? If so, whose ethics?

What further concerns ought there to be for current ethicists regarding surgery?

Join Dr Wendy Suffield (Henry Lumley Collections Engagement Grant researcher at the Royal College of Surgeons, 2023–24) and Dr Claire L. Jones (Senior Lecturer in the History of Medicine, School of Classics, English and History at Kent) for an interactive workshop on women and surgical ethics, past and present.

Film, objects and texts will provoke group discussion on the following themes:

  • What is surgical ethics?
  • Surgeons’ power and the patient experience
  • Women surgeons: hard-edged or hard done to?

All welcome!


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.


Ethical Issues in Academic Medicine

Martin R. Huecker; Jacob Shreffler.


Last Update: June 5, 2023.

  • Continuing Education Activity

The rapidly changing field of medicine, led by academic medicine institutions, will continue to present ethical challenges related to patient care, training, and research. Professionals in academic medicine must continuously strive to adhere to ethical standards in clinical medicine, medical education, and research. This activity will provide a brief overview of the ethical considerations and responsibilities in academic medicine.

  • Explain the importance of adhering to clinical, educational, and research ethics.
  • Explain the criteria institutional review boards consider when reviewing research protocols.
  • Describe the four phases of trial designs and the connection between trial design and research ethics.
  • Identify the types of bias that may impact research.
  • Introduction

The field of medicine continues to evolve rapidly due to the changing healthcare industry, progressive educational models, and the high output and dissemination of research. All healthcare professionals should follow current trends and advances in their own subfields and specialties. Healthcare practices that meet the standard of care today could be obsolete in ten years. Academic physicians must abide by the core ethical principles (autonomy, justice, beneficence, and nonmaleficence) in their clinical, educational, and research roles. Ethical responsibilities in these domains overlap significantly: conflicts of interest can affect both one’s research and one’s choice of treatments for patients, and lapses in faculty self-care can jeopardize the education of trainees and research standards. This StatPearls article will provide a brief overview of the ethical considerations and responsibilities in academic medicine.

  • Issues of Concern

Clinical Ethics

Ethics likely originated independently in multiple societies, though Aristotle may have been the first to develop a system of rational ethics. Medical ethics was first described by Thomas Percival in 1803, and the first code of medical ethics followed in 1847. Today we recite the Hippocratic Oath or the Declaration of Geneva. Some add solidarity, confidentiality, and acceptance of ambiguity to the four core principles of autonomy, beneficence, non-maleficence, and justice.

Conflicts can arise between these values; for instance, patient autonomy can inhibit a physician from exercising beneficence. More recent debates within medical ethics focus on healthcare disparities and social determinants of health. Physicians should seek justice for all patients, providing individualized, culturally competent care. Patients coming from different backgrounds may not observe all aspects of Western Medicine (e.g., they may practice Chinese, Buddhist, Islamic, or Ayurvedic medicine). Other new concepts of investigation include medical technology, reproductive health, mental health, organ donation, suicide and assisted death, futility, and surrogate decision-making.

Professionals in the field of academic medicine must adhere to all of the standard medical ethics principles. Please see the StatPearls Medical Ethics chapter for further expansion on these concepts.

Educational Ethics

Educators prioritize ethics just as we do in medicine. In its Code of Ethics for Educators, the National Education Association highlights “the worth and dignity of each human being, the supreme importance of the pursuit of truth, devotion to excellence, and nurture of the democratic principles,” adding that “essential to these goals is the protection of freedom to learn and teach and guarantee an equal educational opportunity for all.” The NEA code comprises two commitments: I, a commitment to the student, and II, a commitment to the profession. Professionals in academic medicine have the dual roles of educating learners and caring for patients (while also conducting research).

Ethical standards in medical education begin before students even reach professional school. Faculty have a duty to the community to facilitate access to a proper early education that makes a path to medicine possible. Admissions committees should weigh applicants’ backgrounds in a just and fair manner. Well-rounded students must have more than prior academic success – they should have compassion and a humble willingness to care for patients without judgment. Harvey Cushing stated that “A physician is obligated to consider more than a diseased organ, more even than the whole man—he must view the man in his world.” [1] As the business of medicine continues to encroach on the art and duty of the physician, we must affirm that the “acknowledgment of nonfinancial values is fundamental to achieve quality in health care and education.”

Medical education continues to add content on the concepts of cultural competency and healthcare disparities. Race, gender, socioeconomic class, and other differences should not limit access to medical education or optimal health. Social determinants of health are now introduced early during medical education, and instruction in cultural competency extends beyond training years. [2]

Medical education now provides more instruction on the maintenance of health and wellness for the providers. This includes financial and mental health in addition to nutritive and physical pillars. Healthcare professionals who are not healthy cannot ensure the health of patients. Students also receive instruction on responsible use of social media, along with more traditional professionalism standards. Students receive instruction on how to report incompetent or unethical behaviors by colleagues or supervisors. The so-called “hidden curriculum,” which can mean different things to different people, often refers to negative customs or rituals that often occur in direct conflict with the formal curriculum and even in “tension with ideals of the medical profession.” [3] [4]

Students must be prepared for instances in which their own personal values diverge from professional values, such as granting full autonomy to patients. With changes in technology, especially those brought on by pandemics, educators must reconsider the structure of medical education. More and more instruction may transition to virtual learning, although drawbacks to this model certainly exist (lack of face-to-face interaction, a difficult transition to clinical care, and a potential deficit in group education on ethics and professionalism). [5] One final issue relates to the care of patients directly by trainees, sometimes without knowledge or true consent, for which a framework may be adopted from research ethics. [6]

Research Ethics

Professionals who gravitate to a career or niche in research must practice meticulous methods and maintain high ethical standards. Due to the increasing complexity of scientific and medical research, some academic medical centers offer formal research ethics consultation services to assist clinical investigators with various tasks: assisting with research design and implementation, providing a forum for the deliberative exploration of ethical issues, and supplementing regulatory oversight. [7]  As research protocols and data become more complex, ethical standards and operational oversight will only increase in importance. Although one may encounter many pitfalls in scientific and medical research (see StatPearls EBM Research Pitfalls), concerns related to ethical violations are easily avoided with background knowledge, planning, and good intentions. See the  StatPearls Research Ethics  chapter for a more in-depth analysis.

Institutional Review Boards

Successful research endeavors begin with planning and clearance from the proper oversight committees. Generally, an institutional review board (IRB) decides whether a project constitutes human subject research and then reviews it and makes recommendations before authorizing the team to proceed. The 1979 Belmont Report declared three principles of equal importance that must be adhered to in human subject research: respect for persons, beneficence, and justice. [8] IRBs have evolved significantly over subsequent decades, requiring more personnel and more precise descriptions of research procedures.

IRB approval generally requires meeting seven criteria: [8]

  1. Informed consent is sought when appropriate.
  2. Informed consent is documented.
  3. Measures are in place to protect the privacy and confidentiality of subjects.
  4. Risks are minimized.
  5. Risks are reasonable relative to benefits.
  6. Data are monitored to ensure safety.
  7. Selection of subjects is equitable.

The IRB protocol outlines the background, objectives, procedure, subject selection, data collection forms, statistical methods, and expected outcomes. Study personnel must have the requisite training and documentation to participate in research. The standards for clinical trials will generally be much higher than a prospective cohort study, and chart reviews are often declared “not human subjects research.”         

Informed Consent

If the IRB decides you are performing human subjects research, you must ensure that subjects give properly informed consent. There are some exceptions, such as incapacitated patients in emergency research (Langlois), but in general, informed consent is crucial for ethical research. Truly informed consent requires disclosure, understanding, capacity, and voluntariness. Patients must have the capacity to understand the diagnosis, the risks and benefits, and the treatment options, including no treatment (or intervention) at all. Research subjects must have the freedom to revoke their consent at any time after enrollment. Beyond emergency research, other exceptions to informed consent include legally incompetent patients, patients who waive their right to informed consent, and matters of therapeutic privilege. Research involving protected populations, such as studies of minors, pregnant patients, and prisoners, must also be cleared with the IRB.

Trial Design

Clinical trials typically compare a new treatment to either a placebo or a treatment already established as the standard of care. The placebo must be appropriate and can only be used if the subjects will experience no harm by  not  receiving an already established treatment. When possible, clinical trials should involve blinding, randomization, and controlling for other variables.

Because safety is paramount, clinical trials have four different phases.  Phase I  occurs in a low number of healthy volunteers to establish safety.  Phase II  has a larger subject set to determine the efficacy of the treatment (along with dosing and adverse effects).  Phase III  tests the intervention on a larger number of subjects, ideally with randomization, attempting to establish that the treatment under investigation is superior or at least non-inferior to the standard care. Finally,  Phase IV  includes continued surveillance after the treatment has been approved to detect long-term or more rare adverse effects.

Those conducting research must protect all data collected in the course of the project. Patient confidentiality covers privacy but also respects the autonomy of subjects. Subjects should understand the level of privacy afforded to the data collected, with deidentification being a common method of ensuring these standards. All researchers must be certified in the requirements of the Health Insurance Portability and Accountability Act (HIPAA). Database managers should ensure data protection and data integrity through frequent cleaning, detection of missing data, and consistent data entry.
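The deidentification step described above can be sketched as a short script. This is a minimal illustration, not a certified HIPAA procedure: the field names and salting scheme are hypothetical, and a real protocol would enumerate every identifier the regulations list.

```python
import hashlib

# Hypothetical direct identifiers; a real protocol would enumerate all
# fields that could identify a subject (HIPAA's Safe Harbor method lists 18).
DIRECT_IDENTIFIERS = {"name", "mrn", "address", "phone", "dob"}

def deidentify(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the record key with a salted hash,
    so rows can still be linked across tables without exposing identity."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    digest = hashlib.sha256((salt + record["mrn"]).encode()).hexdigest()
    clean["subject_id"] = digest[:12]
    return clean

record = {"mrn": "12345", "name": "Jane Doe", "dob": "1980-01-01", "hba1c": 6.8}
print(deidentify(record, salt="study-secret"))
```

Because the hash is deterministic for a given salt, the same patient maps to the same `subject_id` across tables, while the salt (kept separate from the data) prevents trivial re-identification from the medical record number alone.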

After designing, receiving approval for, and initiating a clinical trial, the amount of work and oversight only increases. Principal investigators (PIs) and research teams must continuously monitor protocols and data collection. Generally, trials perform interim analyses at preset intervals to determine whether the intervention or control group shows a clearly superior outcome. Research personnel must also conduct regular safety monitoring and log all adverse events.
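As a toy illustration of interim analysis at preset intervals, one classic approach is the Haybittle–Peto rule: stop early only if an interim z-statistic is extreme (|z| > 3), so the final analysis can still be tested near the conventional 1.96 threshold. The z-statistics below are made up for illustration, not drawn from any trial.

```python
# A minimal sketch of the Haybittle-Peto stopping rule: stop at an interim
# look only if |z| > 3.0; otherwise continue to the final analysis,
# which is tested at the near-conventional 1.96 threshold.
def monitor(interim_z: list, final_z: float) -> str:
    for look, z in enumerate(interim_z, start=1):
        if abs(z) > 3.0:
            return f"stop early for efficacy at interim look {look}"
    if abs(final_z) > 1.96:
        return "significant at final analysis"
    return "no significant effect demonstrated"

print(monitor([1.2, 2.4], final_z=2.1))  # no look crosses 3.0, so the trial runs to completion
print(monitor([1.2, 3.4], final_z=0.0))  # the second look crosses the boundary
```

More sophisticated schemes (e.g., alpha-spending functions) adjust the boundary per look, but the principle is the same: early stopping decisions are prespecified, not improvised.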

Conflicts of Interest

Conflicts of interest can impact medical research, practice, and even education. A 2020 follow-up report on the 2009 Institute of Medicine findings declared that COIs are still very prevalent and can negatively impact individual patient care and aggregate care by influencing FDA approval and clinical practice guidelines. COI can alter judgment in publishing, drug review, social media, and patient advocacy. [9]

All research study personnel, but especially the PI(s), must disclose and reflect on any potential conflicts of interest. If a pharmaceutical company funds a trial, do the researchers have full freedom to design it and report honest results? If the funding is federal, do any of the researchers have financial ties to private industry that could bias their reporting and interpretation of data? All of these factors must be determined before, during, and after research is conducted.

The number of cognitive biases that can affect biomedical research could fill multiple StatPearls chapters. A few of the most common and crucial biases to eliminate (when possible) appear below. In some studies a bias cannot be completely eliminated, but it can be addressed openly and constructively in the limitations discussion. Failing to eliminate (or discuss) sources of bias, even as an error of omission, could be considered an ethical violation.

Selection bias involves the nonrandom choice of subjects, or the nonrandom assignment of interventions to subjects, which distorts outcomes. Procedure bias describes a non-equitable protocol or procedural treatment for different groups in a trial. Measurement bias occurs when data are collected or coded inconsistently, affecting the integrity of results. Observer-expectancy bias occurs when research personnel believe in the efficacy of a treatment and affect results by treating subjects differently based on that belief. Confounding bias occurs when factors other than the exposure being studied affect outcomes.
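Selection bias can be made concrete with a small simulation. In this hypothetical model the treatment has no effect at all, yet steering healthier patients toward it manufactures an apparent benefit that randomized assignment removes. All numbers are illustrative.

```python
import random

random.seed(0)

def had_event(baseline_risk: float) -> int:
    """Return 1 if the patient experiences the adverse outcome, else 0."""
    return 1 if random.random() < baseline_risk else 0

def event_rate(outcomes: list) -> float:
    return sum(outcomes) / len(outcomes)

# Biased assignment: healthier patients (baseline risk < 0.5) are steered
# to the new treatment, which actually does nothing.
biased_t, biased_c = [], []
for _ in range(10_000):
    risk = random.uniform(0.1, 0.9)
    (biased_t if risk < 0.5 else biased_c).append(had_event(risk))

# Randomized assignment of the same (ineffective) treatment.
rand_t, rand_c = [], []
for _ in range(10_000):
    risk = random.uniform(0.1, 0.9)
    (rand_t if random.random() < 0.5 else rand_c).append(had_event(risk))

print(f"biased:     treated {event_rate(biased_t):.2f} vs control {event_rate(biased_c):.2f}")
print(f"randomized: treated {event_rate(rand_t):.2f} vs control {event_rate(rand_c):.2f}")
```

Under biased assignment the treated group shows far fewer events than the control group purely because of who was enrolled; under randomization the two rates converge, exposing the treatment's true lack of effect.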


Publishing Ethics

Whether a retrospective chart review or a randomized controlled clinical trial, studies that reach the point of data analysis should be summarized in manuscript form for dissemination to the larger scientific community; see StatPearls How to Write a Scientific Manuscript for instruction on writing the paper itself. When seeking to publish your own research, beware of predatory journals that charge fees to publish or even to submit and that follow questionable practices. [10] Most libraries keep databases delineating ‘safe’ vs. predatory journals.

Results should be conveyed concisely in plain language, with a clear delineation of objective results vs. subjective interpretation. The ethics of publishing also involves addressing financial incentives for journals (and authors), adhering to authorship criteria, and avoiding intentional and unintentional scientific misconduct. A systematic review and meta-analysis found that 1.97% of scientists admitted to having fabricated, falsified, or modified data or results at least once; 33.7% admitted to questionable research practices; and up to 72% reported questionable research practices by colleagues. [11] [12] A different paper described awareness of scientific misconduct in 29% of respondents. [13]

Another concern related to research ethics involves the significant delay in applying new knowledge; the adoption of new evidence into practice can take many years. Conversely, treatments that become standard of care quickly but fail to show reproducible findings in subsequent studies can harm patients. This medical reversal should be avoided through clear interpretation of published research findings in a big-picture context. Concerns related to replication, validation, and reliability are covered in many publications. [14]

  • Clinical Significance

One cannot practice medicine, learn about medicine, or conduct medical research without attention to high ethical standards. The four core principles of autonomy, beneficence, non-maleficence, and justice are cornerstones of medical practice. The practice of medicine does not deal in black-and-white, binary decisions. Every decision we make comes with potentially unintended consequences. For example, in the Emergency Department, every decision to prioritize one patient implies a deprioritization of the other patients in the ED. In academic medicine especially, the education of clinicians inherently puts some degree of risk onto the patient. This risk must be acknowledged and then minimized as much as possible.

Medical educators must teach students and residents to consider the whole patient, not just a collection of parts or disease processes. This applies to all caregivers in the health profession. Additionally, race, gender, socioeconomic class, and other differences should not limit access to medical education or optimal health.

The rigorous demands of clinical medicine also involve a tradeoff with the health of the providers themselves. Physicians who do not take care of their own mental and physical health cannot act as models for their patients. This self-care to ensure clinical competency represents an ethical obligation of all of us in clinical medicine. 

Any research in clinical medicine inherently involves the clinical care of real patients. Research goals can never supersede the ethical care of patients. IRBs provide oversight, but each member of research teams must prioritize the ethical treatment of patients in the clinical setting. Of course, informed consent and shared decision-making extend far beyond the realm of research studies. Each patient interaction depends on trust, honesty, and consideration of the patient's welfare and his or her autonomy. 

Like any other clinician, clinicians in academic medicine must always act in the patient's best interests while attending to their own health and well-being. Academic physicians have the added responsibility of teaching the importance of these principles and behaviors to the next generation of clinical providers.

  • Enhancing Healthcare Team Outcomes

The rapidly changing field of medicine, led by academic medicine institutions, will continue to present ethical challenges related to patient care, training, and research. The era of big data brings new challenges regarding privacy, confidentiality, trust, data ownership, fairness, justice, and patient empowerment/autonomy. [15] All professionals in academic medicine must continuously strive to adhere to ethical standards in clinical medicine, medical education, and research.


Disclosure: Martin Huecker declares no relevant financial relationships with ineligible companies.

Disclosure: Jacob Shreffler declares no relevant financial relationships with ineligible companies.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.

  • Cite this Page Huecker MR, Shreffler J. Ethical Issues in Academic Medicine. [Updated 2023 Jun 5]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.

In this Page

Bulk download.

  • Bulk download StatPearls data from FTP

Related information

  • PMC PubMed Central citations
  • PubMed Links to PubMed

Similar articles in PubMed

  • Review Ethical issues in education: Medical trainees and the global health experience. [Best Pract Res Clin Obstet Gyn...] Review Ethical issues in education: Medical trainees and the global health experience. Raine SP. Best Pract Res Clin Obstet Gynaecol. 2017 Aug; 43:115-124. Epub 2017 Mar 15.
  • Review Ethical principles and concepts in medicine. [Handb Clin Neurol. 2013] Review Ethical principles and concepts in medicine. Taylor RM. Handb Clin Neurol. 2013; 118:1-9.
  • Guiding Principles for Pharmaceutical Physicians from the Ethical Issues Committee of the Faculty of Pharmaceutical Medicine of the Royal Colleges of Physicians of the UK. [Int J Clin Pract. 2006] Guiding Principles for Pharmaceutical Physicians from the Ethical Issues Committee of the Faculty of Pharmaceutical Medicine of the Royal Colleges of Physicians of the UK. Bickerstaffe R, Brock P, Husson JM, Rubin I, Bragman K, Paterson K, Sommerville A. Int J Clin Pract. 2006 Feb; 60(2):238-41.
  • Review Culture of Care: Organizational Responsibilities. [Management of Animal Care and ...] Review Culture of Care: Organizational Responsibilities. Brown MJ, Symonowicz C, Medina LV, Bratcher NA, Buckmaster CA, Klein H, Anderson LC. Management of Animal Care and Use Programs in Research, Education, and Testing. 2018
  • Review [Ethical conflicts in emergency medicine]. [Anaesthesist. 1997] Review [Ethical conflicts in emergency medicine]. Mohr M, Kettler D. Anaesthesist. 1997 Apr; 46(4):275-81.

Recent Activity

  • Ethical Issues in Academic Medicine - StatPearls Ethical Issues in Academic Medicine - StatPearls

Your browsing activity is empty.

Activity recording is turned off.

Turn recording back on

Connect with NLM

National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894

Web Policies FOIA HHS Vulnerability Disclosure

Help Accessibility Careers

statistics

IMAGES

  1. Research Ethics: Definition, Principles and Advantages

    medical ethics on research

  2. The ethics of conducting clinical trials in the search for treatments

    medical ethics on research

  3. Clinical Research Ethics

    medical ethics on research

  4. Why Medical Ethics Are Essential For Healthcare Workers

    medical ethics on research

  5. PPT

    medical ethics on research

  6. ETHICS IN NURSING RESEARCH

    medical ethics on research

VIDEO

  1. Medical Ethics MCQs Set D

  2. Medical Ethics MCQs Set A

  3. Medical Ethics MCQs Set C

  4. Medical Ethics (April 23, 2024)

  5. Medical Ethics || L3 ~ By sewar Al-Hwamdeh

  6. Medical Ethics

COMMENTS

  1. Fundamentals of Medical Ethics

    Our hope is that the Fundamentals of Medical Ethics series will suggest broad lessons to keep in mind as physicians, patients, research participants, families, and communities struggle with new ...

  2. Principles of Clinical Ethics and Their Application to Practice

    Bioethics and Clinical (Medical) Ethics. A number of deplorable abuses of human subjects in research, medical interventions without informed consent, experimentation in concentration camps in World War II, along with salutary advances in medicine and medical technology and societal changes, led to the rapid evolution of bioethics from one ...

  3. Ethical Principles for Medical Research and Practice

    The International Code of Medical Ethics. The World Medical Association's International Code of Medical Ethics defines and elucidates the professional duties of physicians towards their patients, other physicians and health professionals, themselves, and society as a whole, in concordance with the WMA's Declaration of Geneva: The Physician's Pledge, and the WMA's entire body of policies

  4. Ethics in medical research: General principles with special reference

    Ethics in medical research deals with the conflicts of interest across various levels. Guidelines have been proposed for standardized ethical practice throughout the globe. The four fundamental principles of ethics which are being underscored are autonomy, non-maleficence, beneficence, and justice. Some special ethical issues have particular ...

  5. Guiding Principles for Ethical Research

    NIH Clinical Center researchers published seven main principles to guide the conduct of ethical research: Social and clinical value. Scientific validity. Fair subject selection. Favorable risk-benefit ratio. Independent review. Informed consent. Respect for potential and enrolled subjects.

  6. Research Ethics

    Research ethics is a foundational principle of modern medical research across all disciplines. The overarching body, the IRB, is intentionally comprised of experts across a range of disciplines that can include ethicists, social workers, physicians, nurses, other scientific researchers, counselors and mental health professionals, and advocates ...

  7. Ethics of Medical Research & Innovation

    The fundamentals of ethical research are steadfast, even for vaccine trials in a pandemic. Opinions from the AMA Code of Medical Ethics outline top-level concerns. Trial design and informed consent take on ethical significance when developing vaccines, even during the urgency of a pandemic such as COVID-19.

  8. PDF International Ethical Guidelines for Health-related Research Involving

    The Council for International Organizations of Medical Sciences (CIOMS) acknowledges the contribution of the Working Group for the revision of the CIOMS Ethical Guidelines. In 2011, the Executive ... CIOMS, in association with WHO, undertook its work on ethics in biomedical research in the late 1970s. Accordingly, CIOMS set out, in cooperation ...

  9. Home page

    BMC Medical Ethics is an open access journal publishing original peer-reviewed research articles in relation to the ethical aspects of biomedical research and clinical practice, including professional choices and conduct, medical technologies, healthcare systems and health policies. We are recruiting new, international Editorial Board Members.

  10. The top 10 most-read medical ethics articles in 2021

    Dec 29, 2021. By Kevin B. O'Reilly, Senior News Editor. Each month, the AMA Journal of Ethics® (@JournalofEthics) gathers insights from physicians and other experts to explore issues in medical ethics that are highly relevant to doctors in practice and the future ...

  11. Ethics in medical research

    Medical research on human subjects must be clinically justified and scientifically sound. Informed consent is a mandatory component of any clinical research. Investigators are obligated to design research protocols that establish standards of scientific integrity, safeguard the ethical and legal rights of human subjects, and ...

  12. Research & Innovation

    The AMA was founded in part to establish the first national code of medical ethics. ... To protect the rights and welfare of participants in research on emergency medical interventions, physician-researchers must ensure that the experimental intervention has a realistic probability of providing benefit equal to or greater than standard care and ...

  13. What Is Ethics in Research and Why Is It Important?

    World Medical Association's Declaration of Helsinki; Ethical Principles. The following is a rough and general summary of some ethical principles that various codes address*: ... Education in research ethics can help people gain a better understanding of ethical standards, policies, and issues and improve ethical judgment and decision making. ...

  14. The Ethics of Clinical Research

    An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics, Hastings Center Report 2013; Spec No: ... Harris, J., 2005. "Scientific research is a moral duty," Journal of Medical Ethics, 31: 242-48. Hayenhjelm, M., and J. Wolff, 2012. "The moral problem of risk impositions: ...

  15. Ensuring ethical standards and procedures for research with human beings

    It is important to adhere to ethical principles in order to protect the dignity, rights and welfare of research participants. As such, all research involving human beings should be reviewed by an ethics committee to ensure that the appropriate ethical standards are being upheld. Discussion of the ethical principles of beneficence, justice and ...

  16. Ethics in Medical Research and Publication

    Medical research involving human subjects must be conducted only by individuals with the appropriate ethics and scientific education, training and qualifications. Research on patients or healthy volunteers requires the supervision of a competent and appropriately qualified physician or other health care professional.

  17. Journal of Medical Ethics (Homepage)

    Journal of Medical Ethics is a leading journal covering the whole field of medical ethics, promoting ethical reflection and conduct in scientific research and medical practice. Impact factor: 4.2. CiteScore: 6.2. Journal of Medical Ethics is an official journal of the Institute of Medical Ethics and a Plan S compliant Transformative Journal.

  18. Medical Research Ethics: Challenges in the 21st Century

    Tomas Zima, David N. Weisstub. Provides a current review of medical research ethics on a global basis. Aims to promote a discussion about controversial and foundational aspects in the field. Concentrates on key areas of medical research where there are core ethical issues. Part of the book series: Philosophy and Medicine (PHME, volume 132)

  19. Journal of Medical Ethics

    Journal of Medical Ethics is a leading international journal that reflects the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features original, full length articles on ethical aspects of health care, as well as brief reports, responses, editorials, and other relevant material.

  20. Scarcity of Medical Ethics Research in Allergy and Immunology: A Review

    Introduction. Medical ethics, defined as the moral principles that guide the conduct of medicine, has a crucial and direct role in the clinical practice of allergy and immunology (A&I). 1 Medical ethics encompasses a broad range of concepts and these are based on the fundamental ethical principles described by Beauchamp and Childress. 2 Autonomy is the capacity to make informed, uncoerced ...

  21. Clinical Ethics

    Clinical Ethics. Institutional Ethics Committee Members: Maria Basile, MD, MBA Phyllis Migdal, MD, MA Stephen G. Post, PhD. The antiquity of the Hippocratic Oath—which enjoins the practitioners of medicine to "never do harm" and to act "for the good of the patient"—makes clear that the professional practice of medicine has always been an ethically motivated endeavor.

  22. Ethics—both Human and Animal—demand We Move Away From Animal Use in

    New paper by Jarrod Bailey, PhD, Director of Medical Research at the Physicians Committee, and co-author Professor Michael Balls, ... They also make the case that human ethics must take a much more prominent role. If biomedical science is not sufficiently human relevant, then it is letting down billions of people relying on science to increase ...

  23. Scarcity of Medical Ethics Research in Allergy and Immunology: A Review

    Medical ethics is relevant to the clinical practice of allergy and immunology regardless of the type of patient, disease state, or practice setting. When engaging in clinical care, performing research, or enacting policies on the accessibility and distribution of healthcare resources, physicians regularly make and justify decisions using the fundamental principles of medical ethics.

  24. Opinion

    To the Editor: Re "Moral Dilemmas in Medical Care" (Opinion guest essay, May 8): It is unsettling, and dismaying, to read Dr. Carl Elliott's account of moral lapses continuing to exist, if ...

  25. Medical Ethics

    Medical ethics is a required element of American physicians' formal training. Familiarity with ethical principles on a basic level is necessary to pass initial medical licensing examinations. However, many healthcare providers (HCPs) are unfamiliar with the ethical principles relevant to modern medical practice, cannot explain how or why medical ethics principles have come to be, or do not integrate ...

  26. Henrietta Lacks and the Controversy Behind Immortal HeLa Cells

    The HeLa strain revolutionized medical research. Jonas Salk used the HeLa strain to develop the polio vaccine, sparking mass interest in the cells. As demand grew, scientists cloned the cells in 1955.

  27. Prospective neuroimaging and neuropsychological evaluation in adults

    Funding Statement: This work was funded by a UK Medical Research Council (MRC) research grant (MR/S00355X/1). Author Declarations: I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained. ... This study was approved by the Northwest Liverpool East Research ...

  28. 30 May 2024. Women and Surgical Ethics (LAB)

    Join Dr Wendy Suffield (Henry Lumley Collections Engagement Grant researcher at Royal College of Surgeons 2023-24) and Dr Claire L. Jones (Senior Lecturer in the History of Medicine, School of Classics, English and History at Kent) for an interactive workshop on women and surgical ethics, past and present.

  29. Ethical Issues in Academic Medicine

    This activity will provide a brief overview of the ethical considerations and responsibilities in academic medicine. Objectives: Explain the importance of adhering to clinical, educational, and research ethics. Explain the criteria institutional review boards consider when reviewing research protocols. Describe the four phases of trial designs ...