
Chapter 3: Research Ethics

From Moral Principles to Ethics Codes

Learning Objectives

  • Describe the history of ethics codes for scientific research with human participants.
  • Summarize the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans—especially as it relates to informed consent, deception, debriefing, research with nonhuman animals, and scholarly integrity.

The general core principles of respect for persons, concern for welfare, and justice provide a useful starting point for thinking about the ethics of psychological research because essentially everyone agrees on them. As we have seen, however, even people who agree on these general principles can disagree about specific ethical issues that arise in the course of conducting research. This discrepancy is why there also exist more detailed and enforceable ethics codes that provide guidance on important issues that arise frequently. In this section, we begin with a brief historical overview of such ethics codes and then look closely at the one that is most relevant to psychological research in Canada—the Tri-Council Policy Statement (TCPS).

Historical Overview

One of the earliest ethics codes was the Nuremberg Code—a set of 10 principles written in 1947 in conjunction with the trials of Nazi physicians accused of shockingly cruel research on concentration camp prisoners during World War II. It provided a standard against which to compare the behaviour of the men on trial—many of whom were eventually convicted and either imprisoned or sentenced to death. The Nuremberg Code was particularly clear about the importance of carefully weighing risks against benefits and the need for informed consent. The Declaration of Helsinki is a similar ethics code that was created by the World Medical Association in 1964. Among the standards that it added to the Nuremberg Code was that research with human participants should be based on a written protocol—a detailed description of the research—that is reviewed by an independent committee. The Declaration of Helsinki has been revised several times, most recently in 2013.

In the United States, concerns about the Tuskegee study and others led to the publication in 1978 of a set of federal guidelines called the Belmont Report. The Belmont Report explicitly recognized the principle of seeking justice, including the importance of conducting research in a way that distributes risks and benefits fairly across different groups at the societal level. The Belmont Report was influential in the formation of national ethical guidelines for research in both the US and Canada.

In Canada, researchers and research institutions must follow the code of ethics formally laid out in the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. The term “Tri-Council” is commonly used to collectively describe the three research granting agencies that are funded by the Canadian federal government: the Social Sciences and Humanities Research Council of Canada (SSHRC), the Canadian Institutes of Health Research (CIHR), and the Natural Sciences and Engineering Research Council of Canada (NSERC). The first edition of the Tri-Council Policy Statement (TCPS) was published in 1998, when it replaced all previous guidelines that had been developed by individual agencies and institutions. In 2010 a second edition (TCPS 2) was published that consolidated the core principles, clarified and updated the guidelines and terminology, and clarified the scope and responsibilities of institutional research ethics boards (REBs). The guidelines in the TCPS 2 are based on the three core principles already discussed: respect for persons, concern for welfare, and justice.

A detailed online tutorial of the specific guidelines contained within the TCPS 2 is available on the following website: TCPS 2

Successful completion of the tutorial takes up to 3 hours and provides the user with a certificate that is now required by many universities and other research institutions before a research proposal will be evaluated by the institution’s research ethics board (REB)—a committee that is responsible for reviewing research protocols for potential ethical problems. An REB must consist of at least five people with varying backgrounds, including at least two members with expertise in relevant research disciplines, at least one member knowledgeable in ethics, and at least one community member who has no affiliation with the institution. The REB helps to ensure that the risks of the proposed research are minimized, that the benefits outweigh the risks, that the research is carried out in a fair manner, and that the informed consent procedure is adequate.

The TCPS 2 includes provisions for two levels of research ethics review. Full REB review is the default requirement for research involving humans. However, for research in which “the probability and magnitude of possible harms implied by participation in the research is no greater than those encountered by participants in those aspects of their everyday life that relate to the research,” the REB may deem it to be a case of minimal risk research [1]. In such cases (as is sometimes true of surveys, questionnaire-based studies, and naturalistic observation studies), the REB may delegate the research ethics review to one or more of its members. A further exception is made for student course-based research, in which case the REB may delegate the review to the relevant department or faculty.

Although researchers and research institutions in Canada formally follow the guidelines detailed within the TCPS 2, students of psychology should also be aware of the ethics code of the American Psychological Association (APA), which shares many principles and procedures with the TCPS 2.

The APA’s Ethical Principles of Psychologists and Code of Conduct (also known as the APA Ethics Code) was first published in 1953 and has been revised several times since then, most recently in 2002. It includes about 150 specific ethical standards that psychologists and their students are expected to follow. Much of the APA Ethics Code concerns the clinical practice of psychology—advertising one’s services, setting and collecting fees, having personal relationships with clients, and so on. For our purposes, the most relevant part is Standard 8: Research and Publication. Unlike the TCPS 2, the APA Ethics Code also covers research with nonhuman animal subjects; in Canada, research on animals is instead overseen by the Canadian Council on Animal Care. Below we consider in more detail some of the most important aspects common to both the TCPS 2 and the APA Ethics Code—informed consent, deception, debriefing, and scholarly integrity.

APA Ethics Code

Standard 8: Research and Publication

8.01 Institutional Approval

When institutional approval is required, psychologists provide accurate information about their research proposals and obtain approval prior to conducting the research. They conduct the research in accordance with the approved research protocol.

8.02 Informed Consent to Research

When obtaining informed consent as required in Standard 3.10, Informed Consent, psychologists inform participants about (1) the purpose of the research, expected duration, and procedures; (2) their right to decline to participate and to withdraw from the research once participation has begun; (3) the foreseeable consequences of declining or withdrawing; (4) reasonably foreseeable factors that may be expected to influence their willingness to participate such as potential risks, discomfort, or adverse effects; (5) any prospective research benefits; (6) limits of confidentiality; (7) incentives for participation; and (8) whom to contact for questions about the research and research participants’ rights. They provide opportunity for the prospective participants to ask questions and receive answers. (See also Standards 8.03, Informed Consent for Recording Voices and Images in Research; 8.05, Dispensing With Informed Consent for Research; and 8.07, Deception in Research.)

Psychologists conducting intervention research involving the use of experimental treatments clarify to participants at the outset of the research (1) the experimental nature of the treatment; (2) the services that will or will not be available to the control group(s) if appropriate; (3) the means by which assignment to treatment and control groups will be made; (4) available treatment alternatives if an individual does not wish to participate in the research or wishes to withdraw once a study has begun; and (5) compensation for or monetary costs of participating including, if appropriate, whether reimbursement from the participant or a third-party payor will be sought. (See also Standard 8.02a, Informed Consent to Research.)

8.03 Informed Consent for Recording Voices and Images in Research

Psychologists obtain informed consent from research participants prior to recording their voices or images for data collection unless (1) the research consists solely of naturalistic observations in public places, and it is not anticipated that the recording will be used in a manner that could cause personal identification or harm, or (2) the research design includes deception, and consent for the use of the recording is obtained during debriefing. (See also Standard 8.07, Deception in Research.)

8.04 Client/Patient, Student, and Subordinate Research Participants

When psychologists conduct research with clients/patients, students, or subordinates as participants, psychologists take steps to protect the prospective participants from adverse consequences of declining or withdrawing from participation.

When research participation is a course requirement or an opportunity for extra credit, the prospective participant is given the choice of equitable alternative activities.

8.05 Dispensing With Informed Consent for Research

Psychologists may dispense with informed consent only (1) where research would not reasonably be assumed to create distress or harm and involves (a) the study of normal educational practices, curricula, or classroom management methods conducted in educational settings; (b) only anonymous questionnaires, naturalistic observations, or archival research for which disclosure of responses would not place participants at risk of criminal or civil liability or damage their financial standing, employability, or reputation, and confidentiality is protected; or (c) the study of factors related to job or organization effectiveness conducted in organizational settings for which there is no risk to participants’ employability, and confidentiality is protected or (2) where otherwise permitted by law or federal or institutional regulations.

8.06 Offering Inducements for Research Participation

Psychologists make reasonable efforts to avoid offering excessive or inappropriate financial or other inducements for research participation when such inducements are likely to coerce participation.

When offering professional services as an inducement for research participation, psychologists clarify the nature of the services, as well as the risks, obligations, and limitations. (See also Standard 6.05, Barter With Clients/Patients.)

8.07 Deception in Research

Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study’s significant prospective scientific, educational, or applied value and that effective nondeceptive alternative procedures are not feasible.

Psychologists do not deceive prospective participants about research that is reasonably expected to cause physical pain or severe emotional distress.

Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data. (See also Standard 8.08, Debriefing.)

8.08 Debriefing

Psychologists provide a prompt opportunity for participants to obtain appropriate information about the nature, results, and conclusions of the research, and they take reasonable steps to correct any misconceptions that participants may have of which the psychologists are aware.

If scientific or humane values justify delaying or withholding this information, psychologists take reasonable measures to reduce the risk of harm.

When psychologists become aware that research procedures have harmed a participant, they take reasonable steps to minimize the harm.

8.09 Humane Care and Use of Animals in Research

Psychologists acquire, care for, use, and dispose of animals in compliance with current federal, state, and local laws and regulations, and with professional standards.

Psychologists trained in research methods and experienced in the care of laboratory animals supervise all procedures involving animals and are responsible for ensuring appropriate consideration of their comfort, health, and humane treatment.

Psychologists ensure that all individuals under their supervision who are using animals have received instruction in research methods and in the care, maintenance, and handling of the species being used, to the extent appropriate to their role. (See also Standard 2.05, Delegation of Work to Others.)

Psychologists make reasonable efforts to minimize the discomfort, infection, illness, and pain of animal subjects.

Psychologists use a procedure subjecting animals to pain, stress, or privation only when an alternative procedure is unavailable and the goal is justified by its prospective scientific, educational, or applied value.

Psychologists perform surgical procedures under appropriate anesthesia and follow techniques to avoid infection and minimize pain during and after surgery.

When it is appropriate that an animal’s life be terminated, psychologists proceed rapidly, with an effort to minimize pain and in accordance with accepted procedures.

8.10 Reporting Research Results

Psychologists do not fabricate data. (See also Standard 5.01a, Avoidance of False or Deceptive Statements.)

If psychologists discover significant errors in their published data, they take reasonable steps to correct such errors in a correction, retraction, erratum, or other appropriate publication means.

8.11 Plagiarism

Psychologists do not present portions of another’s work or data as their own, even if the other work or data source is cited occasionally.

8.12 Publication Credit

Psychologists take responsibility and credit, including authorship credit, only for work they have actually performed or to which they have substantially contributed. (See also Standard 8.12b, Publication Credit.)

Principal authorship and other publication credits accurately reflect the relative scientific or professional contributions of the individuals involved, regardless of their relative status. Mere possession of an institutional position, such as department chair, does not justify authorship credit. Minor contributions to the research or to the writing for publications are acknowledged appropriately, such as in footnotes or in an introductory statement.

Except under exceptional circumstances, a student is listed as principal author on any multiple-authored article that is substantially based on the student’s doctoral dissertation. Faculty advisors discuss publication credit with students as early as feasible and throughout the research and publication process as appropriate. (See also Standard 8.12b, Publication Credit.)

8.13 Duplicate Publication of Data

Psychologists do not publish, as original data, data that have been previously published. This does not preclude republishing data when they are accompanied by proper acknowledgment.

8.14 Sharing Research Data for Verification

After research results are published, psychologists do not withhold the data on which their conclusions are based from other competent professionals who seek to verify the substantive claims through reanalysis and who intend to use such data only for that purpose, provided that the confidentiality of the participants can be protected and unless legal rights concerning proprietary data preclude their release. This does not preclude psychologists from requiring that such individuals or groups be responsible for costs associated with the provision of such information.

Psychologists who request data from other psychologists to verify the substantive claims through reanalysis may use shared data only for the declared purpose. Requesting psychologists obtain prior written agreement for all other uses of the data.

8.15 Reviewers

Psychologists who review material submitted for presentation, publication, grant, or research proposal review respect the confidentiality of and the proprietary rights in such information of those who submitted it.

Source: You can read the full APA Ethics Code here: Ethical Principles of Psychologists and Code of Conduct

Informed Consent

Informed consent means obtaining and documenting people’s agreement to participate in a study, having informed them of everything that might reasonably be expected to affect their decision. Properly informing participants includes explaining the details of the procedure, the risks and benefits of the research, the fact that they have the right to decline to participate or to withdraw from the study, the consequences of doing so, and any legal limits to confidentiality. For example, some jurisdictions require researchers who learn of child abuse or other crimes to report this information to the authorities.

Although the process of obtaining informed consent often involves having participants read and sign a consent form, it is important to understand that this written agreement is not the whole of informed consent. Although having participants read and sign a consent form might be enough when they are competent adults with the necessary ability and motivation, many participants do not actually read consent forms, or read them but do not understand them. For example, participants often mistake consent forms for legal documents and mistakenly believe that by signing them they give up their right to sue the researcher (Mann, 1994) [2]. Even with competent adults, therefore, it is good practice to tell participants about the risks and benefits, demonstrate the procedure, ask them if they have questions, and remind them of their right to withdraw at any time—in addition to having them read and sign a consent form.

Note also that there are situations in which informed consent is not necessary. These include situations in which the research is not expected to cause any harm and the procedure is straightforward or the study is conducted in the context of people’s ordinary activities. For example, if you wanted to sit outside a public building and observe whether people hold the door open for people behind them, you would not need to obtain their informed consent. Similarly, if a professor wanted to compare two legitimate teaching methods across two sections of her research methods course, she would not need to obtain informed consent from her students unless she planned to publish the results in a scientific journal about learning.

Deception

Deception of participants in psychological research can take a variety of forms: misinforming participants about the purpose of a study, using confederates, using phony equipment like Milgram’s shock generator, and presenting participants with false feedback about their performance (e.g., telling them they did poorly on a test when they actually did well). Deception also includes not informing participants of the full design or true purpose of the research even if they are not actively misinformed (Sieber, Iannuzzo, & Rodriguez, 1995) [3]. For example, a study on incidental learning—learning without conscious effort—might involve having participants read through a list of words in preparation for a “memory test” later. Although participants are likely to assume that the memory test will require them to recall the words, it might instead require them to recall the contents of the room or the appearance of the research assistant.

Some researchers have argued that deception of research participants is rarely if ever ethically justified. Among their arguments are that it prevents participants from giving truly informed consent, fails to respect their dignity as human beings, has the potential to upset them, makes them distrustful and therefore less honest in their responding, and damages the reputation of researchers in the field (Baumrind, 1985) [4] .

Note, however, that the TCPS 2 and the APA Ethics Code take a more moderate approach—allowing deception when the benefits of the study outweigh the risks, participants cannot reasonably be expected to be harmed, the research question cannot be answered without the use of deception, and participants are informed about the deception as soon as possible. This approach acknowledges that not all forms of deception are equally harmful. Compare, for example, Milgram’s study, in which he deceived his participants in several significant ways that resulted in their experiencing severe psychological stress, with an incidental learning study in which a “memory test” turns out to be slightly different from what participants were expecting. It also acknowledges that some scientifically and socially important research questions can be difficult or impossible to answer without deceiving participants. Knowing that a study concerns the extent to which they obey authority, act aggressively toward a peer, or help a stranger is likely to change the way people behave so that the results no longer generalize to the real world.

Debriefing

Debriefing is the process of informing research participants as soon as possible of the purpose of the study, revealing any deception, and correcting any other misconceptions they might have as a result of participating. Debriefing also involves minimizing harm that might have occurred. For example, an experiment on the effects of being in a sad mood on memory might involve inducing a sad mood in participants by having them think sad thoughts, watch a sad video, or listen to sad music. Debriefing would be the time to return participants’ moods to normal by having them think happy thoughts, watch a happy video, or listen to happy music.

Scholarly Integrity

Scholarly integrity includes the obvious points that researchers must not fabricate data or plagiarize. Plagiarism means using others’ words or ideas without proper acknowledgement. Proper acknowledgement generally means indicating direct quotations with quotation marks and providing a citation to the source of any quotation or idea used.

The remaining standards make some less obvious but equally important points. Researchers should not publish the same data a second time as though it were new, they should share their data with other researchers, and as peer reviewers they should keep the unpublished research they review confidential. Note that the authors’ names on published research—and the order in which those names appear—should reflect the importance of each person’s contribution to the research. It would be unethical, for example, to include as an author someone who had made only minor contributions to the research (e.g., analyzing some of the data) or for a faculty member to make himself or herself the first author on research that was largely conducted by a student.

Key Takeaways

  • There are several written ethics codes for research with human participants that provide specific guidance on the ethical issues that arise most frequently. These codes include the Nuremberg Code, the Declaration of Helsinki, and the Belmont Report.
  • The Tri-Council Policy Statement is the ethics code followed by researchers and research institutions in Canada.
  • The APA Ethics Code is also an important ethics code for researchers in psychology. It includes many standards that are relevant mainly to clinical practice, but Standard 8 concerns informed consent, deception, debriefing, the use of nonhuman animal subjects, and scholarly integrity in research.
  • Research conducted at universities, hospitals, and other institutions that receive support from the federal government must be reviewed by a research ethics board (REB)—a committee at the institution that reviews research protocols to make sure they conform to ethical standards.
  • Informed consent is the process of obtaining and documenting people’s agreement to participate in a study, having informed them of everything that might reasonably be expected to affect their decision. Although it often involves having them read and sign a consent form, it is not equivalent to reading and signing a consent form.
  • Although some researchers argue that deception of research participants is never ethically justified, the TCPS 2 and the APA Ethics Code allow for its use when the benefits of using it outweigh the risks, participants cannot reasonably be expected to be harmed, there is no way to conduct the study without deception, and participants are informed of the deception as soon as possible.
Exercises

  • Practice: Read the Nuremberg Code, the Belmont Report, and Standard 8 of the APA Ethics Code. List five specific similarities and five specific differences among them.
  • Practice: Complete the online tutorial for the TCPS 2.
  • Discussion: In a study on the effects of disgust on moral judgment, participants were asked to judge the morality of disgusting acts, including people eating a dead pet and passionate kissing between a brother and sister (Haidt, Koller, & Dias, 1993) [5]. If you were on the REB that reviewed this protocol, what concerns would you have with it? Refer to the appropriate core principles in support of your argument.

Notes

  1. Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, & Social Sciences and Humanities Research Council of Canada. (2010). Tri-council policy statement: Ethical conduct for research involving humans. Ottawa: Interagency Secretariat on Research Ethics.
  2. Mann, T. (1994). Informed consent for psychological research: Do subjects comprehend consent forms and understand their legal rights? Psychological Science, 5, 140–143.
  3. Sieber, J. E., Iannuzzo, R., & Rodriguez, B. (1995). Deception methods in psychology: Have they changed in 23 years? Ethics & Behavior, 5, 67–85.
  4. Baumrind, D. (1985). Research using intentional deception: Ethical issues revisited. American Psychologist, 40, 165–174.
  5. Haidt, J., Koller, S. H., & Dias, M. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65, 613–628.

Glossary

Nuremberg Code: A set of ten principles written in 1947 in conjunction with the trials of Nazi physicians that provided a standard against which to compare the behaviour of the men on trial.

Declaration of Helsinki: An ethics code created by the World Medical Association in 1964, which added that research with human participants should be based on a written protocol.

Protocol: A detailed description of the research that is reviewed by an independent committee.

Belmont Report: Published in 1978 in the United States, this report explicitly recognized the principle of seeking justice, including the importance of conducting research in a way that distributes risks and benefits fairly across different groups at the societal level.

Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans: The Canadian code of ethics that must be followed by researchers and research institutions.

Research ethics board (REB): A committee that is responsible for reviewing research protocols for potential ethical problems.

Full REB review: The default level of research ethics review for research involving humans.

Minimal risk research: Research in which the probability and magnitude of possible harms faced by participants is no greater than those encountered in everyday life.

APA Ethics Code: A code first published in 1953 that includes approximately 150 specific ethical standards that psychologists and their students are expected to follow.

Consent form: A document informing participants of the procedure, risks, and benefits of the research, signed during the process of informed consent.

Deception: Includes misinforming participants of the purpose of the study, using confederates, using fake equipment, or presenting false performance feedback.

Debriefing: The process of informing research participants as soon as possible of the purpose of the study, revealing any deception, and correcting any misconceptions they may have as a result of participating.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Share This Book

standard 8.10 reporting research results

Library homepage

  • school Campus Bookshelves
  • menu_book Bookshelves
  • perm_media Learning Objects
  • login Login
  • how_to_reg Request Instructor Account
  • hub Instructor Commons

Margin Size

  • Download Page (PDF)
  • Download Full Book (PDF)
  • Periodic Table
  • Physics Constants
  • Scientific Calculator
  • Reference & Cite
  • Tools expand_more
  • Readability

selected template will load here

This action is not available.

Social Sci LibreTexts

APA Code of Ethics

  • Last updated
  • Save as PDF
  • Page ID 41246

\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)

\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)

\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)

( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)

\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)

\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)

\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)

\( \newcommand{\Span}{\mathrm{span}}\)

\( \newcommand{\id}{\mathrm{id}}\)

\( \newcommand{\kernel}{\mathrm{null}\,}\)

\( \newcommand{\range}{\mathrm{range}\,}\)

\( \newcommand{\RealPart}{\mathrm{Re}}\)

\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)

\( \newcommand{\Argument}{\mathrm{Arg}}\)

\( \newcommand{\norm}[1]{\| #1 \|}\)

\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)

\( \newcommand{\vectorA}[1]{\vec{#1}}      % arrow\)

\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}}      % arrow\)

\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)

\( \newcommand{\vectorC}[1]{\textbf{#1}} \)

\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)

\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)

\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)

3.2 From Moral Principles to Ethics Codes

Learning Objectives

  • Describe the history of ethics codes for scientific research with human participants.
  • Summarize the American Psychological Association Ethics Code—especially as it relates to informed consent, deception, debriefing, research with nonhuman animals, and scholarly integrity.

The general moral principles of weighing risks against benefits, acting with integrity, seeking justice, and respecting people’s rights and dignity provide a useful starting point for thinking about the ethics of psychological research because essentially everyone agrees on them. As we have seen, however, even people who agree on these general principles can disagree about specific ethical issues that arise in the course of conducting research. This discrepancy is why there also exist more detailed and enforceable ethics codes that provide guidance on important issues that arise frequently. In this section, we begin with a brief historical overview of such ethics codes and then look closely at the one that is most relevant to psychological research—that of the American Psychological Association (APA).

Historical Overview

One of the earliest ethics codes was the Nuremberg Code—a set of 10 principles written in 1947 in conjunction with the trials of Nazi physicians accused of shockingly cruel research on concentration camp prisoners during World War II. It provided a standard against which to compare the behavior of the men on trial—many of whom were eventually convicted and either imprisoned or sentenced to death. The Nuremberg Code was particularly clear about the importance of carefully weighing risks against benefits and the need for informed consent. The Declaration of Helsinki is a similar ethics code that was created by the World Medical Council in 1964. Among the standards that it added to the Nuremberg Code was that research with human participants should be based on a written protocol—a detailed description of the research—that is reviewed by an independent committee. The Declaration of Helsinki has been revised several times, most recently in 2013.

In the United States, concerns about the Tuskegee study and others led to the publication in 1978 of a set of federal guidelines called the Belmont Report. The Belmont Report explicitly recognized the principle of seeking justice, including the importance of conducting research in a way that distributes risks and benefits fairly across different groups at the societal level. It also recognized the importance of respect for persons, which translates to the need for informed consent. Finally, it recognized the principle of beneficence, which underscores the importance of maximizing the benefits of research while minimizing harms to participants and society. The Belmont Report became the basis of a set of laws—the Federal Policy for the Protection of Human Subjects—that apply to research conducted, supported, or regulated by the federal government. An extremely important part of these regulations is that universities, hospitals, and other institutions that receive support from the federal government must establish an institutional review board (IRB)—a committee that is responsible for reviewing research protocols for potential ethical problems. An IRB must consist of at least five people with varying backgrounds, including members of different professions, scientists and nonscientists, men and women, and at least one person not otherwise affiliated with the institution. The IRB helps to make sure that the risks of the proposed research are minimized, the benefits outweigh the risks, the research is carried out in a fair manner, and the informed consent procedure is adequate.

The federal regulations also distinguish research that poses three levels of risk. Exempt research includes research on the effectiveness of normal educational activities, the use of standard psychological measures and surveys of a nonsensitive nature that are administered in a way that maintains confidentiality, and research using existing data from public sources. It is called exempt because the regulations do not apply to it. Minimal risk research exposes participants to risks that are no greater than those encountered by healthy people in daily life or during routine physical or psychological examinations. Minimal risk research can receive an expedited review by one member of the IRB or by a separate committee under the authority of the IRB that can only approve minimal risk research. (Many departments of psychology have such separate committees.) Finally, at-risk research poses greater than minimal risk and must be reviewed by the full board of IRB members.
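The three-way triage just described can be summarized as a simple lookup. This is an illustrative sketch only: the risk categories follow the regulations, but the review-path descriptions are paraphrases, and in practice the determination is made by the IRB itself, not by a rule.

```python
# Illustrative sketch: maps the three federal risk categories to their
# typical review paths. Category names follow the regulations; the path
# descriptions paraphrase the text above and are not official terms.
def review_path(risk_level):
    """Return the typical IRB review path for a study's risk category."""
    paths = {
        "exempt": "no IRB review required; the regulations do not apply",
        "minimal risk": "expedited review by one IRB member or a "
                        "designated subcommittee",
        "greater than minimal risk": "review by the full IRB",
    }
    if risk_level not in paths:
        raise ValueError(f"unknown risk category: {risk_level!r}")
    return paths[risk_level]

print(review_path("minimal risk"))
```

For example, a nonsensitive anonymous survey would typically fall under `"exempt"`, while a study inducing more than everyday stress would fall under `"greater than minimal risk"` and go to the full board.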

Ethics Codes

The link that follows the list—from the Office of Human Subjects Research at the National Institutes of Health—allows you to read the ethics codes discussed in this section in their entirety. They are all highly recommended and, with the exception of the Federal Policy, short and easy to read.

  • The Nuremberg Code
  • The Declaration of Helsinki
  • The Belmont Report
  • Federal Policy for the Protection of Human Subjects

http://ohsr.od.nih.gov/guidelines/index.html

APA Ethics Code

The APA’s Ethical Principles of Psychologists and Code of Conduct (also known as the APA Ethics Code) was first published in 1953 and has been revised several times since then, most recently in 2010. It includes about 150 specific ethical standards that psychologists and their students are expected to follow. Much of the APA Ethics Code concerns the clinical practice of psychology—advertising one’s services, setting and collecting fees, having personal relationships with clients, and so on. For our purposes, the most relevant part is Standard 8: Research and Publication. Although Standard 8 is reproduced here in its entirety, we should consider some of its most important aspects—informed consent, deception, debriefing, the use of nonhuman animal subjects, and scholarly integrity—in more detail.

Standard 8: Research and Publication

8.01 Institutional Approval

When institutional approval is required, psychologists provide accurate information about their research proposals and obtain approval prior to conducting the research. They conduct the research in accordance with the approved research protocol.

8.02 Informed Consent to Research

  • When obtaining informed consent as required in Standard 3.10, Informed Consent, psychologists inform participants about (1) the purpose of the research, expected duration, and procedures; (2) their right to decline to participate and to withdraw from the research once participation has begun; (3) the foreseeable consequences of declining or withdrawing; (4) reasonably foreseeable factors that may be expected to influence their willingness to participate such as potential risks, discomfort, or adverse effects; (5) any prospective research benefits; (6) limits of confidentiality; (7) incentives for participation; and (8) whom to contact for questions about the research and research participants’ rights. They provide opportunity for the prospective participants to ask questions and receive answers. (See also Standards 8.03, Informed Consent for Recording Voices and Images in Research; 8.05, Dispensing With Informed Consent for Research; and 8.07, Deception in Research.)
  • Psychologists conducting intervention research involving the use of experimental treatments clarify to participants at the outset of the research (1) the experimental nature of the treatment; (2) the services that will or will not be available to the control group(s) if appropriate; (3) the means by which assignment to treatment and control groups will be made; (4) available treatment alternatives if an individual does not wish to participate in the research or wishes to withdraw once a study has begun; and (5) compensation for or monetary costs of participating including, if appropriate, whether reimbursement from the participant or a third-party payor will be sought. (See also Standard 8.02a, Informed Consent to Research.)

8.03 Informed Consent for Recording Voices and Images in Research

Psychologists obtain informed consent from research participants prior to recording their voices or images for data collection unless (1) the research consists solely of naturalistic observations in public places, and it is not anticipated that the recording will be used in a manner that could cause personal identification or harm, or (2) the research design includes deception, and consent for the use of the recording is obtained during debriefing. (See also Standard 8.07, Deception in Research.)

8.04 Client/Patient, Student, and Subordinate Research Participants

  • When psychologists conduct research with clients/patients, students, or subordinates as participants, psychologists take steps to protect the prospective participants from adverse consequences of declining or withdrawing from participation.
  • When research participation is a course requirement or an opportunity for extra credit, the prospective participant is given the choice of equitable alternative activities.

8.05 Dispensing With Informed Consent for Research

Psychologists may dispense with informed consent only (1) where research would not reasonably be assumed to create distress or harm and involves (a) the study of normal educational practices, curricula, or classroom management methods conducted in educational settings; (b) only anonymous questionnaires, naturalistic observations, or archival research for which disclosure of responses would not place participants at risk of criminal or civil liability or damage their financial standing, employability, or reputation, and confidentiality is protected; or (c) the study of factors related to job or organization effectiveness conducted in organizational settings for which there is no risk to participants’ employability, and confidentiality is protected or (2) where otherwise permitted by law or federal or institutional regulations.

8.06 Offering Inducements for Research Participation

  • Psychologists make reasonable efforts to avoid offering excessive or inappropriate financial or other inducements for research participation when such inducements are likely to coerce participation.
  • When offering professional services as an inducement for research participation, psychologists clarify the nature of the services, as well as the risks, obligations, and limitations. (See also Standard 6.05, Barter With Clients/Patients.)

8.07 Deception in Research

  • Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study’s significant prospective scientific, educational, or applied value and that effective nondeceptive alternative procedures are not feasible.
  • Psychologists do not deceive prospective participants about research that is reasonably expected to cause physical pain or severe emotional distress.
  • Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data. (See also Standard 8.08, Debriefing.)

8.08 Debriefing

  • Psychologists provide a prompt opportunity for participants to obtain appropriate information about the nature, results, and conclusions of the research, and they take reasonable steps to correct any misconceptions that participants may have of which the psychologists are aware.
  • If scientific or humane values justify delaying or withholding this information, psychologists take reasonable measures to reduce the risk of harm.
  • When psychologists become aware that research procedures have harmed a participant, they take reasonable steps to minimize the harm.

8.09 Humane Care and Use of Animals in Research

  • Psychologists acquire, care for, use, and dispose of animals in compliance with current federal, state, and local laws and regulations, and with professional standards.
  • Psychologists trained in research methods and experienced in the care of laboratory animals supervise all procedures involving animals and are responsible for ensuring appropriate consideration of their comfort, health, and humane treatment.
  • Psychologists ensure that all individuals under their supervision who are using animals have received instruction in research methods and in the care, maintenance, and handling of the species being used, to the extent appropriate to their role. (See also Standard 2.05, Delegation of Work to Others.)
  • Psychologists make reasonable efforts to minimize the discomfort, infection, illness, and pain of animal subjects.
  • Psychologists use a procedure subjecting animals to pain, stress, or privation only when an alternative procedure is unavailable and the goal is justified by its prospective scientific, educational, or applied value.
  • Psychologists perform surgical procedures under appropriate anesthesia and follow techniques to avoid infection and minimize pain during and after surgery.
  • When it is appropriate that an animal’s life be terminated, psychologists proceed rapidly, with an effort to minimize pain and in accordance with accepted procedures.

8.10 Reporting Research Results

  • Psychologists do not fabricate data. (See also Standard 5.01a, Avoidance of False or Deceptive Statements.)
  • If psychologists discover significant errors in their published data, they take reasonable steps to correct such errors in a correction, retraction, erratum, or other appropriate publication means.

8.11 Plagiarism

Psychologists do not present portions of another’s work or data as their own, even if the other work or data source is cited occasionally.

8.12 Publication Credit

  • Psychologists take responsibility and credit, including authorship credit, only for work they have actually performed or to which they have substantially contributed. (See also Standard 8.12b, Publication Credit.)
  • Principal authorship and other publication credits accurately reflect the relative scientific or professional contributions of the individuals involved, regardless of their relative status. Mere possession of an institutional position, such as department chair, does not justify authorship credit. Minor contributions to the research or to the writing for publications are acknowledged appropriately, such as in footnotes or in an introductory statement.
  • Except under exceptional circumstances, a student is listed as principal author on any multiple-authored article that is substantially based on the student’s doctoral dissertation. Faculty advisors discuss publication credit with students as early as feasible and throughout the research and publication process as appropriate. (See also Standard 8.12b, Publication Credit.)

8.13 Duplicate Publication of Data

Psychologists do not publish, as original data, data that have been previously published. This does not preclude republishing data when they are accompanied by proper acknowledgment.

8.14 Sharing Research Data for Verification

  • After research results are published, psychologists do not withhold the data on which their conclusions are based from other competent professionals who seek to verify the substantive claims through reanalysis and who intend to use such data only for that purpose, provided that the confidentiality of the participants can be protected and unless legal rights concerning proprietary data preclude their release. This does not preclude psychologists from requiring that such individuals or groups be responsible for costs associated with the provision of such information.
  • Psychologists who request data from other psychologists to verify the substantive claims through reanalysis may use shared data only for the declared purpose. Requesting psychologists obtain prior written agreement for all other uses of the data.

8.15 Reviewers

Psychologists who review material submitted for presentation, publication, grant, or research proposal review respect the confidentiality of and the proprietary rights in such information of those who submitted it.

Source: You can read the full APA Ethics Code at http://www.apa.org/ethics/code/index.aspx .

Informed Consent

Standards 8.02 to 8.05 are about informed consent. Again, informed consent means obtaining and documenting people’s agreement to participate in a study, having informed them of everything that might reasonably be expected to affect their decision. This includes details of the procedure, the risks and benefits of the research, the fact that they have the right to decline to participate or to withdraw from the study, the consequences of doing so, and any legal limits to confidentiality. For example, some states require researchers who learn of child abuse or other crimes to report this information to authorities.

Although the process of obtaining informed consent often involves having participants read and sign a  consent form , it is important to understand that this is not all it is. Although having participants read and sign a consent form might be enough when they are competent adults with the necessary ability and motivation, many participants do not actually read consent forms or read them but do not understand them. For example, participants often mistake consent forms for legal documents and mistakenly believe that by signing them they give up their right to sue the researcher (Mann, 1994). [1] Even with competent adults, therefore, it is good practice to tell participants about the risks and benefits, demonstrate the procedure, ask them if they have questions, and remind them of their right to withdraw at any time—in addition to having them read and sign a consent form.
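The point that consent is a process rather than just a signature can be made concrete as a checklist. This is only an illustrative sketch: the step wording paraphrases the good practices just described and is not official APA language.

```python
# Steps of a thorough informed consent process with competent adults,
# paraphrasing the practices described above (illustrative only; not an
# official APA checklist).
CONSENT_STEPS = (
    "describe the risks and benefits",
    "demonstrate the procedure",
    "invite and answer questions",
    "remind the participant of the right to withdraw at any time",
    "have the participant read and sign the consent form",
)

def outstanding_steps(steps_done):
    """Return the consent steps not yet completed, in order."""
    done = set(steps_done)
    return [step for step in CONSENT_STEPS if step not in done]

# A signed form alone leaves most of the process outstanding.
remaining = outstanding_steps(
    ["have the participant read and sign the consent form"]
)
print(len(remaining))  # the four steps beyond the signature
```

The design choice mirrors the text: the signed form is one item on the list, not the list itself.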

Note also that there are situations in which informed consent is not necessary. These include situations in which the research is not expected to cause any harm and the procedure is straightforward or the study is conducted in the context of people’s ordinary activities. For example, if you wanted to sit outside a public building and observe whether people hold the door open for people behind them, you would not need to obtain their informed consent. Similarly, if a college instructor wanted to compare two legitimate teaching methods across two sections of his research methods course, he would not need to obtain informed consent from his students.

Deception

Deception of participants in psychological research can take a variety of forms: misinforming participants about the purpose of a study, using confederates, using phony equipment like Milgram’s shock generator, and presenting participants with false feedback about their performance (e.g., telling them they did poorly on a test when they actually did well). Deception also includes not informing participants of the full design or true purpose of the research even if they are not actively misinformed (Sieber, Iannuzzo, & Rodriguez, 1995). [2] For example, a study on incidental learning—learning without conscious effort—might involve having participants read through a list of words in preparation for a “memory test” later. Although participants are likely to assume that the memory test will require them to recall the words, it might instead require them to recall the contents of the room or the appearance of the research assistant.

Some researchers have argued that deception of research participants is rarely if ever ethically justified. Among their arguments are that it prevents participants from giving truly informed consent, fails to respect their dignity as human beings, has the potential to upset them, makes them distrustful and therefore less honest in their responding, and damages the reputation of researchers in the field (Baumrind, 1985). [3]

Note, however, that the APA Ethics Code takes a more moderate approach—allowing deception when the benefits of the study outweigh the risks, participants cannot reasonably be expected to be harmed, the research question cannot be answered without the use of deception, and participants are informed about the deception as soon as possible. This approach acknowledges that not all forms of deception are equally bad. Compare, for example, Milgram’s study in which he deceived his participants in several significant ways that resulted in their experiencing severe psychological stress with an incidental learning study in which a “memory test” turns out to be slightly different from what participants were expecting. It also acknowledges that some scientifically and socially important research questions can be difficult or impossible to answer without deceiving participants. Knowing that a study concerns the extent to which they obey authority, act aggressively toward a peer, or help a stranger is likely to change the way people behave so that the results no longer generalize to the real world.

Debriefing

Standard 8.08 is about debriefing. This is the process of informing research participants as soon as possible of the purpose of the study, revealing any deception, and correcting any other misconceptions they might have as a result of participating. Debriefing also involves minimizing harm that might have occurred. For example, an experiment on the effects of being in a sad mood on memory might involve inducing a sad mood in participants by having them think sad thoughts, watch a sad video, and/or listen to sad music. Debriefing would be the time to return participants’ moods to normal by having them think happy thoughts, watch a happy video, or listen to happy music.

Nonhuman Animal Subjects

Standard 8.09 is about the humane treatment and care of nonhuman animal subjects. Although most contemporary research in psychology does not involve nonhuman animal subjects, a significant minority of it does—especially in the study of learning and conditioning, behavioral neuroscience, and the development of drug and surgical therapies for psychological disorders.

The use of nonhuman animal subjects in psychological research is similar to the use of deception in that there are those who argue that it is rarely, if ever, ethically acceptable (Bowd & Shapiro, 1993). [4] Clearly, nonhuman animals are incapable of giving informed consent. Yet they can be subjected to numerous procedures that are likely to cause them suffering. They can be confined, deprived of food and water, subjected to pain, operated on, and ultimately euthanized. (Of course, they can also be observed benignly in natural or zoo-like settings.) Others point out that psychological research on nonhuman animals has resulted in many important benefits to humans, including the development of behavioral therapies for many disorders, more effective pain control methods, and antipsychotic drugs (Miller, 1985). [5] It has also resulted in benefits to nonhuman animals, including alternatives to shooting and poisoning as means of controlling them.

As with deception, the APA acknowledges that the benefits of research on nonhuman animals can outweigh the costs, in which case it is ethically acceptable. However, researchers must use alternative methods when they can. When they cannot, they must acquire and care for their subjects humanely and minimize the harm to them. For more information on the APA’s position on nonhuman animal subjects, see the website of the APA’s Committee on Animal Research and Ethics ( http://www.apa.org/science/leadership/care/index.aspx ).

Scholarly Integrity

Standards 8.10 to 8.15 are about scholarly integrity. These include the obvious points that researchers must not fabricate data or plagiarize. Plagiarism means using others’ words or ideas without proper acknowledgment. Proper acknowledgment generally means indicating direct quotations with quotation marks  and  providing a citation to the source of any quotation or idea used.

The remaining standards make some less obvious but equally important points. Researchers should not publish the same data a second time as though it were new, they should share their data with other researchers, and as peer reviewers they should keep the unpublished research they review confidential. Note that the authors’ names on published research—and the order in which those names appear—should reflect the importance of each person’s contribution to the research. It would be unethical, for example, to include as an author someone who had made only minor contributions to the research (e.g., analyzing some of the data) or for a faculty member to make himself or herself the first author on research that was largely conducted by a student.

[1] Mann, T. (1994). Informed consent for psychological research: Do subjects comprehend consent forms and understand their legal rights?  Psychological Science, 5 , 140–143.

[2] Sieber, J. E., Iannuzzo, R., & Rodriguez, B. (1995). Deception methods in psychology: Have they changed in 23 years?  Ethics & Behavior, 5 , 67–85.

[3] Baumrind, D. (1985). Research using intentional deception: Ethical issues revisited. American Psychologist, 40 , 165–174.

[4] Bowd, A. D., & Shapiro, K. J. (1993). The case against animal laboratory research in psychology.  Journal of Social Issues, 49 , 133–142.

[5] Miller, N. E. (1985). The value of behavioral research on animals.  American Psychologist, 40 , 423–440.

[6] Haidt, J., Koller, S. H., & Dias, M. (1993). Affect, culture, and morality, or is it wrong to eat your dog?  Journal of Personality and Social Psychology, 65 , 613–628.

Key Takeaways

  • There are several written ethics codes for research with human participants that provide specific guidance on the ethical issues that arise most frequently. These codes include the Nuremberg Code, the Declaration of Helsinki, the Belmont Report, and the Federal Policy for the Protection of Human Subjects.
  • The APA Ethics Code is the most important ethics code for researchers in psychology. It includes many standards that are relevant mainly to clinical practice, but  Standard 8  concerns informed consent, deception, debriefing, the use of nonhuman animal subjects, and scholarly integrity in research.
  • Research conducted at universities, hospitals, and other institutions that receive support from the federal government must be reviewed by an institutional review board (IRB)—a committee at the institution that reviews research protocols to make sure they conform to ethical standards.
  • Informed consent is the process of obtaining and documenting people’s agreement to participate in a study, having informed them of everything that might reasonably be expected to affect their decision. Although it often involves having them read and sign a consent form, it is not equivalent to reading and signing a consent form.
  • Although some researchers argue that deception of research participants is never ethically justified, the APA Ethics Code allows for its use when the benefits of using it outweigh the risks, participants cannot reasonably be expected to be harmed, there is no way to conduct the study without deception, and participants are informed of the deception as soon as possible.
  • Practice: Read the Nuremberg Code, the Belmont Report, and  Standard 8 of the APA Ethics Code. List five specific similarities and five specific differences among them.
  • Discussion: In a study on the effects of disgust on moral judgment, participants were asked to judge the morality of disgusting acts, including people eating a dead pet and passionate kissing between a brother and sister (Haidt, Koller, & Dias, 1993). [6] If you were on the IRB that reviewed this protocol, what concerns would you have with it? Refer to the appropriate sections of the APA Ethics Code.

University of Michigan Library: Research Guides

Systematic Reviews

Reporting results.

PRISMA provides a list of items to consider when reporting results. 

  • Study selection: Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram.
  • Study characteristics: For each study, present characteristics for which data were extracted (e.g., study size, PICOs, follow-up period) and provide the citations.
  • Risk of bias within studies: Present data on risk of bias of each study and, if available, any outcome-level assessment.
  • Results of individual studies: For all outcomes considered (benefits or harms), present, for each study, (a) simple summary data for each intervention group and (b) effect estimates and confidence intervals, ideally with a forest plot.
  • Synthesis of results: Present results of each meta-analysis done, including confidence intervals and measures of consistency.
  • Risk of bias across studies: Present results of any assessment of risk of bias across studies.
  • Additional analyses: Give results of additional analyses, if done (e.g., sensitivity or subgroup analyses, meta-regression).
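The "Synthesis of results" item above can be illustrated with a minimal fixed-effect meta-analysis sketch. The study data here are hypothetical; the pooled estimate uses inverse-variance weights, with a 95% confidence interval and I² as the measure of consistency.

```python
import math

# Hypothetical studies: (effect estimate, standard error)
studies = [(0.42, 0.10), (0.31, 0.15), (0.55, 0.12)]

# Inverse-variance (fixed-effect) pooling
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Cochran's Q and I-squared as a measure of consistency across studies
q = sum(w * (est - pooled) ** 2 for (est, _), w in zip(studies, weights))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled estimate: {pooled:.3f}, "
      f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f}), I2: {i_squared:.1f}%")
```

A real synthesis would use a dedicated meta-analysis package and usually a random-effects model; this sketch only shows where the reported confidence interval and consistency measure come from.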

References:

  • Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009 Aug 18;151(4):264-9, W64. doi: 10.7326/0003-4819-151-4-200908180-00135. PMID: 19622511. https://pubmed.ncbi.nlm.nih.gov/19622511/
  • Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med. 2009 Aug 18;151(4):W65-94. doi: 10.7326/0003-4819-151-4-200908180-00136. PMID: 19622512. https://pubmed.ncbi.nlm.nih.gov/19622512/
  • Tricco AC, Lillie E, Zarin W, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Ann Intern Med. 2018 Oct 2;169(7):467-473. doi: 10.7326/M18-0850. PMID: 30178033. https://www.acpjournals.org/doi/epdf/10.7326/M18-0850
  • Flow Diagram Generator: an updated version of the original PRISMA flow generator; includes a downloadable PDF version.
  • Flow Diagram (PRISMA): contains both PDF and Word versions. From PRISMA.

See the EQUATOR network for more guidelines for reporting health research.

As a collaborator on your research team, an informationist can write the methods section of your publication. With an informationist as a co-author you can be confident that the methods section of your paper will meet the relevant PRISMA reporting standards and be replicable by other groups.

Indian J Anaesth, v.60(9); 2016 Sep

Legal and ethical issues in research

Camille Yip

1 Department of Women's Anaesthesia, KK Women's and Children's Hospital, Bukit Timah, Singapore

Nian-Lin Reena Han

2 Division of Clinical Support Services, KK Women's and Children's Hospital, Bukit Timah, Singapore

Ban Leong Sng

3 Anesthesiology and Perioperative Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore

Legal and ethical issues form an important component of modern research, related to both the subject and the researcher. This article briefly reviews the various international guidelines and regulations that exist on issues related to informed consent, confidentiality, the provision of incentives and various forms of research misconduct. The relevant original publications (the Declaration of Helsinki, the Belmont Report, the Council for International Organisations of Medical Sciences/World Health Organisation International Guidelines for Biomedical Research Involving Human Subjects, the World Association of Medical Editors Recommendations on Publication Ethics Policies, the International Committee of Medical Journal Editors recommendations, the CoSE White Paper, and the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use-Good Clinical Practice) form the literature relevant to the ethical and legal aspects of conducting research, which researchers should abide by when conducting translational and clinical research. Researchers should note the major international guidelines and regional differences in legislation; hence, specific ethical advice should be sought from local Ethics Review Committees.

INTRODUCTION

The ethical and legal issues relating to the conduct of clinical research involving human participants have concerned policy makers, lawyers, scientists and clinicians for many years. The Declaration of Helsinki established ethical principles applied to clinical research involving human participants. The purpose of clinical research is to systematically collect and analyse data from which conclusions are drawn, that may be generalisable, so as to improve clinical practice and benefit patients in future. Therefore, it is important to be familiar with Good Clinical Practice (GCP), an international quality standard that is provided by the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH),[ 1 ] or the local version, the GCP of the Central Drugs Standard Control Organization (India's equivalent of the US Food and Drug Administration),[ 2 ] and with local regulatory policy, to ensure that the research is conducted both ethically and legally. In this article, we briefly review the legal and ethical issues pertaining to recruitment of human subjects, the basic principles of informed consent and precautions to be taken during data handling and clinical research publication. Some of the core principles of GCP in research include defining the responsibilities of sponsors and investigators, the consent process, monitoring and auditing procedures, and the protection of human subjects.[ 3 ]

ISSUES RELATED TO THE RESEARCH PARTICIPANTS

The main role of human participants in research is to serve as sources of data. Researchers have a duty to ‘protect the life, health, dignity, integrity, right to self-determination, privacy and confidentiality of personal information of research subjects’.[ 4 ] The Belmont Report also provides an analytical framework for evaluating research using three ethical principles:[ 5 ]

  • Respect for persons – the requirement to acknowledge autonomy and protect those with diminished autonomy
  • Beneficence – first do no harm, maximise possible benefits and minimise possible harms
  • Justice – at the individual and societal levels.

Mistreatment of research subjects is considered research misconduct (no ethical review approval, failure to follow approved protocol, absent or inadequate informed consent, exposure of subjects to physical or psychological harm, exposure of subjects to harm due to unacceptable research practices or failure to maintain confidentiality).[ 6 ] There is also scientific misconduct involving fraud and deception.

Consent, possibility of causing harm

Based on the ICH definition, ‘informed consent is a process by which a subject voluntarily confirms his or her willingness to participate in a particular trial, after having been informed of all aspects of the trial that are relevant to the subject's decision to participate’. As with a standard (therapeutic) intervention that carries certain risks, informed consent – voluntary, freely given and adequately informed – must be sought from participants. However, because the primary purpose is research centred rather than patient centred, additional relevant information must be provided in the informed consent form for clinical trials or research studies. The essential components of informed consent are listed in Table 1 [adapted from the ICH Harmonised Tripartite Guideline, Guideline for Good Clinical Practice E6(R1)].[ 1 ] This information should be delivered in a language and by a method that individual potential subjects can understand,[ 4 ] commonly in the form of a printed Participant Information Sheet. Informed consent is documented by means of a written, signed and dated informed consent form.[ 1 ] Potential subjects must be informed of the right to refuse to participate or to withdraw consent at any time without reprisal and without affecting the patient–physician relationship. There are also general principles regarding risk assessment, scientific requirements, research protocols and registration, the function of ethics committees, the use of placebo, post-trial provisions and research publication.[ 4 ]

Essential components of an informed consent


Special populations

Informed consent may be sought from a legally authorised representative if a potential research subject is incapable of giving informed consent[ 4 ] (children, intellectual impairment). The involvement of such populations must fulfil the requirement that they stand to benefit from the research outcome.[ 4 ] The ‘legally authorised representative’ may be a spouse, close relative, parent, power of attorney or legally appointed guardian. The hierarchy of priority of the representative may be different between different countries and different regions within the same country; hence, local guidelines should be consulted.

Special case: Emergency research

Emergency research studies occur where potential subjects are incapacitated and unable to give informed consent (acute head trauma, cardiac arrest). The Council for International Organisations of Medical Sciences/World Health Organisation guidelines and Declaration of Helsinki make exceptions to the requirement for informed consent in these situations.[ 4 , 7 ] There are minor variations in laws governing the extent to which the exceptions apply.[ 8 ]

Reasonable efforts should be made to find a legal authority to consent. If there is not enough time, an ‘exception to informed consent’ may allow the subject to be enrolled with the prior approval of an ethics committee.[ 7 ] Researchers must then obtain deferred informed consent, as soon as possible, from the subject (once he or she regains capacity) or from the legally authorised representative, for continued participation.[ 4 , 7 ]

Collecting patient information and sensitive personal information, confidentiality maintenance

The Health Insurance Portability and Accountability Act has requirements for informed consent disclosure and standards for electronic exchange, privacy and information security. In the UK, generic legislation is found in the Data Protection Act.[ 9 ]

The International Committee of Medical Journal Editors (ICMJE) recommendations suggest that authors must ensure that non-essential identifying information (names, initials, hospital record numbers) is omitted during data collection and storage wherever possible. Where identifying information is essential for scientific purposes (e.g., clinical photographs), written informed consent must be obtained and the patient must be shown the manuscript before publication. Subjects should also be informed if any potentially identifiable material might become available through media access.
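As a minimal illustration of the de-identification step described above (the record and field names are hypothetical, not taken from the ICMJE recommendations), a collection script might drop direct identifiers before storage:

```python
# Hypothetical record mixing direct identifiers with clinical data
record = {
    "name": "A. Patient",
    "initials": "AP",
    "hospital_record_no": "HRN-00231",
    "age": 54,
    "diagnosis": "atrial fibrillation",
    "outcome": "improved",
}

# Non-essential identifying fields to omit during collection and storage
DIRECT_IDENTIFIERS = {"name", "initials", "hospital_record_no"}

def deidentify(rec: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in rec.items() if k not in DIRECT_IDENTIFIERS}

clean = deidentify(record)
print(clean)
```

Real projects would follow the applicable privacy legislation's full list of identifiers (e.g., dates and geographic detail, not just names and record numbers); this sketch only shows the mechanical step.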

Providing incentives

Cash or other ‘in-kind’ benefits (financial, medical, educational, community benefits) should be made known to subjects when obtaining informed consent, without undue emphasis.[ 7 ] Benefits may serve as appreciation or compensation for time and effort but should not amount to an inducement to participate.[ 10 ] The amount and nature of remuneration should be compared to norms and cultural traditions, and are subject to Ethics Committee review.[ 7 ]

ISSUES RELATED TO THE RESEARCHER

Legal issues pertaining to regulatory bodies

Various regulatory bodies have been constituted to uphold the safety of subjects involved in research. It is imperative to obtain approval from the appropriate regulatory authorities before proceeding with any research. The constitution and types of these bodies vary from nation to nation. Researchers are expected to be aware of these authorities; the bodies pertinent to India are listed in the article “Research methodology II” of this issue.

Avoiding bias, inappropriate research methodology, incorrect reporting and inappropriate use of information

Good, well-designed studies advance medical science. Poorly conducted studies violate the principle of justice: they waste the time and resources of research sponsors, researchers and subjects, and undermine societal trust in scientific enquiry.[ 11 ] The Guidelines for GCP are an international ethical and scientific quality standard for designing, conducting, recording and reporting trials.[ 1 ]

Fraud in research and publication

De novo data invention (fabrication) and manipulation of data (falsification)[ 6 ] constitute serious scientific misconduct. The true prevalence of scientific fraud is difficult to measure (2%–14%).[ 12 ]

Plagiarism and its checking

Plagiarism is the use of others' published and unpublished ideas or intellectual property without attribution or permission, presenting them as new and original rather than derived from an existing source.[ 13 ] Tools such as similarity check[ 14 ] are available to help researchers detect similarities between manuscripts, and such checks should be done before submission.[ 15 ]
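Commercial similarity checkers compare a manuscript against large databases of published work; the underlying idea can be sketched with a toy Jaccard similarity over word n-grams. The texts and the n-gram size here are illustrative only:

```python
def ngrams(text: str, n: int = 3) -> set:
    """Word n-grams of a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Fraction of n-grams the two texts share (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

original = "plagiarism is the use of others ideas without attribution or permission"
suspect = "plagiarism is the use of others ideas presented as new and original"
print(f"similarity: {jaccard_similarity(original, suspect):.2f}")
```

A high score flags overlapping passages for human review; it does not by itself establish plagiarism, since quotation and common phrasing also raise the score.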

Overlapping publications

Duplicate publications violate international copyright laws and waste valuable resources.[ 16 , 17 ] Such publications can distort evidence-based medicine by double-counting of data when inadvertently included in meta-analyses.[ 16 ] This practice could artificially enlarge one's scientific work, distorting apparent productivity and may give an undue advantage when competing for research funding or career advancement.[ 17 ] Examples of these practices include:

Duplicate publication, redundant publication

Publication of a paper that overlaps substantially with one already published, without reference to the previous publication.[ 11 ]

Salami publication

Slicing of data from a single research process into different pieces creating individual manuscripts from each piece to artificially increase the publication volume.[ 16 ]

Such misconduct may lead to retraction of articles. Transparent disclosure is important when submitting papers to journals to declare if the manuscript or related material has been published or submitted elsewhere, so that the editor can decide how to handle the submission or to seek further clarification. Further information on acceptable secondary publication can be found in the ICMJE ‘Recommendations for the Conduct, Reporting, Editing, and Publishing of Scholarly Work in Medical Journals’.

Usually, sponsors and authors are required to sign over certain publication rights to the journal through copyright transfer or a licensing agreement; thereafter, authors should obtain written permission from the journal/publisher if they wish to reuse the published material elsewhere.[ 6 ]

Authorship and its various associations

The ICMJE recommendation lists four criteria of authorship:

  • Substantial contributions to the conception of design of the work, or the acquisition, analysis or interpretation of data for the work
  • Drafting the work or revising it critically for important intellectual content
  • Final approval of the version to be published
  • Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Authors and researchers have an ethical obligation to ensure the accuracy, publication and dissemination of the result of research,[ 4 ] as well as disclosing to publishers relevant corrections, retractions and errata, to protect scientific integrity of published evidence. Every research study involving human subjects must be registered in a publicly accessible database (e.g., ANZCTR [Australia and NZ], ClinicalTrials.gov [US and non-US], CTRI [India]) and the results made publicly available.[ 4 ] Sponsors of clinical trials must allow all study investigators and manuscript authors access to the full study data set and the right to use all study data for publication.[ 5 ] Source documents (containing trial data) and clinical study report (results and interpretation of trial) form part of the essential documentation that must be retained for a length of time prescribed by the applicable local legislation.[ 1 ] The ICMJE is currently proposing a requirement of authors to share with others de-identified individual patient data underlying the results presented in articles published in member journals.[ 18 ]

Those who have contributed to the work but do not meet all four criteria should be acknowledged; some of these activities include provision of administrative support, writing assistance and proofreading. They should have their written permission sought for their names to be published and disclose any potential conflicts of interest.[ 6 ] The Council of Scientific Editors has identified several inappropriate types of authorship, such as guest authorship, honorary or gift authorship and ghost authorship.[ 6 ] Various interventions should be put in place to prevent such fraudulent practices in research.[ 19 ] The list of essential documents for the conduct of a clinical trial is included in other articles of the same issue.

The recent increase in research activities has led to concerns regarding ethical and legal issues. Various guidelines have been formulated by organisations and authorities, which serve as a guide to promote integrity, compliance and ethical standards in the conduct of research. Fraud in research undermines the quality of establishing evidence-based medicine, and interventions should be put in place to prevent such practices. A general overview of ethical and legal principles will enable research to be conducted in accordance with the best practices.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

GUIDANCE FOR CLINICAL TRIAL PROTOCOLS

Why SPIRIT?

The trial protocol provides guidance to individuals conducting the study, serves as the basis for…

SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials)

The SPIRIT 2013 Statement provides evidence-based recommendations for the minimum content of a clinical trial protocol. SPIRIT is widely endorsed as an international standard for trial protocols.

The recommendations are outlined in a 33-item checklist and figure. Each checklist item is detailed in the Explanation & Elaboration paper.

Key Documents

SPIRIT Statement

Explanation and Elaboration


Online trial protocol template

Better content, faster registration.

SEPTRE (SPIRIT Electronic Protocol Tool & Resource) is an innovative web-based tool that simplifies the creation, formatting, and registration of high-quality trial protocols. The SEPTRE protocol tool incorporates automation and the evidence-based SPIRIT guidance to strengthen the quality, efficiency, and transparency of clinical trials.

Please  contact us  for a demo and further details.


News & Updates

  • SPIRIT-Outcomes extension now available in JAMA to guide reporting of outcomes-related information in protocols (December 2022)
  • SPIRIT-Path extension: protocol guidance for reporting cellular and molecular pathology content, Lancet Oncology (October 2021)
  • SPIRIT-PRO explanation and elaboration paper published (July 2021)
  • CONSERVE Statement published in JAMA: guidance for trials modified by COVID-19 and other extenuating circumstances (June 2021)

  • Turkish translation published (April 2021)
  • Japanese translations of SPIRIT-PRO and CONSORT-PRO published (January 2021)
  • SPIRIT-AI guidance now published for trials of artificial intelligence interventions (September 2020)
  • SPIRIT extension for N-of-1 trials (SPENT) published in BMJ (February 2020)
  • SPIRIT-CONSORT extension for clinical trials of artificial intelligence interventions (September 2019)
  • Web-based SEPTRE clinical trial protocol tool is now available (February 2019)
  • SPIRIT-TCM extension published for trials of traditional Chinese medicine (January 2019)
  • French translation [by Cochrane France] (July 2018)
  • Doug Altman, SPIRIT group executive member and legendary giant in improving the quality of research, will be greatly missed
  • SPIRIT-PRO extension published in JAMA: trial protocol guidance for patient-reported outcomes (February 2018)
  • Japanese translation published (December 2017)
  • Italian translation published (February 2017)


last updated: April 1, 2022


Published on 22.5.2024 in Vol 26 (2024)

AI Quality Standards in Health Care: Rapid Umbrella Review

Authors of this article:


  • Craig E Kuziemsky 1, BSc, BCom, PhD
  • Dillon Chrimes 2, BSc, MSc, PhD
  • Simon Minshall 2, BSc, MSc
  • Michael Mannerow 1, BSc
  • Francis Lau 2, BSc, MSc, MBA, PhD

1 MacEwan University, Edmonton, AB, Canada

2 School of Health Information Science, University of Victoria, Victoria, BC, Canada

Corresponding Author:

Craig E Kuziemsky, BSc, BCom, PhD

MacEwan University

10700 104 Avenue

Edmonton, AB, T5J4S2

Phone: 1 7806333290

Email: [email protected]

Background: In recent years, there has been an upwelling of artificial intelligence (AI) studies in the health care literature. During this period, there has been an increasing number of proposed standards to evaluate the quality of health care AI studies.

Objective: This rapid umbrella review examines the use of AI quality standards in a sample of health care AI systematic review articles published over a 36-month period.

Methods: We used a modified version of the Joanna Briggs Institute umbrella review method. Our rapid approach was informed by the practical guide by Tricco and colleagues for conducting rapid reviews. Our search was focused on the MEDLINE database supplemented with Google Scholar. The inclusion criteria were English-language systematic reviews regardless of review type, with mention of AI and health in the abstract, published during a 36-month period. For the synthesis, we summarized the AI quality standards used and issues noted in these reviews drawing on a set of published health care AI standards, harmonized the terms used, and offered guidance to improve the quality of future health care AI studies.

Results: We selected 33 review articles published between 2020 and 2022 in our synthesis. The reviews covered a wide range of objectives, topics, settings, designs, and results. Over 60 AI approaches across different domains were identified with varying levels of detail spanning different AI life cycle stages, making comparisons difficult. Health care AI quality standards were applied in only 39% (13/33) of the reviews and in 14% (25/178) of the original studies from the reviews examined, mostly to appraise their methodological or reporting quality. Only a handful mentioned the transparency, explainability, trustworthiness, ethics, and privacy aspects. A total of 23 AI quality standard–related issues were identified in the reviews. There was a recognized need to standardize the planning, conduct, and reporting of health care AI studies and address their broader societal, ethical, and regulatory implications.

Conclusions: Despite the growing number of AI standards to assess the quality of health care AI studies, they are seldom applied in practice. With increasing desire to adopt AI in different health topics, domains, and settings, practitioners and researchers must stay abreast of and adapt to the evolving landscape of health care AI quality standards and apply these standards to improve the quality of their AI studies.

Introduction

Growth of health care artificial intelligence.

In recent years, there has been an upwelling of artificial intelligence (AI)–based studies in the health care literature. While there have been reported benefits, such as improved prediction accuracy and monitoring of diseases [ 1 ], health care organizations face potential patient safety, ethical, legal, social, and other risks from the adoption of AI approaches [ 2 , 3 ]. A search of the MEDLINE database for the terms “artificial intelligence” and “health” in the abstracts of articles published in 2022 alone returned >1000 results. Even by narrowing it down to systematic review articles, the same search returned dozens of results. These articles cover a wide range of AI approaches applied in different health care contexts, including such topics as the application of machine learning (ML) in skin cancer [ 4 ], use of natural language processing (NLP) to identify atrial fibrillation in electronic health records [ 5 ], image-based AI in inflammatory bowel disease [ 6 ], and predictive modeling of pressure injury in hospitalized patients [ 7 ]. The AI studies reported are also at different AI life cycle stages, from model development, validation, and deployment to evaluation [ 8 ]. Each of these AI life cycle stages can involve different contexts, questions, designs, measures, and outcomes [ 9 ]. With the number of health care AI studies rapidly on the rise, there is a need to evaluate the quality of these studies in different contexts. However, the means to examine the quality of health care AI studies have grown more complex, especially when considering their broader societal and ethical implications [ 10 - 13 ].
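The abstract search described above can be reproduced against PubMed (which indexes MEDLINE) through NCBI's public E-utilities. This is a sketch of composing the query URL only; the [tiab] field tag searches titles and abstracts together, and fetching the actual hit count would require sending the request:

```python
from urllib.parse import urlencode

# Compose a PubMed (MEDLINE) esearch query via NCBI E-utilities.
# [tiab] restricts terms to title/abstract; [dp] restricts the publication date.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": '"artificial intelligence"[tiab] AND "health"[tiab] AND 2022[dp]',
    "retmax": 0,  # we only want the hit count, not the record IDs
}
url = f"{BASE}?{urlencode(params)}"
print(url)
```

Requesting this URL returns XML whose Count element holds the number of matching records; result totals will differ over time as the database grows.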

Coiera et al [ 14 ] described a “replication crisis” in health and biomedical informatics where issues regarding experimental design and reporting of results impede our ability to replicate existing research. Poor replication raises concerns about the quality of published studies as well as the ability to understand how context could impact replication across settings. The replication issue is prevalent in health care AI studies as many are single-setting approaches and we do not know the extent to which they can be translated to other settings or contexts. One solution to address the replication issue in AI studies has been the development of a growing number of AI quality standards. Most prominent are the reporting guidelines from the Enhancing the Quality and Transparency of Health Research (EQUATOR) network [ 15 ]. Examples include the CONSORT-AI (Consolidated Standards of Reporting Trials–Artificial Intelligence) extension for reporting AI clinical trials [ 16 ] and the SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence) extension for reporting AI clinical trial protocols [ 17 ]. Beyond the EQUATOR guidelines, there are also the Minimum Information for Medical AI Reporting standard [ 18 ] and the Minimum Information About Clinical Artificial Intelligence Modeling checklist [ 19 ] on the minimum information needed in published AI studies. These standards mainly focus on the methodological and reporting quality aspects of AI studies to ensure that the published information is rigorous, complete, and transparent.

Need for Health Care AI Standards

However, standard-driven guidance that spans the entire AI life cycle spectrum of design, validation, implementation, and governance is still lacking. The World Health Organization has published 6 ethical principles to guide the use of AI [ 20 ] that cover (1) protecting human autonomy; (2) promoting human well-being and safety and the public interest; (3) ensuring transparency, explainability, and intelligibility; (4) fostering responsibility and accountability; (5) ensuring inclusiveness and equity; and (6) promoting AI that is responsive and sustainable. In a scoping review, Solanki et al [ 21 ] operationalized health care AI ethics through a framework of 6 guidelines that spans the entire AI life cycle of data management, model development, deployment, and monitoring. The National Health Service England has published a best practice guide on getting health care AI right that encompasses a governance framework, addressing data access and protection issues, spreading good innovation, and monitoring uses over time [ 22 ]. To further promote the quality of health care AI, van de Sande et al [ 23 ] have proposed a step-by-step approach with specific AI quality criteria that span the entire AI life cycle from development and implementation to governance.

Despite the aforementioned principles, frameworks, and guidance, there is still widespread variation in the quality of published AI studies in the health care literature. For example, 2 systematic reviews of 152 prediction and 28 diagnosis studies showed poor methodological and reporting quality that made it difficult to replicate, assess, and interpret the study findings [ 24 , 25 ]. The recent shifts beyond study quality to broader ethical, equity, and regulatory issues have also raised additional challenges for AI practitioners and researchers regarding the impact, transparency, trustworthiness, and accountability of the AI studies involved [ 13 , 26 - 28 ]. Increasingly, we are also seeing reports of various types of AI implementation issues [ 2 ]. There is a growing gap between the expected and actual quality and performance of health care AI that needs to be addressed. We suggest that the underlying issue is a lack of awareness and use of existing principles, frameworks, and guidance in health care AI studies.

This rapid umbrella review addressed the aforementioned issues by focusing on the principles and frameworks for health care AI design, implementation, and governance. We analyzed and synthesized the use of AI quality standards as reported in a sample of published health care AI systematic review articles. In this paper, AI quality standards are defined as guidelines, criteria, checklists, statements, guiding principles, or framework components used to evaluate the quality of health care AI studies in different domains and life cycle stages. In this context, quality covers the trustworthiness, methodological, reporting, and technical aspects of health care AI studies. Domains refer to the disciplines, branches, or areas in which AI can be found or applied, such as computer science, medicine, and robotics. The findings from this review can help address the growing need for AI practitioners and researchers to navigate the increasingly complex landscape of AI quality standards to plan, conduct, evaluate, and report health care AI studies.

With the increasing volume of systematic review articles that appear in the health care literature each year, an umbrella review has become a popular and timely approach to synthesize knowledge from published systematic reviews on a given topic. For this paper, we drew on the umbrella review method in the typology of systematic reviews for synthesizing evidence in health care by MacEntee [ 29 ]. In this typology, umbrella reviews are used to synthesize multiple systematic reviews from different sources into a summarized form to address a specific topic. We used a modified version of the Joanna Briggs Institute (JBI) umbrella review method to tailor the process, including developing an umbrella review protocol, applying a rapid approach, and eliminating duplicate original studies [ 30 ]. Our rapid approach was informed by the practical guide to conducting rapid reviews in the areas of database selection, topic refinement, searching, study selection, data extraction, and synthesis by Tricco et al [ 31 ]. A PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram of our review process is shown in Figure 1 [ 32 ]. A PRISMA checklist is provided in Multimedia Appendix 1 [ 32 ].


Objective and Questions

The objective of this rapid umbrella review was to examine the use of AI quality standards based on a sample of published health care AI systematic reviews. Specifically, our questions were as follows:

  • What AI quality standards have been applied to evaluate the quality of health care AI studies?
  • What key quality standard–related issues are noted in these reviews?
  • What guidance can be offered to improve the quality of health care AI studies through the incorporation of AI quality standards?

Search Strategy

Our search strategy focused on the MEDLINE database supplemented with Google Scholar. Our search terms consisted of "artificial intelligence" or "AI," "health," and "systematic review" mentioned in the abstract (refer to Multimedia Appendix 2 for the search strings used). We used the .tw search field tag as it searches the title and abstract as well as fields such as Medical Subject Heading terms and subheadings. Our rationale for limiting the search to MEDLINE with simple terms was to keep the process manageable, given the huge volume of health care AI–related literature reviews that have appeared in the last few years, especially on COVID-19. One author conducted the MEDLINE and Google Scholar searches with assistance from an academic librarian. For Google Scholar, we restricted the search to the first 100 citations returned.
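
The boolean structure of such a search can be sketched programmatically. The snippet below is illustrative only: the actual search strings are in Multimedia Appendix 2, and the term groups and `.tw` tagging here are assumptions based on the terms named above.

```python
# Illustrative sketch of the boolean structure of the MEDLINE search.
# The actual search strings are in Multimedia Appendix 2; the term
# groups below are assumptions based on the terms named in the text.
def build_query(synonym_groups):
    """OR together synonyms within each group, then AND the groups."""
    clauses = (
        "(" + " OR ".join(f'"{term}".tw' for term in group) + ")"
        for group in synonym_groups
    )
    return " AND ".join(clauses)

query = build_query([
    ["artificial intelligence", "AI"],
    ["health"],
    ["systematic review"],
])
```

This yields one conjunctive query in which each concept is represented by a parenthesized group of synonyms.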

Inclusion Criteria

We considered all English-language systematic review articles published over a 36-month period from January 1, 2020, to December 31, 2022. The review could be of any type: systematic review, meta-analysis, narrative review, qualitative review, scoping review, meta-synthesis, realist review, or umbrella review, as defined in the review typology by MacEntee [ 29 ]. The overarching inclusion criteria were AI and health as the focus. To be considered for inclusion, the review articles had to meet the following criteria:

  • Each original study in the review describes an AI approach (a model, method, algorithm, technique, or intervention) that is proposed, designed, implemented, or evaluated within a health care context to address a particular health care problem or topic area.
  • We define AI as the approximation of human intelligence in machines, comprising learning, reasoning, and logic [ 33 ]. Within that approximation, AI has different levels of adaptivity and autonomy. Weak AI requires supervised or reinforcement learning with human intervention to adapt to the environment and has low autonomous interaction. Strong AI is highly adaptive and highly autonomous via unsupervised learning, with no human intervention.
  • We looked through all the articles, and our health care context categorization was informed by the stated settings (eg, hospital) and purpose (eg, diagnosis) mentioned in the included reviews.
  • The review can include all types of AI approaches, such as ML, NLP, speech recognition, prediction models, neural networks, intelligent robotics, and AI-assisted and automated medical devices.
  • The review must contain sufficient detail on the original AI studies, covering their objectives, contexts, study designs, AI approaches, measures, outcomes, and reference sources.

Exclusion Criteria

We excluded articles if any one of the following applied:

  • Review articles published before January 1, 2020; not accessible in web-based format; or containing only an abstract
  • Review articles in languages other than English
  • Earlier versions of the review article with the same title or topic by the same authors
  • Context not health care–related, such as electronic commerce or smart manufacturing
  • The AI studies not containing sufficient detail on their purpose, features, or reference sources
  • Studies including multiple forms of digital health technologies besides AI, such as telehealth, personal health records, or communication tools

Review Article Selection

One author conducted the literature searches and retrieved the citations after eliminating duplicates. The author then screened the citation titles and abstracts against the inclusion and exclusion criteria. Those that met the inclusion criteria were retrieved for full-text review independently by 2 other authors. Any disagreements in final article selection were resolved through consensus between the 2 authors or with a third author. The excluded articles and the reasons for their exclusion were logged.

Quality Appraisal

In total, 2 authors applied the JBI critical appraisal checklist independently to appraise the quality of the selected reviews [ 30 ]. The checklist has 11 questions that allow for yes , no , unclear , or not applicable as the response. The questions cover the areas of review question, inclusion criteria, search strategy and sources, appraisal criteria used, use of multiple reviewers, methods of minimizing data extraction errors and combining studies, publication bias, and recommendations supported by data. The reviews were ranked as high, medium, and low quality based on their JBI critical appraisal score (≥0.75 was high quality, ≥0.5 and <0.75 was medium quality, and <0.5 was low quality). All low-quality reviews were excluded from the final synthesis.
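
The scoring and banding rule described above can be sketched as follows; treating "not applicable" responses as excluded from the denominator is our assumption, not something the checklist mandates.

```python
# Sketch of the JBI appraisal scoring and quality banding described above.
# Assumption: "not applicable" responses are excluded from the denominator.
def jbi_score(responses):
    """Fraction of applicable checklist questions answered 'yes'."""
    applicable = [r for r in responses if r != "not applicable"]
    return sum(r == "yes" for r in applicable) / len(applicable)

def quality_band(score):
    """Band a JBI score: >=0.75 high, >=0.5 medium, otherwise low."""
    if score >= 0.75:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"
```

For example, 9 "yes" answers out of 11 applicable questions give a score of about 0.82, which falls in the high-quality band, whereas a score of 0.36 (the lowest observed in this review) falls in the low-quality band.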

Data Extraction

One author extracted data from selected review articles using a predefined template. A second author validated all the articles for correctness and completeness. As this review was focused on AI quality standards, we extracted data that were relevant to this topic. We created a spreadsheet template with the following data fields to guide data extraction:

  • Author, year, and reference: first author last name, publication year, and reference number
  • URL: the URL where the review article can be found
  • Objective or topic: objective or topic being addressed by the review article
  • Type: type of review reported (eg, systematic review, meta-analysis, or scoping review)
  • Sources: bibliographic databases used to find the primary studies reported in the review article
  • Years: period of the primary studies covered by the review article
  • Studies: total number of primary studies included in the review article
  • Countries: countries where the studies were conducted
  • Settings: study settings reported in the primary studies of the review article
  • Participants: number and types of individuals being studied as reported in the review article
  • AI approaches: the type of AI model, method, algorithm, technique, tool, or intervention described in the review article
  • Life cycle and design: the stage or design of the AI study in the AI life cycle in the primary studies being reported, such as requirements, design, implementation, monitoring, experimental, observational, training-test-validation, or controlled trial
  • Appraisal: quality assessment of the primary studies using predefined criteria (eg, risk of bias)
  • Rating: quality assessment results of the primary studies reported in the review article
  • Measures: performance criteria reported in the review article (eg, mortality, accuracy, and resource use)
  • Analysis: methods used to summarize the primary study results (eg, narrative or quantitative)
  • Results: aggregate findings from the primary studies in the review article
  • Standards: name of the quality standards mentioned in the review article
  • Comments: issues mentioned in the review article relevant to our synthesis
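
A template like this can also be enforced in code so that every extracted value lands in a defined column. The sketch below is hypothetical; the field names paraphrase the list above, and the helper function is ours, not part of the review protocol.

```python
# Hypothetical sketch of the extraction template; the field names
# paraphrase the data fields listed above.
TEMPLATE_FIELDS = [
    "author_year_reference", "url", "objective", "type", "sources", "years",
    "studies", "countries", "settings", "participants", "ai_approaches",
    "lifecycle_and_design", "appraisal", "rating", "measures", "analysis",
    "results", "standards", "comments",
]

def new_row(**values):
    """Create a blank extraction row; reject unknown field names so
    that no extracted value falls outside the template."""
    row = dict.fromkeys(TEMPLATE_FIELDS, "")
    for field, value in values.items():
        if field not in row:
            raise KeyError(f"not a template field: {field}")
        row[field] = value
    return row
```

Validation by the second author then amounts to checking each populated row against the source article rather than reconciling free-form notes.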

Removing Duplicate AI Studies

We identified all unique AI studies across the selected reviews after eliminating duplicates that appeared in them. We then retrieved the full-text articles for a 10% sample (every tenth) of these unique studies and searched them for mention of AI quality standard–related terms. This step ensured that all relevant AI quality standards were accounted for even if the reviews themselves did not mention them.
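
The deduplication and every-tenth audit can be sketched as follows. Keying studies on first author, year, and title is our assumption about how duplicates were matched; the sample records are invented for illustration.

```python
# Sketch of duplicate removal across reviews and the every-tenth audit.
# Assumption: duplicates are matched on (first author, year, title).
def unique_studies(reviews):
    """Collapse primary studies that appear in more than one review."""
    seen = {}
    for review in reviews:
        for study in review["studies"]:
            key = (study["first_author"].lower(), study["year"],
                   study["title"].lower())
            seen.setdefault(key, study)
    return list(seen.values())

def audit_sample(studies):
    """Every tenth unique study, for the full-text standards check."""
    return studies[::10]

# Invented example: the same Smith 2021 study cited by two reviews.
reviews = [
    {"studies": [
        {"first_author": "Smith", "year": 2021, "title": "ML for sepsis"},
        {"first_author": "Lee", "year": 2020, "title": "NLP for notes"},
    ]},
    {"studies": [
        {"first_author": "smith", "year": 2021, "title": "ML for Sepsis"},
        {"first_author": "Chan", "year": 2022, "title": "DL imaging"},
    ]},
]
```

Case-insensitive matching collapses the duplicated Smith study, leaving 3 unique records of which 1 falls into the audit sample.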

Analysis and Synthesis

Our analysis was based on a set of recent publications on health care AI standards. These include (1) the AI life cycle step-by-step approach by van de Sande et al [ 23 ] with a list of AI quality standards as benchmarks, (2) the reporting guidelines by Shelmerdine et al [ 15 ] with specific standards for different AI-based clinical studies, (3) the international standards for evaluating health care AI by Wenzel and Wiegand [ 26 ], and (4) the broader requirements for trustworthy health care AI across the entire life cycle stages by the National Academy of Medicine (NAM) [ 8 ] and the European Union Commission (EUC) [ 34 ]. As part of the synthesis, we created a conceptual organizing scheme drawing on published literature on AI domains and approaches to visualize their relationships (via a Euler diagram) [ 35 ]. All analyses and syntheses were conducted by one author and then validated by another to resolve differences.

For the analysis, we (1) extracted key characteristics of the selected reviews based on our predefined template; (2) summarized the AI approaches, life cycle stages, and quality standards mentioned in the reviews; (3) extracted any additional AI quality standards mentioned in the 10% sample of unique AI studies from the selected reviews; and (4) identified AI quality standard–related issues reported.

For the synthesis, we (1) mapped the AI approaches to our conceptual organizing scheme, visualized their relationships with the AI domains and health topics found, and described the challenges in harmonizing these terms; (2) established key themes from the AI quality standard issues identified and mapped them to the NAM and EUC frameworks [ 8 , 34 ]; and (3) created a summary list of the AI quality standards found and mapped them to the life cycle phases by van de Sande et al [ 23 ].

Drawing on these findings, we proposed a set of guidelines that can enhance the quality of future health care AI studies and described their practice, policy, and research implications. Finally, we identified the limitations of this rapid umbrella review as caveats for the readers to consider. As health care, AI, and standards are replete with industry terminologies, we used acronyms where these terms are mentioned in the paper and compiled an alphabetical list of acronyms with their spelled-out forms at the end of the paper.

Summary of Included Reviews

We found 69 health care AI systematic review articles published between 2020 and 2022, of which 35 (51%) met the inclusion criteria. The included articles covered different review types, topics, settings, numbers of studies, designs, participants, AI approaches, and performance measures (refer to Multimedia Appendix 3 [ 36 - 68 ] for the review characteristics). We excluded the remaining 49% (34/69) of the articles because they (1) covered multiple technologies (eg, telehealth), (2) had insufficient detail, (3) were not specific to health care, or (4) were not in English (refer to Multimedia Appendix 4 for the excluded reviews and reasons). The quality of these reviews ranged from JBI critical appraisal scores of 0.36 to 1.0, with 49% (17/35) rated as high quality, 40% (14/35) rated as medium quality, and 6% (2/35) rated as low quality ( Multimedia Appendix 5 [ 36 - 68 ]). A total of 6% (2/35) of the reviews were excluded for their low JBI scores [ 69 , 70 ], leaving a sample of 33 reviews for the final synthesis.

Regarding review types, most (23/33, 70%) were systematic reviews [ 37 - 40 , 45 - 51 , 53 - 57 , 59 - 64 , 66 , 67 ], and the rest were scoping reviews [ 36 , 41 - 44 , 52 , 58 , 65 , 68 ]. Of these, 1 (3%) was also a meta-analysis [ 38 ], and another was a rapid review [ 61 ]. Regarding health topics, the reviews spanned a wide range of specific health conditions, disciplines, areas, and practices. Examples of conditions were COVID-19 [ 36 , 37 , 49 , 51 , 56 , 62 , 66 ], mental health [ 48 , 65 , 68 ], infection [ 50 , 59 , 66 ], melanoma [ 57 ], and hypoglycemia [ 67 ]. Examples of disciplines were public health [ 36 , 37 , 56 , 66 ], nursing [ 42 , 43 , 61 ], rehabilitation [ 52 , 64 ], and dentistry [ 55 , 63 ]. Areas included mobile health and wearables [ 41 , 52 , 54 , 65 ], surveillance and remote monitoring [ 51 , 61 , 66 ], robotic surgeries [ 47 ], and biobanks [ 39 ]. Practices included diagnosis [ 37 , 47 , 49 , 58 , 59 , 62 ], prevention [ 47 ], prediction [ 36 , 38 , 49 , 50 , 57 ], disease management [ 41 , 46 , 47 , 58 ], and administration [ 42 ]. Regarding settings, less than half (12/33, 36%) were explicit about their health care settings, which included multiple sources [ 36 , 42 , 43 , 50 , 54 , 61 ], hospitals [ 45 , 49 ], communities [ 44 , 51 , 58 ], and social media groups [ 48 ]. The number of included studies ranged from 8 on hypoglycemia [ 67 ] to 794 on COVID-19 [ 49 ]. Regarding designs, most were performance assessment studies using secondary data sources such as intensive care unit [ 38 ], imaging [ 37 , 62 , 63 ], and biobank [ 39 ] databases. Regarding participants, they included patients, health care providers, educators, students, simulated cases, and social media users. Less than one-quarter of the reviews (8/33, 24%) mentioned sample sizes, which ranged from 11 adults [ 44 ] to 1,547,677 electronic medical records [ 40 ] (refer to Multimedia Appendix 3 for details).

Regarding AI approaches, there were >60 types of AI models, methods, algorithms, tools, and techniques mentioned in varying levels of detail across the broad AI domains of computer science, data science with and without NLP, and robotics. The main AI approaches were ML and deep learning (DL), with support vector machine, convolutional neural network, neural network, logistic regression, and random forest being mentioned the most (refer to the next section for details). The performance measures covered a wide range of metrics, such as diagnostic and prognostic accuracies (eg, sensitivity, specificity, accuracy, and area under the curve) [ 37 - 40 , 46 - 48 , 53 , 57 , 59 , 63 , 67 ], resource use (eg, whether an intensive care unit stay was necessary, length of stay, and cost) [ 37 , 58 , 62 ], and clinical outcomes (eg, COVID-19 severity, mortality, and behavior change) [ 36 , 37 , 49 , 56 , 62 , 65 ]. A few reviews (6/33, 18%) focused on the extent of the socioethical guidelines addressed [ 44 , 51 , 55 , 58 , 66 , 68 ]. Regarding life cycle stages, different schemes were applied, including preprocessing and classification [ 48 , 57 ], data preparation-preprocessing [ 37 , 38 ], different stages of adoption (eg, knowledge, persuasion, decision making, implementation) [ 44 ], conceptual research [ 42 ], model development [ 36 , 37 , 40 , 42 , 45 , 46 , 50 - 56 , 58 - 64 , 66 , 67 ], design [ 43 ], training and testing [ 38 , 42 , 45 , 50 - 53 , 58 , 61 - 64 ], validation [ 36 - 38 , 40 , 45 , 46 , 50 , 51 , 53 , 55 , 56 , 58 - 64 , 67 ], pilot trials [ 65 ], public engagement [ 68 ], implementation [ 42 , 44 , 60 - 62 , 66 , 68 ], confirmation [ 44 ], and evaluation [ 42 , 43 , 53 , 60 - 62 , 65 ] (refer to Multimedia Appendix 3 for details). It is worth noting that the period covered for our review did not include any studies on large language models (LLMs). LLM studies became more prevalent in the literature in the period just after our review.

Use of Quality Standards in Health Care AI Studies

To make sense of the different AI approaches mentioned, we used a Euler diagram [ 71 ] as a conceptual organizing scheme to visualize their relationships with AI domains and health topics ( Figure 2 [ 36 , 41 - 43 , 47 , 48 , 51 - 54 , 56 - 58 , 60 , 62 , 65 , 67 ]). The Euler diagram shows that AI broadly comprised approaches in the potentially overlapping domains of computer science, data science with and without NLP, and robotics. The main AI approaches were ML and DL, with DL being a more advanced form of ML that uses artificial neural networks [ 33 ]. The diagram also shows that AI can exist without ML and DL (eg, decision trees and expert systems). There were also outliers in these domains with borderline AI-like approaches mostly intended to enhance human-computer interactions, such as social robotics [ 42 , 43 ], robotic-assisted surgery [ 47 ], and exoskeletons [ 54 ]. The health topics in our reviews spanned the AI domains, with most falling within data science with or without NLP, followed by computer science (mostly for communication, database, and other functional support) and robotics (for enhanced social interactions that may or may not be AI driven). Borderline AI approaches included programmed social robotics [ 42 , 43 ] and AI-enhanced social robots [ 54 ], which centered on social robotic programming without the use of ML or DL, as well as virtual reality [ 60 ] and wearable sensors [ 65 , 66 , 68 ].
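
The organizing scheme can be viewed as overlapping sets in which a topic is "borderline AI" when it sits inside an AI domain but uses neither ML nor DL. The toy memberships below are illustrative examples, not the actual Figure 2 assignments.

```python
# Toy set-based view of the Euler-diagram organizing scheme.
# Memberships are illustrative, not the actual Figure 2 assignments.
domains = {
    "data science (with/without NLP)": {"mortality prediction",
                                        "melanoma imaging"},
    "computer science": {"clinical database support"},
    "robotics": {"social robot", "robotic-assisted surgery"},
}
uses_ml_or_dl = {"mortality prediction", "melanoma imaging"}

def is_borderline(topic):
    """A topic inside an AI domain that uses neither ML nor DL is
    treated here as borderline AI."""
    in_domain = any(topic in members for members in domains.values())
    return in_domain and topic not in uses_ml_or_dl
```

Under this toy assignment, a programmed social robot is borderline AI, while an ML-driven melanoma imaging study is not.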

Regarding AI life cycle stages, we harmonized the different terms used in the original studies by mapping them to the 5 life cycle phases by van de Sande et al [ 23 ]: 0 (preparation), I (model development), II (performance assessment), III (clinical testing), and IV (implementation). Most AI studies in the reviews mapped to the first 3 phases. These studies would typically describe the development and performance of the AI approach on a given health topic in a specific domain and setting, including their validation, sometimes against external data sets [ 36 , 38 ]. A small number of reviews reported AI studies that were at the clinical testing phase [ 60 , 61 , 66 , 68 ]. A total of 7 studies were described as being in the implementation phase [ 66 , 68 ]. On the basis of the descriptions provided, few of the AI approaches in the reviewed studies had been adopted for routine use in clinical settings [ 66 , 68 ] with quantifiable improvements in health outcomes (refer to Multimedia Appendix 6 [ 36 - 68 ] for details).
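
This harmonization amounts to a many-to-one mapping from heterogeneous stage labels onto the 5 phases. The synonym sets below are a sketch drawn from the labels reported earlier, not an exhaustive dictionary.

```python
# Sketch of mapping heterogeneous life cycle labels onto the 5 phases
# of van de Sande et al. The synonym sets are examples, not exhaustive.
PHASES = {
    "0 preparation": {"conceptual research", "data preparation",
                      "preprocessing"},
    "I model development": {"model development", "design", "training"},
    "II performance assessment": {"testing", "validation",
                                  "external validation"},
    "III clinical testing": {"pilot trial", "clinical testing"},
    "IV implementation": {"implementation", "monitoring", "evaluation"},
}

def harmonize(stage_label):
    """Return the phase for a reported stage label, or None if the
    label does not match any known synonym."""
    label = stage_label.strip().lower()
    for phase, synonyms in PHASES.items():
        if label in synonyms:
            return phase
    return None
```

Labels that map to nothing surface as `None` and can then be resolved by a second reviewer, consistent with the validation step described in the Analysis and Synthesis section.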

Regarding AI quality standards, only 39% (13/33) of the reviews applied specific AI quality standards in their results [ 37 - 40 , 45 , 46 , 50 , 54 , 58 , 59 , 61 , 63 , 66 ], and 12% (4/33) mentioned the need for standards [ 55 , 63 , 68 ]. These included the Prediction Model Risk of Bias Assessment Tool [ 37 , 38 , 58 , 59 ], Newcastle-Ottawa Scale [ 39 , 50 ], Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modeling Studies [ 38 , 59 ], Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis–Machine Learning Extension [ 50 ], levels of evidence [ 61 ], Critical Appraisal Skills Program Clinical Prediction Rule Checklist [ 40 ], Mixed Methods Appraisal Tool [ 66 ], and CONSORT-AI [ 54 ]. Another review applied 7 design justice principles as the criteria to appraise the quality of their AI studies [ 68 ]. There were also broader-level standards mentioned. These included the European Union ethical guidelines for trustworthy AI [ 44 ]; international AI standards from the International Organization for Standardization (ISO); and AI policy guidelines from the United States, Russia, and China [ 46 ] (refer to Multimedia Appendix 6 for details). We updated the Euler diagram ( Figure 2 [ 36 , 41 - 43 , 47 , 48 , 51 - 54 , 56 - 58 , 60 , 62 , 65 , 67 ]) to show in red the health topics in reviews with no mention of specific AI standards.


Of the 178 unique original AI studies from the selected reviews that were examined, only 25 (14%) mentioned the use of or need for specific AI quality standards (refer to Multimedia Appendix 7 [ 36 - 68 ] for details). They were of 6 types: (1) reporting—COREQ (Consolidated Criteria for Reporting Qualitative Research), Strengthening the Reporting of Observational Studies in Epidemiology, Standards for Reporting Diagnostic Accuracy Studies, PRISMA, and EQUATOR; (2) data—Unified Medical Language System, Food and Drug Administration (FDA) Adverse Event Reporting System, MedEx, RxNorm, Medical Dictionary for Regulatory Activities, and PCORnet; (3) technical—ISO-12207, FDA Software as a Medical Device, EU-Scholarly Publishing and Academic Resources Coalition, Sensor Web Enablement, Open Geospatial Consortium, Sensor Observation Service, and the American Medical Association AI recommendations; (4) robotics—ISO-13482 and ISO and TC-299; (5) ethics—Helsinki Declaration and European Union AI Watch; and (6) regulations—Health Insurance Portability and Accountability Act (HIPAA) and World Health Organization World Economic Forum. These standards were added to the list of AI quality standards mentioned by review in Multimedia Appendix 6 .

A summary of the harmonized AI topics, approaches, domains, the life cycle phases by van de Sande et al [ 23 ], and quality standards derived from our 33 reviews and the 10% sample of unique studies within them is shown in Table 1 .

a Borderline AI approaches in the AI domains are identified with (x) .

b Italicized entries are AI quality standards mentioned only in the original studies in the reviews.

c CNN: convolutional neural network.

d SVM: support vector machine.

e RF: random forest.

f DT: decision tree.

g LoR: logistic regression.

h NLP: natural language processing.

i Phase 0: preparation before model development; phase I: AI model development; phase II: assessment of AI performance and reliability; phase III: clinical testing of AI; and phase IV: implementing and governing AI.

j AB: adaptive boosting or adaboost.

k ARMED: attribute reduction with multi-objective decomposition ensemble optimizer.

l BE: boost ensembling.

m BNB: Bernoulli naïve Bayes.

n PROBAST: Prediction Model Risk of Bias Assessment Tool.

o TRIPOD: Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis.

p FDA-SaMD: Food and Drug Administration–Software as a Medical Device.

q STROBE: Strengthening the Reporting of Observational Studies in Epidemiology.

r ICU: intensive care unit.

s ANN-ELM: artificial neural network extreme learning machine.

t ELM: ensemble machine learning.

u LSTM: long short-term memory.

v ESICULA: super intensive care unit learner algorithm.

w CHARMS: Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modeling Studies.

x SFCN: sparse fully convolutional network.

y NOS: Newcastle-Ottawa scale.

z ANN: artificial neural network.

aa EN: elastic net.

ab GAM: generalized additive model.

ac CASP: Critical Appraisal Skills Programme.

ad mHealth: mobile health.

ae DL: deep learning.

af FL: federated learning.

ag ML: machine learning.

ah SAR: socially assistive robot.

ai CDSS: clinical decision support system.

aj COREQ: Consolidated Criteria for Reporting Qualitative Research.

ak ISO: International Organization for Standardization.

al EU-SPARC: Scholarly Publishing and Academic Resources Coalition Europe.

am AMS: Associated Medical Services.

an BICMM: Bayesian independent component mixture model.

ao BNC: Bayesian network classifier.

ap C4.5: a named algorithm for creating decision trees.

aq CPH: Cox proportional hazard regression.

ar IEC: International Electrotechnical Commission.

as NIST: National Institute of Standards and Technology.

at OECD-AI: Organisation for Economic Co-operation and Development–artificial intelligence.

au AUC: area under the curve.

av BCP-NN: Bayesian classifier based on propagation neural network.

aw BCPNN: Bayesian confidence propagation neural network.

ax BNM: Bayesian network model.

ay TRIPOD-ML: Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis–Machine Learning.

az FAERS: Food and Drug Administration Adverse Event Reporting System.

ba MedDRA: Medical Dictionary for Regulatory Activities.

bb MADE1.0: Medical Artificial Intelligence Data Set for Electronic Health Records 1.0.

bc ANFIS: adaptive neuro fuzzy inference system.

bd EML: ensemble machine learning.

be cTAKES: clinical Text Analysis and Knowledge Extraction System.

bf CUI: concept unique identifier.

bg KM: k-means clustering.

bh UMLS: Unified Medical Language System.

bi 3DQI: 3D quantitative imaging.

bj ACNN: attention-based convolutional neural network.

bk LASSO: least absolute shrinkage and selection operator.

bl MCRM: multivariable Cox regression model.

bm MLR: multivariate linear regression.

bn CNN-TF: convolutional neural network using Tensorflow.

bo IRRCN: inception residual recurrent convolutional neural network.

bp IoT: internet of things.

bq NVHDOL: notal vision home optical-based deep learning.

br HIPAA: Health Insurance Portability and Accountability Act.

bs BC: Bayesian classifier.

bt EM: ensemble method.

bu PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses.

bv RCT: randomized controlled trial.

bw ROBINS-I: Risk of Bias in Non-Randomised Studies of Interventions.

bx DSP: deep supervised learning.

by NN: neural network.

bz SPIRIT: Standard Protocol Items: Recommendations for Interventional Trials.

ca ABS: agent based simulation.

cb LiR: linear regression.

cc TOPSIS: technique for order of preference by similarity to ideal solution.

cd ABC: artificial bee colony.

ce DCNN: deep convolutional neural network.

cf AL: abductive learning.

cg AR: automated reasoning.

ch BN: Bayesian network.

ci COBWEB: a conceptual clustering algorithm.

cj CH: computer heuristic.

ck AR-HMM: auto-regressive hidden Markov model.

cl MLoR: multivariate logistic regression.

cm ITS: intelligent tutoring system.

cn AMA: American Medical Association.

co APS: automated planning and scheduling.

cp ES: expert system.

cq SWE: software engineering.

cr OGC: Open Geospatial Consortium standard.

cs SOS: Sensor Observation Service.

ct BiGAN: bidirectional generative adversarial network.

cu ADA-NN: adaptive dragonfly algorithms with neural network.

cv F-CNN: fully convolutional neural network.

cw FFBP-ANN: feed-forward backpropagation artificial neural network.

cx AFM: adaptive finite state machine.

cy ATC: anatomical therapeutic chemical.

cz AFC: active force control.

da FDA: Food and Drug Administration.

db MMAT: Mixed Methods Appraisal Tool.

dc STARD: Standards for Reporting of Diagnostic Accuracy Study.

dd VR: virtual reality.

de EU: European Union.

df EQUATOR: Enhancing the Quality and Transparency of Health Research.

dg WHO-WEF: World Health Organization World Economic Forum.

dh CCC: concordance correlation coefficient.

di IEEE: Institute of Electrical and Electronics Engineers.

There were also other AI quality standards not mentioned in the reviews or their unique studies. They included guidelines such as the do no harm road map, Factor Analysis of Information Risk, HIPAA, and the FDA regulatory framework mentioned by van de Sande et al [ 23 ]; AI clinical study reporting guidelines such as Clinical Artificial Intelligence Modeling and Minimum Information About Clinical Artificial Intelligence Modeling mentioned by Shelmerdine et al [ 15 ]; and the international technical AI standards such as ISO and International Electrotechnical Commission 22989, 23053, 23894, 24027, 24028, 24029, and 24030 mentioned by Wenzel and Wiegand [ 26 ].

With these additional findings, we updated the original table of AI standards in the study by van de Sande et al [ 23 ] showing crucial steps and key documents by life cycle phase ( Table 2 ).

a Italicized references are original studies cited in the reviews, and references denoted with the footnote t are those cited in our paper but not present in any of the reviews.

b AI: artificial intelligence.

c FDA: Food and Drug Administration.

d ECLAIR: Evaluate Commercial AI Solutions in Radiology.

e FHIR: Fast Healthcare Interoperability Resources.

f FAIR: Findability, Accessibility, Interoperability, and Reusability.

g PROBAST: Prediction Model Risk of Bias Assessment Tool.

h HIPAA: Health Insurance Portability and Accountability Act.

i OOTA: Office of The Assistant Secretary.

j GDPR: General Data Protection Regulation.

k EU: European Union.

l WMA: World Medical Association.

m WEF: World Economic Forum.

n SORMAS: Surveillance, Outbreak Response Management and Analysis System.

o WHO: World Health Organization.

p ML: machine learning.

q TRIPOD: Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis.

r TRIPOD-ML: Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis—Machine Learning.

s CLAIM: Checklist for Artificial Intelligence in Medical Imaging.

t References denoted with the footnote t are those cited in our paper but not present in any of the reviews.

u CHARMS: Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modeling Studies.

v PRISMA-DTA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses of Diagnostic Test Accuracy.

w MI-CLAIM: Minimum Information About Clinical Artificial Intelligence Modeling.

x MINIMAR: Minimum Information for Medical AI Reporting.

y NOS: Newcastle-Ottawa Scale.

z LOE: level of evidence.

aa MMAT: Mixed Methods Appraisal Tool.

ab CASP: Critical Appraisal Skills Programme.

ac STARD: Standards for Reporting of Diagnostic Accuracy Studies.

ad COREQ: Consolidated Criteria for Reporting Qualitative Research.

ae MADE1.0: Model Agnostic Diagnostic Engine 1.0.

af DECIDE-AI: Developmental and Exploratory Clinical Investigations of Decision-Support Systems Driven by Artificial Intelligence.

ag SPIRIT-AI: Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence.

ah CONSORT-AI: Consolidated Standards of Reporting Trials–Artificial Intelligence.

ai RoB 2: Risk of Bias 2.

aj ROBINS-I: Risk of Bias in Non-Randomised Studies of Interventions.

ak RCT: randomized controlled trial.

al STROBE: Strengthening the Reporting of Observational Studies in Epidemiology.

am AI-ML: artificial intelligence–machine learning.

an TAM: Technology Acceptance Model.

ao SaMD: Software as a Medical Device.

ap IMDRF: International Medical Device Regulators Forum.

aq EQUATOR: Enhancing the Quality and Transparency of Health Research.

ar NIST: National Institute of Standards and Technology.

as OECD: Organisation for Economic Co-operation and Development.

at AMA: American Medical Association.

au CCC: Computing Community Consortium.

av ISO: International Organization for Standardization.

aw IEEE: Institute of Electrical and Electronics Engineers.

ax OGC: Open Geospatial Consortium.

ay SWE: Sensor Web Enablement.

az SOS: Sensor Observation Service.

ba IEC: International Electrotechnical Commission.

bb FAERS: Food and Drug Administration Adverse Event Reporting System.

bc MedDRA: Medical Dictionary for Regulatory Activities.

bd UMLS: Unified Medical Language System.

be R&D: research and development.

bf SPARC: Scholarly Publishing and Academic Resources Coalition.

bg TC: technical committee.

Quality Standard–Related Issues

We extracted a set of AI quality standard–related issues from the 33 reviews and assigned themes based on keywords used in the reviews ( Multimedia Appendix 8 [ 36 - 68 ]). In total, we identified 23 issues; the most frequently mentioned were clinical utility and economic benefits (n=10); ethics (n=10); benchmarks for data, model, and performance (n=9); privacy, security, data protection, and access (n=8); and federated learning and integration (n=8). Table 3 shows the quality standard issues by theme from the 33 reviews. To provide a frame for conceptualizing the quality-related issues, we performed a high-level mapping of the issues to the AI requirements proposed by the NAM [ 8 ] and EUC [ 20 ]. Two of the authors performed the mapping, the remaining authors validated the results, and the final mapping reflects consensus across all authors ( Table 4 ).

a AI: artificial intelligence.

b SDOH: social determinants of health.

a B5-1: key considerations in model development; T6-2: key considerations for institutional infrastructure and governance; and T6-3: key artificial intelligence tool implementation concepts, considerations, and tasks.

b 1—human agency and oversight; 2—technical robustness and safety; 3—privacy and data governance; 4—transparency; 5—diversity, nondiscrimination, and fairness; 6—societal and environmental well-being; and 7—accountability.

c N/A: not applicable.

d Themes not addressed.

e SDOH: social determinants of health.

We found that all 23 quality standard issues were covered by the AI frameworks of the NAM and EUC. Both frameworks provide a detailed set of guidelines and questions to be considered at different life cycle stages of health care AI studies. While the mapping of the AI issues to the NAM and EUC frameworks was consistent, the two frameworks differ in emphasis. The NAM focuses on key aspects of AI model development, infrastructure and governance, and implementation tasks. The EUC emphasizes achieving trustworthiness by addressing all 7 interconnected requirements: accountability; human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, nondiscrimination, and fairness; and societal and environmental well-being. Because the quality standard issues were based on our analysis of the review articles, our mapping was at times more granular than the issues in the NAM and EUC frameworks. Nonetheless, our results showed that the 2 frameworks provide sufficient terminology for quality standard–related issues. By embracing these guidelines, practitioners can enhance the buy-in and adoption of AI interventions in the health care system.
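The kind of issue-to-requirement mapping described above can be sketched as a simple lookup. The issue names below come from our extracted themes, but the requirement assignments and the function name are illustrative assumptions, not the paper's full mapping in Table 4.

```python
# Sketch of mapping extracted quality standard issues to the 7 EUC
# trustworthy-AI requirements (numbered as in the Table 4 footnote).
# The assignments below are illustrative examples only.
EUC_REQUIREMENTS = {
    1: "human agency and oversight",
    2: "technical robustness and safety",
    3: "privacy and data governance",
    4: "transparency",
    5: "diversity, nondiscrimination, and fairness",
    6: "societal and environmental well-being",
    7: "accountability",
}

# Hypothetical issue-to-requirement assignments for illustration.
ISSUE_TO_EUC = {
    "ethics": [1, 5],
    "privacy, security, data protection, and access": [3],
    "benchmarks for data, model, and performance": [2, 4],
    "clinical utility and economic benefits": [6],
}

def requirements_for(issue: str) -> list[str]:
    """Resolve an extracted issue to the named EUC requirements it maps to."""
    return [EUC_REQUIREMENTS[n] for n in ISSUE_TO_EUC.get(issue, [])]
```

A consensus process like the one we used would then review and adjust each assignment rather than trust any single mapping pass.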

Principal Findings

Overall, we found that, despite the growing number of health care AI quality standards in the literature, they are seldom applied in practice, as shown in a sample of recently published systematic reviews of health care AI studies. Of the reviews that mentioned AI quality standards, most used them to ensure the methodological and reporting quality of the AI studies involved. At the same time, the reviews identified many AI quality standard–related issues, including broader ones such as ethics, regulations, transparency, interoperability, safety, and governance. Examples of broader standards mentioned in a handful of reviews or original studies are ISO-12207, the Unified Medical Language System, HIPAA, the FDA Software as a Medical Device framework, the World Health Organization AI governance guidance, and the American Medical Association augmented intelligence recommendations. These findings reflect the evolving nature of health care AI, which has not yet reached maturity or been widely adopted. Appropriate AI quality standards are needed to demonstrate the transparency, robustness, and benefits of these AI approaches in different AI domains and health topics while protecting the privacy, safety, and rights of individuals and society from the potential unintended consequences of such innovations.

Another contribution of our study was a conceptual reframing toward a systems-based perspective to harmonize health care AI. We did not look at AI studies solely as individual entities but rather as part of a bigger system that includes clinical, organizational, and societal aspects. Our findings complement those of recent publications, such as an FDA paper advocating the need to help people understand the broader system of AI in health care, including across different clinical settings [ 72 ]. Moving forward, we advocate for AI research that examines how AI approaches mature over time. AI approaches evolve through different phases of maturity as they move from development to validation to implementation, and each phase has different requirements [ 23 ] that must be assessed as part of evaluating AI approaches across domains as the number of health care applications rapidly increases [ 73 ]. However, comparing AI life cycle maturity across studies was challenging because the reviews used a variety of life cycle terms. To address this issue, we mapped the life cycle terms from the original studies onto the system life cycle phases of van de Sande et al [ 23 ] as a common terminology for AI life cycle stages. A significant finding from this mapping was that most AI studies in our selected reviews were still at early stages of maturity (ie, model preparation, development, or validation), with very few studies progressing to later phases such as clinical testing and implementation. If AI research in health systems is to evolve, we need to move past single-case studies with external data validation to studies that achieve higher levels of life cycle maturity, such as clinical testing and implementation across a variety of routine health care settings (eg, hospitals, clinics, patient homes, and other community settings).
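The term normalization described above amounts to mapping each study's own life cycle vocabulary onto a common set of phases. The sketch below assumes a small synonym vocabulary of our own invention; the actual mapping used in the review is in Multimedia Appendix 6.

```python
# Normalize heterogeneous life cycle terms from original studies onto the
# common system life cycle phases of van de Sande et al [23].
# The synonym list is an assumed example, not the review's full vocabulary.
PHASES = {"preparation", "development", "validation",
          "clinical testing", "implementation"}

SYNONYMS = {
    "model building": "development",
    "training": "development",
    "external validation": "validation",
    "pilot deployment": "clinical testing",
    "routine use": "implementation",
}

def normalize_phase(term: str) -> str:
    """Map a study's life cycle term to a common phase, or flag it."""
    term = term.strip().lower()
    if term in PHASES:
        return term
    return SYNONYMS.get(term, "unmapped")
```

Terms that fall through to "unmapped" would be resolved manually, which is where most of the harmonization effort actually lies.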

Our findings also highlighted the many AI approaches and quality standards used across domains in health care AI studies. To better understand their relationships, our conceptual organizing scheme for harmonized health care characterizes AI studies according to AI domains, approaches, health topics, life cycle phases, and quality standards. The health care AI landscape is complex: the Euler diagram shows multiple AI approaches in one or more AI domains for a given health topic. These domains can overlap, and the AI approaches can be driven by ML, DL, or other types (eg, decision trees and robotics). This complexity is expected to increase as the number of AI approaches and the range of applications across health topics and settings grow over time. For meaningful comparison, we need a harmonized scheme such as the one described in this paper to make sense of the multitude of AI terminology for the types of approaches reported in the health care AI literature. The systems-based perspective in this review provides a means of harmonizing AI life cycles and incorporating quality standards through different maturity stages, which could help advance health care AI research by scaling up to clinical validation and implementation in routine practice. Furthermore, if we are to reach the later stages of AI maturity in health care (eg, clinical validation and implementation), we need to move toward explainable AI approaches whose applications are based on clinical models [ 74 ].
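A minimal sketch of this organizing scheme as a record type: each study is characterized by its health topic, AI domains (which may overlap, as in the Euler diagram), approaches, life cycle phase, and applied quality standards. The class name and example values are our own illustration, not drawn from a specific included study.

```python
from dataclasses import dataclass, field

@dataclass
class AIStudy:
    """One study characterized by the harmonized organizing scheme."""
    health_topic: str
    domains: set[str]                  # eg, {"NLP", "ML"} -- may overlap
    approaches: list[str]              # eg, ["logistic regression"]
    life_cycle_phase: str              # eg, "development", "implementation"
    quality_standards: list[str] = field(default_factory=list)

# Hypothetical example of a study record under this scheme.
study = AIStudy(
    health_topic="stroke risk prediction",
    domains={"NLP", "ML"},
    approaches=["logistic regression"],
    life_cycle_phase="validation",
    quality_standards=["TRIPOD"],
)
```

Structuring studies this way makes the comparisons we advocate (eg, performance by life cycle phase) a matter of simple grouping rather than ad hoc re-reading of each paper.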

Proposed Guidance

To improve the quality of future health care AI studies, we urge AI practitioners and researchers to draw on the published health care AI quality standard literature, such as the standards identified in this review. The quality standards considered should cover trustworthiness as well as methodological, reporting, and technical aspects. Examples include the NAM and EUC AI frameworks, which address trustworthiness, and the EQUATOR network, with its catalog of methodological and reporting guidelines identified in this review. Also relevant are the Minimum Information for Medical AI Reporting guidelines and technical ISO standards (eg, robotics) that are not in the EQUATOR catalog. The AI ethics, approaches, life cycle stages, and performance measures used in AI studies should be standardized to facilitate their meaningful comparison and aggregation. The technical standards should address key design features such as data, interoperability, and robotics. Given the complexities of the different AI approaches involved, rather than focusing on the underlying model or algorithm design, one should compare their actual performance at comparable life cycle stages (eg, degree of accuracy in model development or assessment vs outcome improvement in implementation). A summary list of the AI quality standards described in this paper is provided in Multimedia Appendix 9 for those wishing to apply them in future studies.

Implications

Our review has practice, policy, and research implications. For practice, better application of health care AI quality standards could help AI practitioners and researchers become more confident in the rigor and transparency of their health care AI studies. Adherence to standards may make AI approaches less of a black box and reduce unintended consequences such as systemic bias or threats to patient safety, and AI standards may help health care providers better understand, trust, and apply study findings in relevant clinical settings. For policy, these standards can provide the necessary guidance to address the broader impacts of health care AI, such as data governance, privacy, patient safety, and ethics. For research, AI quality standards can help advance the field by improving the rigor, reproducibility, and transparency of the planning, design, conduct, reporting, and appraisal of health care AI studies. Standardization would also allow for the meaningful comparison and aggregation of different health care AI studies to expand the evidence base in terms of their performance impacts, such as cost-effectiveness and clinical outcomes.

Limitations

Despite our best efforts, this umbrella review has limitations. First, we searched only for peer-reviewed English articles with "health" and "AI" as the keywords in MEDLINE and Google Scholar covering a 36-month period; we may have missed relevant or important reviews that did not meet our inclusion criteria. Second, some of the AI quality standards were published only in the last few years, at approximately the same time as the AI reviews were conducted; review and study authors may therefore have been unaware of these standards or of the need to apply them. Third, the AI standard landscape is still evolving; thus, there are likely standards that we missed in this review (eg, Digital Imaging and Communications in Medicine in pattern recognition with convolutional neural networks [ 75 ]). Fourth, the broader socioethical guidelines are still in the early stages of being refined, operationalized, and adopted; they may not yet be in a form that can be applied as easily as the more established methodological and reporting standards with explicit checklists and criteria. Fifth, our literature review did not include any literature reviews on LLMs [ 76 ], although we know such reviews have been published in 2023 and beyond. Nevertheless, our categorization of NLP could coincide with NLP and DL in our Euler diagram; furthermore, LLMs could enter health care via approved chatbot applications at an early life cycle phase, for example, by first prototyping the chatbot as clinical decision support using decision trees [ 77 ] before advancing in the mature phase toward a more robust LLM-based solution. Finally, only one author screened citation titles and abstracts (although 2 authors were later involved in the full-text review of all articles that were screened in), so we may have erroneously excluded an article on the basis of its title and abstract.
Despite these limitations, this umbrella review provided a snapshot of the current state of knowledge and gaps that exist with respect to the use of and need for AI quality standards in health care AI studies.

Conclusions

Despite the growing number of AI standards to assess the quality of health care AI studies, they are seldom applied in practice. With the recent unveiling of broader ethical guidelines such as those of the NAM and EUC, more transparency and guidance in health care AI use are needed. The key contribution of this review was the harmonization of different AI quality standards that could help practitioners, developers, and users understand the relationships among AI domains, approaches, life cycles, and standards. Specifically, we advocate for common terminology on AI life cycles to enable comparison of AI maturity across stages and settings and ensure that AI research scales up to clinical validation and implementation.

Acknowledgments

CK acknowledges funding support from a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (RGPIN/04884-2019). The authors affirm that no generative artificial intelligence tools were used in the writing of this manuscript.

Authors' Contributions

CK contributed to conceptualization (equal), methodology (equal), data curation (equal), formal analysis (equal), investigation (equal), and writing—original draft (lead). DC contributed to conceptualization (equal), methodology (equal), data curation (equal), formal analysis (equal), investigation (equal), and visualization (equal). SM contributed to conceptualization (equal), methodology (equal), data curation (equal), formal analysis (equal), investigation (equal), and visualization (equal). MM contributed to conceptualization (equal), methodology (equal), data curation (equal), formal analysis (equal), and investigation (equal). FL contributed to conceptualization (equal), methodology (lead), data curation (lead), formal analysis (lead), investigation (equal), writing—original draft (equal), visualization (equal), project administration (lead), and supervision (lead).

Conflicts of Interest

None declared.

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.

PubMed search strings.

Characteristics of the included reviews.

List of excluded reviews and reasons.

Quality of the included reviews using Joanna Briggs Institute scores.

Health care artificial intelligence reviews by life cycle stage.

Quality standards found in 10% of unique studies in the selected reviews.

Quality standard–related issues mentioned in the artificial intelligence reviews.

Summary list of artificial intelligence quality standards.

  • Saleh L, Mcheick H, Ajami H, Mili H, Dargham J. Comparison of machine learning algorithms to increase prediction accuracy of COPD domain. In: Proceedings of the 15th International Conference on Enhanced Quality of Life and Smart Living. 2017. Presented at: ICOST '17; August 29-31, 2017:247-254; Paris, France. URL: https://doi.org/10.1007/978-3-319-66188-9_22 [ CrossRef ]
  • Gerke S, Minssen T, Cohen IG. Ethical and legal challenges of artificial intelligence-driven healthcare. Artif Intell Healthc. 2020:295-336. [ FREE Full text ] [ CrossRef ]
  • Čartolovni A, Tomičić A, Lazić Mosler E. Ethical, legal, and social considerations of AI-based medical decision-support tools: a scoping review. Int J Med Inform. May 2022;161:104738. [ CrossRef ] [ Medline ]
  • Das K, Cockerell CJ, Patil A, Pietkiewicz P, Giulini M, Grabbe S, et al. Machine learning and its application in skin cancer. Int J Environ Res Public Health. Dec 20, 2021;18(24):13409. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Elkin P, Mullin S, Mardekian J, Crowner C, Sakilay S, Sinha S, et al. Using artificial intelligence with natural language processing to combine electronic health record's structured and free text data to identify nonvalvular atrial fibrillation to decrease strokes and death: evaluation and case-control study. J Med Internet Res. Nov 09, 2021;23(11):e28946. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kawamoto A, Takenaka K, Okamoto R, Watanabe M, Ohtsuka K. Systematic review of artificial intelligence-based image diagnosis for inflammatory bowel disease. Dig Endosc. Nov 2022;34(7):1311-1319. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Anderson C, Bekele Z, Qiu Y, Tschannen D, Dinov ID. Modeling and prediction of pressure injury in hospitalized patients using artificial intelligence. BMC Med Inform Decis Mak. Aug 30, 2021;21(1):253. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Matheny M, Israni ST, Ahmed M, Whicher D. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Washington, DC. National Academy of Medicine; 2019.
  • Park Y, Jackson GP, Foreman MA, Gruen D, Hu J, Das AK. Evaluating artificial intelligence in medicine: phases of clinical research. JAMIA Open. Oct 2020;3(3):326-331. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yang C, Kors JA, Ioannou S, John LH, Markus AF, Rekkas A, et al. Trends in the conduct and reporting of clinical prediction model development and validation: a systematic review. J Am Med Inform Assoc. Apr 13, 2022;29(5):983-989. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Van Calster B, Wynants L, Timmerman D, Steyerberg EW, Collins GS. Predictive analytics in health care: how can we know it works? J Am Med Inform Assoc. Dec 01, 2019;26(12):1651-1654. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. Oct 29, 2019;17(1):195. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yin J, Ngiam KY, Teo HH. Role of artificial intelligence applications in real-life clinical practice: systematic review. J Med Internet Res. Apr 22, 2021;23(4):e25759. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Coiera E, Ammenwerth E, Georgiou A, Magrabi F. Does health informatics have a replication crisis? J Am Med Inform Assoc. Aug 01, 2018;25(8):963-968. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Shelmerdine SC, Arthurs OJ, Denniston A, Sebire NJ. Review of study reporting guidelines for clinical studies using artificial intelligence in healthcare. BMJ Health Care Inform. Aug 23, 2021;28(1):e100385. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Liu X, Rivera SC, Moher D, Calvert MJ, Denniston AK, SPIRIT-AI and CONSORT-AI Working Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. BMJ. Sep 09, 2020;370:m3164. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rivera SC, Liu X, Chan AW, Denniston AK, Calvert MJ, SPIRIT-AI and CONSORT-AI Working Group. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. BMJ. Sep 09, 2020;370:m3210. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hernandez-Boussard T, Bozkurt S, Ioannidis JP, Shah NH. MINIMAR (MINimum information for medical AI reporting): developing reporting standards for artificial intelligence in health care. J Am Med Inform Assoc. Dec 09, 2020;27(12):2011-2015. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Norgeot B, Quer G, Beaulieu-Jones BK, Torkamani A, Dias R, Gianfrancesco M, et al. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Nat Med. Sep 2020;26(9):1320-1324. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ethics and governance of artificial intelligence for health: WHO guidance. Licence: CC BY-NC-SA 3.0 IGO. World Health Organization. URL: https://apps.who.int/iris/bitstream/handle/10665/341996/9789240029200-eng.pdf [accessed 2024-04-05]
  • Solanki P, Grundy J, Hussain W. Operationalising ethics in artificial intelligence for healthcare: a framework for AI developers. AI Ethics. Jul 19, 2022;3(1):223-240. [ FREE Full text ] [ CrossRef ]
  • Joshi I, Morley J. Artificial intelligence: how to get it right: putting policy into practice for safe data-driven innovation in health and care. National Health Service. 2019. URL: https://transform.england.nhs.uk/media/documents/NHSX_AI_report.pdf [accessed 2024-04-05]
  • van de Sande D, Van Genderen ME, Smit JM, Huiskens J, Visser JJ, Veen RE, et al. Developing, implementing and governing artificial intelligence in medicine: a step-by-step approach to prevent an artificial intelligence winter. BMJ Health Care Inform. Feb 19, 2022;29(1):e100495. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Andaur Navarro CL, Damen JA, Takada T, Nijman SW, Dhiman P, Ma J, et al. Completeness of reporting of clinical prediction models developed using supervised machine learning: a systematic review. BMC Med Res Methodol. Jan 13, 2022;22(1):12. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yusuf M, Atal I, Li J, Smith P, Ravaud P, Fergie M, et al. Reporting quality of studies using machine learning models for medical diagnosis: a systematic review. BMJ Open. Mar 23, 2020;10(3):e034568. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wenzel MA, Wiegand T. Towards international standards for the evaluation of artificial intelligence for health. In: Proceedings of the 2019 ITU Kaleidoscope: ICT for Health: Networks, Standards and Innovation. 2019. Presented at: ITU K '19; December 4-6, 2019:1-10; Atlanta, GA. URL: https://ieeexplore.ieee.org/abstract/document/8996131 [ CrossRef ]
  • Varghese J. Artificial intelligence in medicine: chances and challenges for wide clinical adoption. Visc Med. Dec 2020;36(6):443-449. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nyarairo M, Emami E, Abbasgholizadeh S. Integrating equity, diversity and inclusion throughout the lifecycle of artificial intelligence in health. In: Proceedings of the 13th Augmented Human International Conference. 2022. Presented at: AH '22; May 26-27, 2022:1-4; Winnipeg, MB. URL: https://dl.acm.org/doi/abs/10.1145/3532530.3539565 [ CrossRef ]
  • MacEntee MI. A typology of systematic reviews for synthesising evidence on health care. Gerodontology. Dec 06, 2019;36(4):303-312. [ CrossRef ] [ Medline ]
  • Aromataris E, Fernandez R, Godfrey C, Holly C, Khalil H, Tungpunkom P. Umbrella reviews. In: Aromataris E, Lockwood C, Porritt K, Pilla B, Jordan Z, editors. JBI Manual for Evidence Synthesis. Adelaide, South Australia. Joanna Briggs Institute; 2020.
  • Tricco AC, Langlois EV, Straus SE. Rapid reviews to strengthen health policy and systems: a practical guide. Licence CC BY-NC-SA 3.0 IGO. World Health Organization. 2017. URL: https://apps.who.int/iris/handle/10665/258698 [accessed 2024-04-05]
  • Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. Mar 29, 2021;372:n71. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Russell S, Norvig P. Artificial Intelligence: A Modern Approach. 4th edition. London, UK. Pearson Education; 2021.
  • Ethics guidelines for trustworthy AI. European Commission, Directorate-General for Communications Networks, Content and Technology. URL: https://data.europa.eu/doi/10.2759/346720 [accessed 2024-04-05]
  • Lloyd N, Khuman AS. AI in healthcare: malignant or benign? In: Chen T, Carter J, Mahmud M, Khuman AS, editors. Artificial Intelligence in Healthcare: Recent Applications and Developments. Singapore, Singapore. Springer; 2022:1-46.
  • Abd-Alrazaq A, Alajlani M, Alhuwail D, Schneider J, Al-Kuwari S, Shah Z, et al. Artificial intelligence in the fight against COVID-19: scoping review. J Med Internet Res. Dec 15, 2020;22(12):e20756. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Adamidi ES, Mitsis K, Nikita KS. Artificial intelligence in clinical care amidst COVID-19 pandemic: a systematic review. Comput Struct Biotechnol J. 2021;19:2833-2850. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Barboi C, Tzavelis A, Muhammad LN. Comparison of severity of illness scores and artificial intelligence models that are predictive of intensive care unit mortality: meta-analysis and review of the literature. JMIR Med Inform. May 31, 2022;10(5):e35293. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Battineni G, Hossain MA, Chintalapudi N, Amenta F. A survey on the role of artificial intelligence in biobanking studies: a systematic review. Diagnostics (Basel). May 09, 2022;12(5):1179. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bertini A, Salas R, Chabert S, Sobrevia L, Pardo F. Using machine learning to predict complications in pregnancy: a systematic review. Front Bioeng Biotechnol. Jan 19, 2021;9:780389. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bhatt P, Liu J, Gong Y, Wang J, Guo Y. Emerging artificial intelligence-empowered mHealth: scoping review. JMIR Mhealth Uhealth. Jun 09, 2022;10(6):e35053. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Buchanan C, Howitt ML, Wilson R, Booth RG, Risling T, Bamford M. Predicted influences of artificial intelligence on the domains of nursing: scoping review. JMIR Nurs. Dec 17, 2020;3(1):e23939. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Buchanan C, Howitt ML, Wilson R, Booth RG, Risling T, Bamford M. Predicted influences of artificial intelligence on nursing education: scoping review. JMIR Nurs. Jan 28, 2021;4(1):e23933. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chew HS, Achananuparp P. Perceptions and needs of artificial intelligence in health care to increase adoption: scoping review. J Med Internet Res. Jan 14, 2022;24(1):e32939. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Choudhury A, Renjilian E, Asan O. Use of machine learning in geriatric clinical care for chronic diseases: a systematic literature review. JAMIA Open. Oct 2020;3(3):459-471. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Choudhury A, Asan O. Role of artificial intelligence in patient safety outcomes: systematic literature review. JMIR Med Inform. Jul 24, 2020;8(7):e18599. [ FREE Full text ] [ CrossRef ] [ Medline ]

