Peer Review

  • The integrity of science depends on effective peer review. A published paper reflects not only on the authors of that paper, but also on the community of scientists. Without the judgment of knowledgeable peers as a standard for the quality of science, it would not be possible to differentiate what is and is not credible.
  • Effective peer review depends on competent and responsible reviewers. The privilege of being part of the research community implies a responsibility to share in the task of reviewing the work of peers.

For much of the last century, peer review has been the principal mechanism by which the quality of research is judged. In general, the most respected research findings are those that are known to have faced peer review. Most funding decisions in science are based on peer review. Academic advancement is generally based on success in publishing peer-reviewed research and on winning funding awarded through peer review, as well as on direct peer review of the candidate's academic career. In short, research and researchers are judged primarily by peers.

The peer-review process is based on the notion that, because much of academic inquiry is relatively specialized, peers with similar expertise are in the best position to judge one another's work. This mechanism was largely designed to evaluate the relative quality of research. However, with appropriate feedback, it can also be a valuable tool to improve a manuscript, a grant application, or the focus of an academic career. Despite these advantages, the process of peer review is hampered by both perceived and real limitations.

Critics of peer review worry that reviewers may be biased in favor of well-known researchers, or researchers at prestigious institutions, that reviewers may review the work of their competitors unfairly, that reviewers may not be qualified to provide an authoritative review, and even that reviewers will take advantage of ideas in unpublished manuscripts and grant proposals that they review. Many attempts have been made to examine these assumptions about the peer review process. Most have found such problems to be, at worst, infrequent (e.g., Abby et al., 1994; Garfunkel et al., 1994; Godlee et al., 1998; Justice et al., 1998; van Rooyen et al., 1998; Ward and Donnelly, 1998). Nonetheless, problems do occur.

Because the process of peer review is highly subjective, it is possible that some people will abuse their privileged position and act based on unconscious bias. For example, reviewers may be less likely to criticize work that is consistent with their own perceptions (Ernst and Resch, 1994) or to award a fellowship to a woman rather than a man (Wennerås and Wold, 1997). It is also important to keep in mind that peer review does not do well either at detecting innovative research or at filtering out fraudulent, plagiarized, or redundant publications (reviewed by Godlee, 2000).

Despite its flaws, peer review does work to improve the quality of research. Considering the possible failings of peer review, the potential for bias and abuse, how can the process be managed so as to minimize problems while maintaining the advantages?

Most organizations reviewing research have specific guidelines regarding confidentiality and conflicts of interest. In addition, many organizations and institutions have guidelines dealing explicitly with the responsibilities of peer reviewers, such as those of the American Chemical Society (2006), the Society for Neuroscience (1998), and the Council of Biology Editors (CBE Peer Review Retreat Consensus Group, 1995). And, though currently suspended, there had been a federal requirement that made discussion of peer review part of instruction in the responsible conduct of research (Office of Research Integrity, 2000).

Peer review is governed by federal regulations in two respects. First, federal misconduct regulations can be invoked if a reviewer seriously abuses the review process, and second, peer review for the grant process prohibits review by individuals with conflicts of interest.

Despite these regulations, much of peer review is not directly regulated. It is governed instead by guidelines and custom.

Case Studies

Discussion questions.

  • Based on your own experience, or on discussion with someone who is an experienced reviewer, which of the following are common practice? Which of the following should not be acceptable practice?
      a. The reviewer is not competent to perform a review, but does so anyway.
      b. Reviewer bias results in a negative review that is misleading or untruthful.
      c. The reviewer delays the review or provides an unfairly critical review for the purpose of personal advantage.
      d. The reviewer and his or her research group take advantage of privileged information to redirect research efforts.
      e. The reviewer shares review material with others (for the purpose of training or scientific discussion) without notifying or obtaining approval from the editor or funding agency.
  • What are the advantages and disadvantages of having a reviewer blinded to the identity of manuscript authors, a grant applicant, or a candidate for academic advancement?
  • What are the advantages and disadvantages of having manuscript authors, a grant applicant, or a candidate for academic advancement blinded to the identity of a reviewer?
  • What are the ethical responsibilities of peer reviewers?
  • List and describe federal regulations relevant to peer review.
  • Should reviewers working in the same field of research be excluded from reviewing each other's work? How can the risks of bias and the advantages of expertise be reconciled in the selection of peer reviewers?
  • What are the responsibilities of a reviewer to preserve the confidentiality of work under review? What protections, if any, help to prevent the loss of confidentiality?

Additional Considerations

The purpose of peer review is not merely to evaluate the submitted work, but also to promote better work within the scientific community. As such, peer reviewers have several essential responsibilities.

  • Provide a timely response. Reviewers should make every effort to complete a review in the time requested. If it is not possible to meet the conditions for the review, then the reviewer should promptly decline or see whether some accommodation is possible. Research reports, grant applications, and academic files submitted for review all represent a significant investment of time and effort, and frequently the documents under review contain timely results that will suffer if delayed in the review process.
  • Ensure competence. Reviewers who realize that their expertise is limited have a responsibility to make their degree of competence clear to the editor, funding agency, or academic institution asking for their expert opinion. A reviewer who does not have the requisite expertise risks approving a submission that has substantial deficiencies or rejecting one that is meritorious. Such errors waste resources and hamper the scientific enterprise.
  • Avoid bias. Reviewers' comments and conclusions should be based on a consideration of the facts, exclusive of personal or professional bias. To the extent possible, the system of review should be designed to minimize actual or perceived bias on the reviewers' part. If reviewers have any interest that might interfere with an objective review, then they should either decline the role of reviewer or declare the conflict of interest to the editor, funding agency, or academic institution and ask how best to manage it.
  • Maintain confidentiality. Material submitted for peer review is a privileged communication that should be treated in confidence. Material under review should not be shared or discussed with anyone outside the designated review process unless approved by the editor, funding agency, or academic institution. Authors, grant applicants, and candidates for academic review have a right to expect that the review process will remain confidential. Reviewers unsure about policies for enlisting the help of others should ask.
  • Avoid unfair advantage. A reviewer should not take advantage of material available through the privileged communication of peer review. One exception is that if a reviewer becomes aware, on the basis of work under review, that a line of his or her own research is likely to be unprofitable or a waste of resources, then he or she may ethically discontinue that work (American Chemical Society, 2006; Society for Neuroscience, 1998). In such cases, the circumstances should be communicated to those who requested the review. Beyond this exception, every effort should be made to avoid even the appearance of taking advantage of information obtained through the review process. Potential reviewers concerned that their participation would create a substantial conflict of interest should decline the request to review.
  • Offer constructive criticism. Reviewers' comments should acknowledge positive aspects of the material under review, assess negative aspects constructively, and indicate clearly the improvements needed. The purpose of peer review is not to demonstrate the reviewer's proficiency in identifying flaws, but to help the authors or candidates identify and resolve weaknesses in their work.
References

  • Abby M, Massey MD, Galandiuk S, Polk HC (1994): Peer review is an effective screening process to evaluate medical manuscripts. JAMA 272: 105-107.
  • American Chemical Society (2006): Ethical guidelines to publication of chemical research. ACS Publications http://pubs.acs.org/userimages/ContentEditor/1218054468605/ethics.pdf
  • CBE Peer Review Retreat Consensus Group (1995): Peer review guidelines - A working draft. CBE Views 18(5): 79-81.
  • Ernst E, Resch KL (1994): Reviewer bias: a blinded experimental study. Journal of Laboratory and Clinical Medicine 124(2): 178-82.
  • Garfunkel JM, Ulshen MH, Hamrick HJ, Lawson EE (1994): Effect of institutional prestige on reviewers' recommendations and editorial decisions. JAMA 272: 137-138.
  • Godlee F, Gale CR, Martyn CN (1998): Effect on the quality of peer review of blinding reviewers and asking them to sign their reports. JAMA 280: 237-240.
  • Godlee F (2000): The ethics of peer review. In (Jones AH, McLellan F, eds.): Ethical Issues in Biomedical Publication. Johns Hopkins University Press, Baltimore, MD, pp. 59-84.
  • Justice AC, Cho MK, Winker MA, Berlin JA, Rennie D, PEER Investigators (1998): Does masking author identity improve peer review quality? JAMA 280: 240-242.
  • Office of Research Integrity (2000): PHS Policy on Instruction in RCR. http://ori.hhs.gov/policies/RCR_Policy.shtml
  • Society for Neuroscience (1998): Responsible Conduct Regarding Scientific Communication. http://www.sfn.org/skins/main/pdf/Guidelines/ResponsibleConduct.pdf
  • van Rooyen S, Godlee F, Evans S, Smith R, Black N (1998): Effect of blinding and unmasking on the quality of peer review. JAMA 280: 234-237.
  • Wennerås C, Wold A (1997): Nepotism and sexism in peer review. Nature 387: 341-343.
  • Open access
  • Published: 18 August 2017

Improving the process of research ethics review

  • Stacey A. Page (ORCID: orcid.org/0000-0001-6494-3671) &
  • Jeffrey Nyeboer

Research Integrity and Peer Review volume 2, Article number: 14 (2017)


Research Ethics Boards, or Institutional Review Boards, protect the safety and welfare of human research participants. These bodies are responsible for providing an independent evaluation of proposed research studies, ultimately ensuring that the research does not proceed unless standards and regulations are met.

Concurrent with the growing volume of human participant research, the workload and responsibilities of Research Ethics Boards (REBs) have continued to increase. Dissatisfaction with the review process, particularly the time interval from submission to decision, is common within the research community, but there has been little systematic effort to examine REB processes that may contribute to inefficiencies. We offer a model illustrating REB workflow, stakeholders, and accountabilities.

Better understanding of the components of the research ethics review will allow performance targets to be set, problems identified, and solutions developed, ultimately improving the process.


Instances of research misconduct and abuse of research participants have established the need for research ethics oversight to protect the rights and welfare of study participants and the integrity of the research enterprise [ 1 , 2 ]. In response to such egregious events, national and international regulations have emerged that are intended to protect research participants (e.g. [ 3 , 4 , 5 ]).

Research Ethics Boards (REBs) also known as Institutional Review Boards (IRBs) and Research Ethics Committees (RECs) are charged with ensuring that research is planned and conducted in accordance with such laws and regulatory standards. In protecting the rights and welfare of participants, REBs must weigh possible harms to individuals against the plausible societal benefits of the research. They must ensure fair participant selection and, where applicable, confirm that appropriate provisions are in place for obtaining participant consent.

REBs often operate under the auspices of post-secondary institutions. Larger universities may support multiple REBs that serve different research areas, such as medical and health research and social science, psychology, and humanities research. Boards are constituted of people from a variety of backgrounds, each of whom contributes specific expertise to review and discussions. Members are appointed to the Board through established institutional practice. Nevertheless, most Board members bring a sincere interest and commitment to their roles. For university Faculty, Board membership may fulfil a service requirement that is part of their academic responsibilities.

The Canadian Tri-Council Policy Statement (TCPS2) advances a voluntary, self-governing model for REBs and institutions. The TCPS2 is a joint policy of Canada’s three federal research agencies (Canadian Institutes of Health Research, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council), and institutional and researcher adherence to the policy standards is a condition of funding. Recognizing the independence of REBs in their decision-making, institutions are required to support their functioning. Central to the agreement is that institutions conducting research must establish an REB and ensure that it has the “necessary and sufficient ongoing financial and administrative resources” to fulfil its duties (TCPS2 [ 3 ] p. 68). A similar requirement for support of IRB functioning is included in the US Common Rule (45 CFR 46.103 [ 5 ]). The operationalization of “necessary and sufficient” is subjective and likely to vary widely. To the extent that the desired outcomes (i.e. timely reviews and approvals) depend on the allocation of these resources, they too will vary.

Time and research ethics review

From the academic hallways to the literature, characterizations of REBs and the research ethics review process are seldom complimentary. While numerous criticisms have been levelled, it is the time to decision that is most consistently maligned [ 6 , 7 , 8 , 9 , 10 , 11 ].

Factors associated with lengthy review time include incomplete or poorly completed applications [ 7 , 12 , 13 ], lack of administrative support [ 14 ], inadequately trained REB members [ 15 ], REB member competing commitments, expanding oversight requirements, and the sheer volume of applications [ 16 , 17 , 18 ]. Nevertheless, objective data on the inner workings of REBs are lacking [ 6 , 19 , 20 ].

Consequences of slow review times include centres’ withdrawing from multisite trials or limiting their participation in available trials [ 21 , 22 ], loss of needed research resources [ 23 ], and recruitment challenges in studies dependent on seasonal factors [ 24 ]. Lengthy time to study approval may ultimately delay patient access to potentially effective therapies [ 8 ].

Some jurisdictions have moved to regionalize or consolidate ethics review, using a centralized ethics review of protocols conducted on several sites. This enhances review efficiency for multisite research by removing the need for repeating reviews across centres [ 9 , 25 , 26 , 27 , 28 ]. Recommendations for systemic improvement include better standardization of review practices, enhanced training for REB members, and requiring accreditation of review boards [ 9 ].

The research ethics review processes are not well understood, and no gold standard exists against which to evaluate board practices [ 19 , 20 ]. Consequently, there is little information on how REBs may systematically improve their methods and outcomes. This paper presents a model based on stakeholder responsibilities in the process of research ethics review and illustrates how each makes contributions to the time an application spends in this process. This model focusses on REBs operating under the auspices of academic institutions, typical in Canada and the USA.

Modelling the research ethics review process

The research ethics review process may appear to some like the proverbial black box. An application is submitted and considered and a decision is made:

SUBMIT > REVIEW > DECISION

In reality, the first step to understanding and improving the process is recognizing that research ethics review involves more than just the REB. Contributing to the overall efficiency—or inefficiency—of the review are other stakeholders and their roles in the development and submission of the application and the subsequent movement of the application back and forth between PIs, administrative staff, reviewers, the Board, and the Chair, until ideally the application is deemed ready for approval.

Identifying how a research ethics review progresses permits better understanding of the workflow, including the administrative and technological supports, roles, and responsibilities. The goal is to determine where challenges in the system exist so they can be remediated and efficiencies gained.

One way of understanding details of the process is to model it. We have used a modelling approach based in part on a method advanced by Ishikawa and further developed by the second author (JN) [ 29 , 30 ]. Traditionally, the Ishikawa “fishbone” or cause and effect diagram has been used to represent the components of a manufacturing enterprise and its application facilitates understanding how the elements of an operation may cause inefficiencies. This modelling provides a means of analysing process dispersion (e.g. who is accountable for what specific outcomes) and is frequently used when trying to understand time delays in undertakings.

In our model (Fig. 1), “Categories” represent key role actions that trigger a subsequent series of work activities. The “Artefacts” are the products resulting from a set of completed activities and reflect staged movement in the process. Implicit in the model is a temporal sequence and the passage of time, represented by the arrows.

Fig. 1 Basic business activity model
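To make the staged model concrete, here is a minimal Python sketch. The stage names and dates are hypothetical stand-ins for the Categories and Artefacts shown in Fig. 2; the sketch simply records a timestamp whenever an artefact is produced, so the days spent between stages, where delays accumulate, can be reported.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, Optional

# Hypothetical stage names standing in for the model's Categories/Artefacts.
STAGES = [
    "application_submitted",   # researcher produces the application
    "screening_complete",      # administrator confirms the application is complete
    "review_returned",         # reviewer or REB member returns the review
    "decision_issued",         # Board or Chair issues the decision
]

@dataclass
class Application:
    app_id: str
    events: Dict[str, datetime] = field(default_factory=dict)  # stage -> timestamp of its artefact

    def record(self, stage: str, when: datetime) -> None:
        self.events[stage] = when

    def days_between(self, start: str, end: str) -> Optional[float]:
        if start in self.events and end in self.events:
            return (self.events[end] - self.events[start]).total_seconds() / 86400
        return None

# Illustrative (invented) dates: find where this application spent most of its time.
app = Application("REB-2017-014")
app.record("application_submitted", datetime(2017, 3, 1))
app.record("screening_complete", datetime(2017, 3, 8))
app.record("review_returned", datetime(2017, 4, 12))
app.record("decision_issued", datetime(2017, 4, 20))

for start, end in zip(STAGES, STAGES[1:]):
    print(f"{start} -> {end}: {app.days_between(start, end):.0f} days")
```

In practice the same bookkeeping could live in a spreadsheet or an online submission system; the point is that each stage transition is time-stamped, so stalls and backflow become measurable rather than anecdotal.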

Applying this strategy to facilitate understanding of time delays in ethics review requires that the problem (i.e. time) be considered in the context of all stakeholders. This includes those involved in the development and submission of the application, those involved in the administrative movement of the application through the system, those involved in the substantive consideration and deliberation of the application, and those involved in the final decision-making.

The model developed (Fig. 2) was based primarily on a review of the lead author’s (SP) institution’s REB application process. The model is generally consistent with the process and practices of several other REBs with which she has had experience over the past 20 years.

Fig. 2 Research ethics activity model

What this model illustrates is that the research ethics review process is complex. There are numerous stakeholders involved, each of whom bears a portion of the responsibility for an application’s time in the system. The model illustrates a temporal sequence of events where, ideally, the movement of an application is unidirectional, left to right. Time is lost when applications stall or backflow in the process.

Stakeholders, accountabilities, and the research ethics review model

There are four main stakeholder groups in the research ethics review process: researchers/research teams, research ethics unit administrative staff, REB members, and the institution. Each plays a role in the transit of an application through the process, and how well each undertakes its responsibilities affects the time that the application takes to move through. Table 1 presents a summary of recommendations for best practices.

Researchers

The researcher initiates the process of research ethics review by developing a proposal involving human participants and submitting an application. Across standards, the principal investigator is accountable for the conduct of the study, including adherence to research ethics requirements. Such standards are readily available both from the source (e.g. Panel on Research Ethics [Canada], National Institutes of Health [USA], Food and Drug Administration [USA]) and, typically, through institutional websites. Researchers have an obligation to be familiar with the rules for human participant research. Developing a sound proposal where ethics requirements are met at the outset places the application in a good position at the time of submission. Researchers are accountable for delays in review when ethical standards are not met and the application must be returned for revision. Tracking the reasons for return permits solutions, such as targeted educational activities, to be developed.

Core issues that investigators can address in the development of their applications include an ethical recruitment strategy, a sound consent process, and application of relevant privacy standards and legislation. Most research ethics units associated with institutions maintain websites where key information and resources may be found, such as consent templates, privacy standards, “frequently asked questions,” and application submission checklists [ 31 , 32 , 33 ]. Moreover, consulting with the REB in advance of submission may help researchers to prevent potentially challenging issues [ 15 ]. Investigators who are diligent in knowing about and applying required standards will experience fewer requests for revision and fewer stalls or backtracking once their applications are submitted. Some have suggested that researchers should be required, rather than merely expected, to have an understanding of legal and ethics standards before they are even permitted to submit an application [ 19 ].

The scholarly integrity of proposed research is an essential element of ethically acceptable human participant research. Researchers must be knowledgeable about the relevant scientific literature and present proposals that are justified based on what is known and where knowledge gaps exist. Research methods must be appropriate to the question and studies adequately powered. Novice or inexperienced researchers whose protocols have not undergone formal peer review (e.g. via supervisory committees, internal peer review committees, or competitive grant reviews) should seek consultation and informal peer review prior to ethics review to ensure the scientific validity of their proposals. While it is within the purview of REBs to question methods and design, it is not their primary mandate. Using REB resources for science review is an opportunity cost that can compromise efficient ethics review.

Finally, researchers are advised to review and proof their applications prior to submission to ensure that all required components have been addressed and that the information in the application and supporting documents (e.g. consent forms, protocol) is consistent. Missing or discrepant information leads to applications being returned and therefore to time lost [ 7 ].

Administrators

Prior to submission, administrators may be the first point of contact for researchers seeking assistance with application requirements. Subsequently, they are often responsible for undertaking a preliminary, screening review of applications to make sure they are complete, with all required supporting documents and approvals in place. Once an application is complete, the administrative staff assign it to a reviewer. The reviewer may be a Board member or a subject-matter expert accountable to the Board.

Initial consultation and screening activities work best when staff have good knowledge of both institutional application requirements and ethics standards. Administrative checklists are useful tools to help ensure consistent application of standards in this preliminary application review. Poorly screened applications that reach reviewers may be delayed if the application must be returned to the administrator or the researcher for repair.

Reviewers typically send their completed reviews back to the administrators. In turn, the administrators either forward the applications to the Chair to consider (i.e. for delegated approval) or to a Board meeting agenda. In addition to ensuring that applications are complete, administrators may be accountable for monitoring how long a file is out for review. When reviews are delayed or incomplete for any reason, administrators may need to reassign the file to a different reviewer.

Administrators are therefore key players in the ethics review process, as they may be both initial resources for researchers and subsequently facilitate communication between researchers and Board members. Moreover, given past experience with both research teams and reviewers, they may be aware of areas where applicants struggle and when applications or reviews are likely to be deficient or delinquent. Actively tracking such patterns in the review process may reveal problems to which solutions can be developed. For example, applications consistently deficient in a specific area may signal the need for educational outreach and reviews that are consistently submitted late may provide impetus to recruit new Board members or reviewers.
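A toy sketch of this kind of pattern tracking is given below. The return reasons, reviewer identifiers, and the lateness threshold are invented for illustration; an REB unit would substitute categories drawn from its own records.

```python
from collections import Counter

# Invented examples of why applications were returned and which reviews came back late.
return_reasons = [
    "consent form inconsistent with protocol",
    "recruitment strategy unclear",
    "consent form inconsistent with protocol",
    "privacy standards not addressed",
    "consent form inconsistent with protocol",
]
late_reviews_by_reviewer = ["R-03", "R-11", "R-03", "R-03", "R-07"]

# Applications consistently deficient in one area may signal a need for targeted outreach.
for reason, count in Counter(return_reasons).most_common():
    print(f"{count}x returned: {reason}")

# Reviewers who are repeatedly late may signal a need to recruit additional members.
for reviewer, count in Counter(late_reviews_by_reviewer).most_common():
    if count >= 3:  # illustrative threshold
        print(f"reviewer {reviewer} has {count} late reviews")
```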

REB members

The primary responsibility for evaluating the substantive ethics issues in applications and how they are managed rests with the REB members and the Chair. The Board may approve applications, approve pending modifications, or reject them based on their compliance with standards and regulations.

As with administrators, REB members’ efficiency and review quality are enhanced by the use of standard tools, in this case standardized review templates intended to guide reviewers and Board members to address a consistent set of criteria. Where possible, matching members’ expertise to the application under review also contributes to timely, good quality reviews.

REB functioning is enhanced with ongoing member training and education, yielding consistent, efficient application of ethics principles and regulatory standards [ 15 ]. This may be undertaken in a variety of ways, including Board member retreats, regular circulation of current articles, and attending presentations and conferences. REB Chairs are accountable to ensure consistency in the decisions made by the Board (TCPS 2014, Article 6.8). This demands that Chairs thoroughly understand ethical principles and regulatory standards and that they maintain awareness of previous decisions. Much time can be spent at Board meetings covering old ground. The use of REB decision banks has been recommended as a means of systematizing a record of precedents, thus contributing to overall quality improvement [ 34 ].

Institution

Where research ethics review takes place under the auspices of an academic institution, the institution must take responsibility for adequately supporting the functioning of its Boards and for promoting a positive culture of research ethics [ 3 , 5 ]. Supporting the financial and human resource costs of participating in ongoing education (e.g. retreats, speakers, workshops, conferences) is therefore the responsibility of the institution.

Operating an REB is costly [ 35 ]. It is reasonable to assume that there is a relationship between the adequacy of resources allocated to the workload and flow and the time to an REB decision. Studies have demonstrated wide variability in times to determination [ 8 , 9 , 10 , 22 ]. However, comparisons are difficult to make because of confounding factors such as application volume, number of staff, number of REB members, application quality, application type (e.g. paper vs. electronic), and protocol complexity. Despite these variables, it appears that setting a modal target turnaround time of 6 weeks (±2 weeks) is reasonable and in line with the targets set in the European Union and the UK’s National Health Service [ 36 , 37 ]. Tracking the time spent at each step in the model may reveal where applications are typically delayed for long periods and may be indicative of areas where more resources need to be allocated or workflows redesigned.
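As a rough sketch of this kind of benchmarking, the following Python fragment aggregates invented per-step durations for a handful of applications, reports the median time spent at each step, and compares total turnaround against the 6-week (plus or minus 2 weeks) target discussed above; real figures would come from an REB's own tracking data, and the step names here are hypothetical.

```python
from statistics import median

# Invented per-application durations (days) at each step; in practice these
# would be exported from the REB's application-tracking system.
durations = {
    "screening": [5, 3, 9, 4, 6],
    "reviewer_assignment": [2, 7, 1, 3, 2],
    "review": [21, 35, 14, 28, 40],
    "board_decision": [7, 10, 6, 12, 9],
}

TARGET_DAYS = 6 * 7       # modal 6-week turnaround target
TOLERANCE_DAYS = 2 * 7    # plus or minus 2 weeks

# Median time per step shows where applications typically stall.
for step, days in durations.items():
    print(f"{step}: median {median(days):.0f} days")

# Total turnaround per application, compared against the upper bound of the target.
totals = [sum(per_app) for per_app in zip(*durations.values())]
over = sum(1 for t in totals if t > TARGET_DAYS + TOLERANCE_DAYS)
print(f"median turnaround: {median(totals):.0f} days "
      f"(target {TARGET_DAYS} +/- {TOLERANCE_DAYS} days); "
      f"{over} of {len(totals)} applications exceeded the upper bound")
```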

As institutions grow their volumes of research, workloads correspondingly increase for institutional REBs. To maintain service levels, institutions need to ensure that resources allocated to REBs match the volume and intensity of work. Benchmarking costs (primarily human resources) relative to the number of applications and time to a decision will help to inform the allocation of resources needed to maintain desired service levels.

Finally, most REB members typically volunteer their Board services to the institution. Despite their good-faith intent to serve, Board members occasionally find that researchers view them as obstacles to or adversaries in the research enterprise. Board members may believe that researchers do not value the time and effort they contribute to review, while researchers may believe the REB and its members are unreasonable, obstructive, and a “thorn in their side” [ 15 ]. Clearly, relationships can be improved. Nevertheless, improving the timeliness and efficiency of research ethics review should help to soothe fevered brows on both sides of the issue.

Upshur [ 12 ] has previously noted that the contributions to research ethics such as Board membership and application review need to be accorded the same academic prestige as serving on peer review grant panels and editorial boards and undertaking manuscript reviews. In doing so, institutions will help to facilitate a culture of respect for, and shared commitment to, research ethics review, which may only benefit the process.

The activities, roles, and responsibilities identified in the ethics review model illustrate that it is a complex activity and that “the REB” is not a single entity. Multiple stakeholders each bear a portion of the accountability for how smoothly a research ethics application moves through the process. Time is used most efficiently when forward momentum is maintained and the application advances. Delays occur when the artefact (i.e. either the application or the application review) is not advanced as the accountable stakeholders fail to discharge their responsibilities or when the artefact fails to meet a standard and it is sent back. Ensuring that all stakeholders understand and are able to operationalize their responsibilities is essential. Success depends in part on the institutional context, where standards and expectations should be well communicated, and resources like education and administrative support provided, so that capacity to execute responsibilities is assured.

Applying this model will assist in identifying activities, accountabilities, and baseline performance levels. This information will contribute to improving local practice when deficiencies are identified and solutions implemented, such as training opportunities or reduction in duplicate activities. It will also facilitate monitoring as operational improvements over baseline performance could be measured. Where activities and benchmarks are well defined and consistent, comparisons both within and across REBs can be made.

Finally, this paper focused primarily on administrative efficiency in the context of research ethics review time. However, the identified problems and their suggested solutions would contribute not only to enhanced timeliness of review but also to enhanced quality of review and therefore human participant protection.

Beecher HK. Ethics and clinical research. NEMJ. 1966;274(24):1354–60.


Kim WO. Institutional review board (IRB) and ethical issues in clinical research. Korean J Anesthesiol. 2012;62(1):3–12.

Canadian Institutes of Health Research, Natural Sciences and Engineering Council of Canada, and Social Sciences and Humanities Research Council of Canada. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans, December 2014.  http://www.ethics.gc.ca/eng/index/ . Accessed 21 Jun 2017.

World Medical Association. Declaration of Helsinki: ethical principles for medical research involving human subjects as amended by the 64th WMA General Assembly, Fortaleza, Brazil, October 2013 U.S. Department of Health. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-researchinvolving-human-subjects/ . Accessed 21 Jun 2017.

U.S. Department of Health and Human Services, HHS.gov, Office for Human Research Protections. 45 CFR 46. Code of Federal Regulations. Title 45. Public Welfare. Department of Health and Human Services. Part 46. Protection of Human Subjects. Revised January 15, 2009. Effective July 14, 2009. Subpart A. Basic HHS Policy for Protection of Human Research Subjects. 2009. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/ . Accessed 21 Jun 2017.

Abbott L, Grady C. A systematic review of the empirical literature evaluating IRBs: what we know and what we still need to learn. J Empir Res Hum Res Ethics. 2011;6:3–19.

Egan-Lee E, Freitag S, Leblanc V, Baker L, Reeves S. Twelve tips for ethical approval for research in health professions education. Med Teach. 2011;33(4):268–72.

Hicks SC, James RE, Wong N, Tebbutt NC, Wilson K. A case study evaluation of ethics review systems for multicentre clinical trials. Med J Aust. 2009;191(5):3.


Larson E, Bratts T, Zwanziger J, Stone P. A survey of IRB process in 68 U.S. hospitals. J Nurs Scholarsh. 2004;36(3):260–4.

Silberman G, Kahn KL. Burdens on research imposed by institutional review boards: the state of the evidence and its implications for regulatory reform. Milbank Q. 2011;89(4):599–627.

Whitney SN, Schneider CE. Viewpoint: a method to estimate the cost in lives of ethics board review of biomedical research. J Intern Med. 2011;269(4):396–402.

Upshur REG. Ask not what your REB can do for you; ask what you can do for your REB. Can Fam Physician. 2011;57(10):1113–4.

Taylor H. Moving beyond compliance: measuring ethical quality to enhance the oversight of human subjects research. IRB. 2007;29(5):9–14.

De Vries RG, Forsberg CP. What do IRBs look like? What kind of support do they receive? Account Res. 2002;9(3-4):199–216.

Guillemin M, Gillam L, Rosenthal D, Bolitho A. Human research ethics committees: examining their roles and practices. J Empir Res Hum Res Ethics. 2012;7(3):38–49. doi: 10.1525/jer.2012.7.3.38 .

Grady C. Institutional review boards: purpose and challenges. Chest. 2015;148(5):1148–55.

Burman WJ, Reves RR, Cohn DL, Schooley RT. Breaking the camel’s back: multicenter clinical trials and local institutional review boards. Ann Intern Med. 2001;134(2):152–7.

Whittaker E. Adjudicating entitlements: the emerging discourses of research ethics boards. Health: An Interdisciplinary Journal for the Social Study of Health, Illness and Medicine. 2005;9(4):513–35. doi: 10.1177/1363459305056416 .

Turner L. Ethics board review of biomedical research: improving the process. Drug Discov Today. 2004;9(1):8–12.

Nicholls SG, Hayes TP, Brehaut JC, McDonald M, Weijer C, Saginur R, et al. A Scoping Review of Empirical Research Relating to Quality and Effectiveness of Research Ethics Review. PLoS ONE. 2015;10(7):e0133639. doi: 10.1371/journal.pone.0133639 .

Mansbach J, Acholonu U, Clark S, Camargo CA. Variation in institutional review board responses to a standard, observational, pediatric research protocol. Acad Emerg Med. 2007;14(4):377–80.

Christie DRH, Gabriel GS, Dear K. Adverse effects of a multicentre system for ethics approval on the progress of a prospective multicentre trial of cancer treatment: how many patients die waiting? Internal Med J. 2007;37(10):680–6.

Greene SM, Geiger AM. A review finds that multicenter studies face substantial challenges but strategies exist to achieve institutional review board approval. J Clin Epidemiol. 2006;59(8):784–90.

Jester PM, Tilden SJ, Li Y, Whitley RJ, Sullender WM. Regulatory challenges: lessons from recent West Nile virus trials in the United States. Contemp Clin Trials. 2006;27(3):254–9.

Flynn KE, Hahn CL, Kramer JM, Check DK, Dombeck CB, Bang S, et al. Using central IRBs for multicenter clinical trials in the United States. Plos One. 2013;8(1):e54999.

National Institutes of Health (NIH). Final NIH policy on the use of a single institutional review board for multi-site research. 2017. NOT-OD-16-094. https://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-094.html . Accessed 21 Jun 2017.

Check DK, Weinfurt KP, Dombeck CB, Kramer JM, Flynn KE. Use of central institutional review boards for multicenter clinical trials in the United States: a review of the literature. Clin Trials. 2013;10(4):560–7.

Dove ES, Townend D, Meslin EM, Bobrow M, Littler K, Nicol D, et al. Ethics review for international data-intensive research. Science. 2016;351(6280):1399–400.

Ishikawa K. Introduction to Quality Control. J. H. Loftus (trans.). Tokyo: 3A Corporation; 1990.

Nyeboer N. Early-stage requirements engineering to aid the development of a business process improvement strategy. Oxford: Kellogg College, University of Oxford; 2014.

University of Calgary. Researchers. Ethics and Compliance. CHREB. 2017. http://www.ucalgary.ca/research/researchers/ethics-compliance/chreb . Accessed 21 Jun 2017.

Harvard University. Committee on the Use of Human Subjects. University-area Institutional Review Board at Harvard. 2017. http://cuhs.harvard.edu . Accessed 21 Jun 2017.

Oxford University. UAS Home. Central University Research Ethics Committee (CUREC). 2016. https://www.admin.ox.ac.uk/curec/ . Accessed 21 Jun 2017.

Bean S, Henry B, Kinsey JM, McMurray K, Parry C, Tassopoulos T. Enhancing research ethics decision-making: an REB decision bank. IRB. 2010;32(6):9–12.

Sugarman J, Getz K, Speckman JL, Byrne MM, Gerson J, Emanuel EJ. The cost of institutional review boards in academic medical centers. NEMJ. 2005;352(17):1825–7.

National Health Service, Health Research Authority. Resources, Research legislation and governance, Standard Operating Procedures. 2017. http://www.hra.nhs.uk/resources/research-legislation-and-governance/standard-operating-procedures/ . Accessed 21 June 2017.

European Commission. Clinical Trials Directive 2001/20/EC of the European Parliament and of the Council of 4 April 2001. https://ec.europa.eu/health/sites/health/files/files/eudralex/vol-1/dir_2001_20/dir_2001_20_en.pdf . Accessed 21 Jun 2017.


Acknowledgements

The authors would like to thank Dr. Michael C. King for his review of the manuscript draft.

Availability of data and materials

Not applicable.

Authors’ contributions

The listed authors (SP, JN) have each undertaken the following: made substantial contributions to conception and design of the model; been involved in drafting the manuscript; have read and given final approval of the version to be published and participated sufficiently in the work to take public responsibility for appropriate portions of the content; and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Authors’ information

SP is the Chair of the Conjoint Health Research Ethics Board at the University of Calgary. She is also a member of the Human Research Ethics Board at Mount Royal University and a member of the Research Ethics Board at the Alberta College of Art and Design. She serves on the Board of Directors for the Canadian Association of Research Ethics Boards.

JN is an Executive Technology Consultant specializing in Enterprise and Business Architecture. He has worked on process improvement initiatives across multiple industries as well as on the delivery of technology-based solutions. He was the project manager for the delivery of the IRISS online system for the Province of Alberta’s Health Research Ethics Harmonization initiative.

Competing interests

The authors declare that they have no competing interests.


Author information

Authors and Affiliations

Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada

Stacey A. Page

Conjoint Health Research Board, University of Calgary, Calgary, Alberta, Canada

ITM Vocational University, Vadodara, Gujarat, India

Jeffrey Nyeboer


Corresponding author

Correspondence to Stacey A. Page.


About this article


Page, S.A., Nyeboer, J. Improving the process of research ethics review. Res Integr Peer Rev 2, 14 (2017). https://doi.org/10.1186/s41073-017-0038-7


  • Research ethics
  • Research Ethics Boards
  • Research Ethics Committees
  • Medical research
  • Applied ethics
  • Institutional Review Boards



Ethics for Peer Reviewers


As a reviewer, you have a crucial role in supporting research integrity in the peer review and publishing process. This guide outlines some of the basic ethical principles that you are expected to follow.

These recommendations are based on the Committee on Publication Ethics (COPE) Ethical Guidelines for Peer Reviewers. Visit the COPE website to see the full ethical guidelines for reviewers.

PLOS is a member of the Committee on Publication Ethics (COPE). We abide by the COPE Core Practices and follow COPE’s best practice guidelines.

Basic ethical guidelines for peer reviewers


Choose assignments wisely

You should agree to review a manuscript only if you have the appropriate subject expertise and a sufficient amount of time to complete the review, in accordance with the journal deadline.


Provide an objective, honest, and unbiased review

  • Declare any potentially competing interests and/or recuse yourself from assignments if you have a conflict of interest.
  • Make sure your perspective is not influenced by authors’ origins, nationality, beliefs, gender, or other characteristics.
  • Do not impersonate another individual in your work as a reviewer.


Honor the confidentiality of the review process

Do not share information about manuscripts or reviews during or after peer review, and do not use any information from the review process for your own advantage.


Be respectful and professional

Make sure your review comments are professional. Keep your focus on the work and not on the individuals.

Read more about peer review ethics

Council of Science Editors, Reviewer Roles and Responsibilities

International Committee of Medical Journal Editors, Responsibilities in the Submission and Peer Review Process – Reviewers

PLOS policy on ethical publishing practice

Thank you for helping to promote greater transparency and integrity in the review process!


  • Published: 14 December 2022

Advancing ethics review practices in AI research

  • Madhulika Srikumar (ORCID: orcid.org/0000-0002-6776-4684),
  • Rebecca Finlay,
  • Grace Abuhamad,
  • Carolyn Ashurst,
  • Rosie Campbell,
  • Emily Campbell-Ratcliffe,
  • Hudson Hongo,
  • Sara R. Jordan,
  • Joseph Lindley (ORCID: orcid.org/0000-0002-5527-3028),
  • Aviv Ovadya (ORCID: orcid.org/0000-0002-8766-0137) &
  • Joelle Pineau (ORCID: orcid.org/0000-0003-0747-7250)

Nature Machine Intelligence volume 4, pages 1061–1064 (2022)



The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Its long-term success, however, requires a coordinated community effort, to support experimentation with different ethics review processes, to study their effect, and to provide opportunities for diverse voices from the community to share insights and foster norms.


As artificial intelligence (AI) and machine learning (ML) technologies continue to advance, awareness of the potential negative consequences on society of AI or ML research has grown. Anticipating and mitigating these consequences can only be accomplished with the help of the leading experts on this work: researchers themselves.

Several leading AI and ML organizations, conferences and journals have therefore started to implement governance mechanisms that require researchers to directly confront risks related to their work that can range from malicious use to unintended harms. Some have initiated new ethics review processes, integrated within peer review, which primarily facilitate a reflection on the potential risks and effects on society after the research is conducted (Box 1 ). This is distinct from other responsibilities that researchers undertake earlier in the research process, such as the protection of the welfare of human participants, which are governed by bodies such as institutional review boards (IRBs).

Box 1 Current ethics review practices

Current ethics review practices can be thought of as a sliding scale that varies according to how submitting authors must conduct an ethical analysis and document it in their contributions. Most conferences and journals are yet to initiate ethics review.

Key examples of different types of ethics review process are outlined below.

Impact statement

NeurIPS 2020 broader impact statements - all authors were required to include a statement of the potential broader impact of their work, such as its ethical aspects and future societal consequences of the research, including positive and negative effects. Organizers also specified additional evaluation criteria for paper reviewers to flag submissions with potential ethical issues.

Other examples include the NAACL 2021 and the EMNLP 2021 ethical considerations sections, which encourage authors and reviewers to consider ethical questions in their submitted papers.

Nature Machine Intelligence asks authors for ethical and societal impact statements in papers that involve the identification or detection of humans or groups of humans, including behavioural and socio-economic data.

Checklist

NeurIPS 2021 paper checklist - a checklist to prompt authors to reflect on potential negative societal effects of their work during the paper writing process (as well as other criteria). Authors of accepted papers were encouraged to include the checklist as an appendix. Reviewers could flag papers that required additional ethics review by the appointed ethics committee.

Other examples include the ACL Rolling Review (ARR) Responsible NLP Research checklist, which is designed to encourage best practices for responsible research.

Code of ethics or guidelines

International Conference on Learning Representations (ICLR) code of ethics - ICLR required authors to review and acknowledge the conference’s code of ethics during the submission process. Authors were not expected to include discussion on ethical aspects in their submissions unless necessary. Reviewers were encouraged to flag papers that may violate the code of ethics.

Other examples include the ACM Code of Ethics and Professional Conduct, which considers ethical principles but through the wider lens of professional conduct.

Although these initiatives are commendable, they have yet to be widely adopted. They are being pursued largely without the benefit of community alignment. As researchers and practitioners from academia, industry and non-profit organizations in the field of AI and its governance, we believe that community coordination is needed to ensure that critical reflection is meaningfully integrated within AI research to mitigate its harmful downstream consequences. The pace of AI and ML research and its growing potential for misuse necessitates that this coordination happen today.

Writing in Nature Machine Intelligence , Prunkl et al. 1 argue that the AI research community needs to encourage public deliberation on the merits and future of impact statements and other self-governance mechanisms in conference submissions. We agree. Here, we build on this suggestion, and provide three recommendations to enable this effective community coordination, as more ethics review approaches begin to emerge across conferences and journals. We believe that a coordinated community effort will require: (1) more research on the effects of ethics review processes; (2) more experimentation with such processes themselves; and (3) the creation of venues in which diverse voices both within and beyond the AI or ML community can share insights and foster norms. Although many of the challenges we address have been previously highlighted 1 , 2 , 3 , 4 , 5 , 6 , this Comment takes a wider view, calling for collaboration between different conferences and journals by contextualizing this conversation against more recent studies 7 , 8 , 9 , 10 , 11 and developments.

Developments in AI research ethics

In the past, many applied scientific communities have contended with the potential harmful societal effects of their research. The infamous anthrax attacks in 2001, for example, catalysed the creation of the National Science Advisory Board for Biosecurity to prevent the misuse of biomedical research. Virology, in particular, has had long-running debates about the responsibility of individual researchers conducting gain-of-function research. Today, the field of AI research finds itself at a similar juncture 12 . Algorithmic systems are now being deployed for high-stakes applications such as law enforcement and automated decision-making, in which the tools have the potential to increase bias, injustice, misuse and other harms at scale. The recent adoption of ethics and impact statements and checklists at some AI conferences and journals signals a much-needed willingness to deal with these issues. However, these ethics review practices are still evolving and are experimental in nature. The developments acknowledge gaps in existing, well-established governance mechanisms, such as IRBs, which focus on risks to human participants rather than risks to society as a whole. This limited focus leaves ethical issues such as the welfare of data workers and non-participants, and the implications of data generated by or about people outside of their scope 6 . We acknowledge that such ethical reflection, beyond IRB mechanisms, may also be relevant to other academic disciplines, particularly those for whom large datasets created by or about people are increasingly common, but such a discussion is beyond the scope of this piece. The need to reflect on ethical concerns seems particularly pertinent within AI, because of its relative infancy as a field, the rapid development of its capabilities and outputs, and its increasing effects on society.

In 2020, the NeurIPS ML conference required all papers to carry a ‘broader impact’ statement examining the ethical and societal effects of the research. The conference updated its approach in 2021, asking authors to complete a checklist and to document potential downstream consequences of their work. In the same year, the Partnership on AI released a white paper calling for the field to expand peer review criteria to consider the potential effects of AI research on society, including accidents, unintended consequences, inappropriate applications and malicious uses 3 . In an editorial citing the white paper, Nature Machine Intelligence announced that it would ask submissions to carry an ethical statement when the research involves the identification of individuals and related sensitive data 13 , recognizing that mitigating downstream consequences of AI research cannot be completely disentangled from how the research itself is conducted. In another recent development, Stanford University’s Ethics and Society Review (ESR) requires AI researchers who apply for funding to identify if their research poses any risks to society and also explain how those risks will be mitigated through research design 14 .

Other developments include the rising popularity of interdisciplinary conferences examining the effects of AI, such as the ACM Conference on Fairness, Accountability, and Transparency (FAccT), and the emergence of ethical codes of conduct for professional associations in computer science, such as the Association for Computing Machinery (ACM). Other actors have focused on upstream initiatives such as the integration of ethics reflection into all levels of the computer science curriculum.

Reactions from the AI research community to the introduction of ethics review practices include fears that these processes could restrict open scientific inquiry 3 . Scholars also note the inherent difficulty of anticipating the consequences of research 1 , with some AI researchers expressing concern that they do not have the expertise to perform such evaluations 7 . Other challenges include concerns about the lack of transparency in review practices at corporate research labs (which increasingly contribute to the most highly cited papers at premier AI conferences such as NeurIPS and ICML 9 ) as well as academic research culture and incentives supporting the ‘publish or perish’ mentality that may not allow time for ethical reflection.

With the emergence of these new attempts to acknowledge and articulate unique ethical considerations in AI research and the resulting concerns from some researchers, the need for the AI research community to come together to experiment, share knowledge and establish shared best practices is all the more urgent. We recommend the following three steps.

Study community behaviour and share learnings

So far, few studies have explored how ML researchers have responded to the launch of experimental ethics review practices. To understand how behaviour is changing and how to align practice with intended effect, we need to study what is happening and share learnings iteratively to advance innovation. For example, in response to the NeurIPS 2020 requirement for broader impact statements, one paper found that most researchers surveyed spent fewer than two hours on the statement [7], perhaps retrospectively towards the end of their research, making it difficult to know whether this reflection influenced or shifted research directions. Surveyed researchers also expressed scepticism about the mandated reflection on societal impacts [7]. An analysis of preprints found that researchers assessed impact through the narrow lens of technical contributions (that is, describing their work in the context of how it contributes to the research space rather than how it may affect society), thereby overlooking potential effects on vulnerable stakeholders [8]. A qualitative analysis of a larger sample [10] and a quantitative analysis of all submitted papers [11] found that engagement was highly variable, and that researchers tended to favour the discussion of positive effects over negative effects.

We need to understand what works. These findings, all drawn from studies examining the implementation of ethics review at NeurIPS 2020, point to a pressing need to review actual versus intended community behaviour more thoroughly and consistently to evaluate the effectiveness of ethics review practices. We recognize that other fields have considered ethics in research in different ways. To get started, we propose the following approach, building on and expanding the analysis of Prunkl et al. [1].

First, clear articulation of the purposes behind impact statements and other ethics review requirements is needed to evaluate efficacy and to motivate future iterations by the community. Publication venues that organize ethics review must communicate the expectations of this process clearly, both at the level of the individual contribution and for the community at large. At the individual level, goals could include encouraging researchers to reflect on the anticipated effects of their work on society. At the community level, goals could include creating a culture of shared responsibility among researchers and, in the longer run, identifying and mitigating harms.

Second, because the exercise of anticipating downstream effects can be abstract and risks being reduced to a box-ticking endeavour, we need more data to ascertain whether such requirements effectively promote reflection. Similar to the studies above, conference organizers and journal editors must monitor community behaviour through surveys of researchers and reviewers, partner with information scientists to analyse the responses [15], and share their findings with the larger community. Reviewing community attitudes more systematically can provide data on the process and effect of reflecting on harms for individual researchers and on the quality of the exploration encountered by reviewers, and can uncover systemic challenges to practising thoughtful ethical reflection. Work to better understand how AI researchers view their responsibility for the effects of their work, in light of changing social contexts, is also crucial.

Whether AI or ML researchers become more explicit about the downsides of their research in their papers is one preliminary metric for measuring change in community behaviour at large [2]. An analysis of the potential negative consequences of AI research can consider the types of application the research could make possible, the potential uses of those applications, and the societal effects they could cause [4].

Building on the efforts at NeurIPS [16] and NAACL [17], we can openly share our learnings as conference organizers and ethics committee members to gain a better understanding of what does and does not work.

Community behaviour in response to ethics review at the publication stage must also be studied to evaluate how structural and cultural forces throughout the research process can be reshaped towards more responsible research. The inclusion of diverse researchers and ethics reviewers, as well as people who face existing and potential harm, is a prerequisite for conducting research responsibly and improving our ability to anticipate harms.

Expand experimentation with ethical review

The low uptake of ethics review practices, and the lack of experimentation with such processes, limit our ability to evaluate the effectiveness of different approaches. Experimentation cannot be limited to a few conferences that focus on particular subdomains of ML and computing research; it is especially needed in subdomains that envision real-world applications, such as employment, policing and healthcare settings. For instance, NeurIPS, which is largely considered a methods and theory conference, began an ethics review process in 2020, whereas conferences closer to applications, such as top-tier conferences in computer vision, have yet to implement such practices.

Sustained experimentation across subfields of AI can help us to study actual community behaviour, including differences in researcher attitudes and the unique opportunities and challenges that come with each domain. In the absence of accepted best practices, implementing ethics review processes will require conference organizers and journal editors to act under uncertainty. For that reason, we recognize that it may be easier for publication venues to begin their ethics review process by making it voluntary for authors. This can provide researchers and reviewers with the opportunity to become familiar with ethical and societal reflection, remove incentives for researchers to ‘game’ the process, and help the organizers and wider community to get closer to identifying how they can best facilitate the reflection process.

Create venues for debate, alignment and collective action

This work requires considerable cultural and institutional change that goes beyond the submission of ethical statements or checklists at conferences.

Ethical codes in scientific research have proven insufficient in the absence of community-wide norms and discussion [1]. Venues for open exchange can provide opportunities for researchers to share their experiences of, and challenges with, ethical reflection. Such venues can also encourage reflection on values as they evolve in AI or ML research, including which topics are chosen for research, how research is conducted, and which values best reflect societal needs.

It is crucial to establish venues for dialogue where conference organizers and journal editors can regularly share experiences, monitor trends in attitudes, and exchange insights on actual community behaviour across domains, while taking into account the evolving research landscape and the range of opinions within it. These venues would bring together an international group of actors involved throughout the research process, from funders, research leaders and publishers to interdisciplinary experts adopting a critical lens on AI impact, including social scientists, legal scholars, public interest advocates and policymakers.

In addition, reflection and dialogue can have a powerful role in influencing the future trajectory of a technology. Historically, gatherings convened by scientists have had far-reaching effects: setting the norms that guide research, and creating practices and institutions to anticipate risks and inform downstream innovation. The Asilomar Conference on Recombinant DNA in 1975 and the Bermuda Meetings on genomic data sharing in the 1990s are instructive examples of scientists and funders, respectively, creating spaces for consensus-building [18,19].

Proposing a global forum for gene editing, the scholars Jasanoff and Hurlbut argued that such a venue should promote reflection on “what questions should be asked, whose views must be heard, what imbalances of power should be made visible, and what diversity of views exist globally” [20]. A forum for global deliberation on ethical approaches to AI or ML research will need to do the same.

By focusing on building the AI research field’s capacity to measure behavioural change, exchange insights and act together, we can amplify emerging ethical review and oversight efforts. Doing this will require coordination across the entire research community and, accordingly, will come with challenges that conference organizers and others need to consider in their funding strategies. That said, we believe there are important incremental steps that can be taken today towards realizing this change: for example, hosting an annual workshop on ethics review at pre-eminent AI conferences, holding public panels on the subject [21], hosting workshops to review ethics statements [22], and bringing conference organizers together [23]. Recent initiatives undertaken by AI research teams at companies to implement ethics review processes [24], better understand societal impacts [25] and share learnings [26,27] also show how industry practitioners can have a positive effect. The AI community recognizes that more needs to be done to mitigate this technology’s potential harms. Recent developments in ethics review in AI research demonstrate that we must take action together.

Change history

11 January 2023

A Correction to this paper has been published: https://doi.org/10.1038/s42256-023-00608-6

References

1. Prunkl, C. E. A. et al. Nat. Mach. Intell. 3, 104–110 (2021).
2. Hecht, B. et al. Preprint at https://doi.org/10.48550/arXiv.2112.09544 (2021).
3. Partnership on AI. https://go.nature.com/3UUX0p3 (2021).
4. Ashurst, C. et al. https://go.nature.com/3gsQfvp (2020).
5. Hecht, B. https://go.nature.com/3AASZhf (2020).
6. Ashurst, C., Barocas, S., Campbell, R. & Raji, D. in FAccT '22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 2057–2068 (2022).
7. Abuhamad, G. et al. Preprint at https://arxiv.org/abs/2011.13032 (2020).
8. Boyarskaya, M. et al. Preprint at https://arxiv.org/abs/2011.13416 (2020).
9. Birhane, A. et al. in FAccT '22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 173–184 (2022).
10. Nanayakkara, P. et al. in AIES '21: Proc. 2021 AAAI/ACM Conf. on AI, Ethics, and Society 795–806 (2021).
11. Ashurst, C., Hine, E., Sedille, P. & Carlier, A. in FAccT '22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 2047–2056 (2022).
12. National Academies of Sciences, Engineering, and Medicine. https://go.nature.com/3UTKOEJ (accessed 16 September 2022).
13. Nat. Mach. Intell. 3, 367 (2021).
14. Bernstein, M. S. et al. Proc. Natl Acad. Sci. USA 118, e2117261118 (2021).
15. Pineau, J. et al. J. Mach. Learn. Res. 22, 7459–7478 (2021).
16. Bengio, S. et al. Neural Information Processing Systems. https://go.nature.com/3tQxGEO (2021).
17. Bender, E. M. & Fort, K. https://go.nature.com/3TWnbua (2021).
18. Gregorowius, D., Biller-Andorno, N. & Deplazes-Zemp, A. EMBO Rep. 18, 355–358 (2017).
19. Jones, K. M., Ankeny, R. A. & Cook-Deegan, R. J. Hist. Biol. 51, 693–805 (2018).
20. Jasanoff, S. & Hurlbut, J. B. Nature 555, 435–437 (2018).
21. Partnership on AI. https://go.nature.com/3EpQwY4 (2021).
22. Sturdee, M. et al. in CHI Conf. on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts); https://doi.org/10.1145/3411763.3441330 (2021).
23. Partnership on AI. https://go.nature.com/3AzdNFW (2022).
24. DeepMind. https://go.nature.com/3EQyUWT (2022).
25. Meta AI. https://go.nature.com/3i3PBVX (2022).
26. Munoz Ferrandis, C. OpenRAIL; https://huggingface.co/blog/open_rail (2022).
27. OpenAI. https://go.nature.com/3GyZPYk (2022).


Author information

Authors and affiliations.

Partnership on AI, San Francisco, CA, USA

Madhulika Srikumar, Rebecca Finlay & Hudson Hongo

ServiceNow, Santa Clara, CA, USA

Grace Abuhamad

The Alan Turing Institute, London, UK

Carolyn Ashurst

OpenAI, San Francisco, CA, USA

Rosie Campbell

Centre for Data Ethics and Innovation, London, UK

Emily Campbell-Ratcliffe

Future of Privacy Forum, Washington, DC, USA

Sara R. Jordan

Design Research Works, Lancaster University, Lancaster, UK

Joseph Lindley

Belfer Center for Science and International Affairs, Harvard Kennedy School, Cambridge, MA, USA

Aviv Ovadya

Meta AI, Menlo Park, CA, USA

Joelle Pineau

McGill University, Montreal, Canada


Corresponding author

Correspondence to Madhulika Srikumar.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature Machine Intelligence thanks Carina Prunkl and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.


About this article

Cite this article.

Srikumar, M., Finlay, R., Abuhamad, G. et al. Advancing ethics review practices in AI research. Nat Mach Intell 4, 1061–1064 (2022). https://doi.org/10.1038/s42256-022-00585-2


Published: 14 December 2022

Issue Date: December 2022

DOI: https://doi.org/10.1038/s42256-022-00585-2





Ethics responsibilities for peer reviewers


Jaap van Harten

About this video

When you accept a request to review a manuscript, what are you actually agreeing to? Some reviewers think they have a duty to identify authors’ unethical behavior, but is that really the case? And are there any ethics “rules” reviewers should bear in mind when wielding their red pen?

In this webinar recording, experienced publisher Dr. Jaap van Harten answers those questions. He looks at the trust placed in reviewers not to share or reuse the contents of the unpublished manuscript. He explores the obligations of reviewers to provide an honest assessment of the work and highlight potential conflicts of interest. And, he reminds researchers they should only accept a review request if they have the time – and knowledge – required.

He also examines who carries the burden of ensuring a manuscript is free of plagiarism, fraud and other ethics issues. (Clue: the good news is, it’s not the reviewer! Although you can play an important role in flagging problem submissions.) You’ll come away armed with the confidence to identify where your reviewing responsibilities lie and the skills you need to meet them effectively.

About the presenter


Executive Publisher Pharmacology & Pharmaceutical Sciences, Elsevier

Jaap van Harten was trained as a pharmacist at Leiden University, The Netherlands, and got a PhD in clinical pharmacology in 1988. He then joined Solvay Pharmaceuticals, where he held positions in pharmacokinetics, clinical pharmacology, medical marketing, and regulatory affairs. In 2000 he moved to Excerpta Medica, Elsevier’s Medical Communications branch, where he headed the Medical Department and the Strategic Publication Planning Department. In 2004 he joined Elsevier’s Publishing organization, initially as Publisher of the genetics journals and books, and currently as Executive Publisher Pharmacology & Pharmaceutical Sciences.


SYSTEMATIC REVIEW article

The view of synthetic biology in the field of ethics: a thematic systematic review (provisionally accepted)

  • 1 Ankara University, Türkiye
  • 2 Department of Medical History and Ethics, School of Medicine, Ankara University, Ankara, Türkiye

The final, formatted version of the article will be published soon.

Synthetic biology is the design and creation of biological tools and systems for useful purposes. It draws on knowledge from areas of biology such as biotechnology, molecular biology, biophysics, biochemistry and bioinformatics, and from other disciplines such as engineering, mathematics, computer science, and electrical engineering. It is recognized as both a branch of science and a technology. The scope of synthetic biology ranges from modifying existing organisms to gain new properties to creating a living organism from non-living components. Synthetic biology has many applications in important fields such as energy, chemistry, medicine, the environment, agriculture, national security, and nanotechnology. Its development also raises ethical and social debates. This article aims to identify the place of ethics in synthetic biology. In this context, the theoretical ethical debates on synthetic biology from the 2000s to 2020, a period in which the development of synthetic biology was relatively rapid, were analyzed using the systematic review method. Based on the results of the analysis, the main ethical problems related to the field, problems that are likely to arise, and suggestions for solutions to these problems are presented. The data collection phase of the study included a literature review conducted according to protocols covering planning, screening, selection and evaluation. The analysis and synthesis process was carried out in the next stage, and the main themes related to synthetic biology and ethics were identified. Searches were conducted in the Web of Science, Scopus, PhilPapers and MEDLINE databases. Theoretical research articles and reviews published in peer-reviewed journals until the end of 2020 were included in the study. The language of publications was English. According to preliminary data, 1453 publications were retrieved from the four databases. After applying the inclusion and exclusion criteria, 58 publications were analyzed in the study. Ethical debates on synthetic biology have been conducted on various issues. In this context, the ethical debates in this article were examined under five themes: the moral status of synthetic biology products, synthetic biology and the meaning of life, synthetic biology and metaphors, synthetic biology and knowledge, and expectations, concerns, and problem solving: risk versus caution.

Keywords: Synthetic Biology, Ethics, Bioethics, Systematic review, Technology ethics, Responsible research and innovation

Received: 08 Mar 2024; Accepted: 10 May 2024.

Copyright: © 2024 Kurtoglu, Yıldız and Arda. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: PhD. Ayse Kurtoglu, Ankara University, Ankara, Türkiye


NIH announces the Simplified Framework for Peer Review

NIH Peer Review

The mission of NIH is to seek fundamental knowledge about the nature and behavior of living systems and to apply that knowledge to enhance health, lengthen life, and reduce illness and disability. In support of this mission, Research Project Grant (RPG) applications to support biomedical and behavioral research are evaluated for scientific and technical merit through the NIH peer review system.

The Simplified Framework for NIH Peer Review initiative reorganizes the five regulatory criteria (Significance, Investigators, Innovation, Approach, Environment; 42 C.F.R. Part 52h.8) into three factors: two will receive numerical criterion scores and one will be evaluated for sufficiency. All three factors will be considered in determining the overall impact score. The reframing of the criteria serves to focus reviewers on three central questions: (1) how important is the proposed research? (2) how rigorous and feasible are the methods? (3) do the investigators and institution have the expertise and resources necessary to carry out the project?

  • Factor 1: Importance of the Research (Significance, Innovation), scored 1–9
  • Factor 2: Rigor and Feasibility (Approach), scored 1–9
  • Factor 3: Expertise and Resources (Investigator, Environment), evaluated with a selection from a drop-down menu:
    - Appropriate (no written explanation needed)
    - Identify need for additional expertise and/or resources (requires the reviewer to briefly address specific gaps in expertise or resources needed to carry out the project)

Simplifying Review of Research Project Grant Applications

NIH Activity Codes Affected by the Simplified Review Framework.

R01, R03, R15, R16, R21, R33, R34, R36, R61, RC1, RC2, RC4, RF1, RL1, RL2, U01, U34, U3R, UA5, UC1, UC2, UC4, UF1, UG3, UH2, UH3, UH5, (including the following phased awards: R21/R33, UH2/UH3, UG3/UH3, R61/R33).

Changes Coming to NIH Applications and Peer Review in 2025

  • Simplified Review Framework for Most Research Project Grants (RPGs)
  • Revisions to the NIH Fellowship Application and Review Process
  • Updates to NRSA Training Grant Applications (under development)
  • Updated Application Forms and Instructions
  • Common Forms for Biographical Sketch and Current and Pending (Other) Support (coming soon)

Webinars, Notices, and Resources

Apr 17, 2024 - NIH Simplified Review Framework for Research Project Grants (RPG): Implementation and Impact on Funding Opportunities Webinar Recording & Resources

Nov 3, 2023 - NIH's Simplified Peer Review Framework for NIH Research Project Grant (RPG) Applications: for Applicants and Reviewers Webinar Recording & Resources

Oct 19, 2023 - Online Briefing on NIH’s Simplified Peer Review Framework for NIH Research Project Grant (RPG) Applications: for Applicants and Reviewers. See  NOT-OD-24-010

Simplifying Review FAQs

Upcoming Webinars

Learn more and ask questions at the following upcoming webinars:

June 5, 2024: Webinar on Updates to NIH Training Grant Applications (registration open)

September 19, 2024: Webinar on Revisions to the Fellowship Application and Review Process


  • Open access
  • Published: 09 May 2024

A systematic review of workplace triggers of emotions in the healthcare environment, the emotions experienced, and the impact on patient safety

  • Raabia Sattar 1 ,
  • Rebecca Lawton 1 ,
  • Gillian Janes 2 ,
  • Mai Elshehaly 3 ,
  • Jane Heyhoe 1 ,
  • Isabel Hague 1 &
  • Chloe Grindey 1  

BMC Health Services Research volume 24, Article number: 603 (2024)


Healthcare staff deliver patient care in emotionally charged settings and experience a wide range of emotions as part of their work. These emotions and emotional contexts can impact the quality and safety of care. Despite the growing acknowledgement of the important role of emotion, we know very little about what triggers emotion within healthcare environments or the impact this has on patient safety.

To systematically review studies to explore the workplace triggers of emotions within the healthcare environment, the emotions experienced in response to these triggers, and the impact of triggers and emotions on patient safety.

Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, four electronic databases were searched (MEDLINE, PsychInfo, Scopus, and CINAHL) to identify relevant literature. Studies were then selected and data synthesized in two stages. A quality assessment of the included studies at stage 2 was undertaken.

In stage 1, 90 studies were included from which seven categories of triggers of emotions in the healthcare work environment were identified, namely: patient and family factors, patient safety events and their repercussions, workplace toxicity, traumatic events, work overload, team working and lack of supervisory support. Specific emotions experienced in response to these triggers (e.g., frustration, guilt, anxiety) were then categorised into four types: immediate, feeling states, reflective, and longer-term emotional sequelae. In stage 2, 13 studies that explored the impact of triggers or emotions on patient safety processes/outcomes were included.

The various triggers of emotion and the types of emotion experienced that have been identified in this review can be used as a framework for further work examining the role of emotion in patient safety. The findings from this review suggest that certain types of emotions (including fear, anger, and guilt) were more frequently experienced in response to particular categories of triggers and that healthcare staff's experiences of negative emotions can have negative effects on patient care, and ultimately, patient safety. This provides a basis for developing and tailoring strategies, interventions, and support mechanisms for dealing with and regulating emotions in the healthcare work environment.


Healthcare is delivered in emotionally charged settings [1]. Worried patients present with complex health issues, anxious relatives need information and support, and safe care relies on clinical judgement and effective multi-disciplinary teamwork within a time-pressured, resource-limited, complex system. Working in this environment, healthcare staff experience a range of emotions (e.g., anxiety, anger, joy, sadness, pride, and guilt) which can impact the safety of the care delivered [1, 2, 3, 4, 5]. Clinical judgement often involves weighing up risk based on incomplete information and uncertain outcomes. Research outside healthcare [6, 7, 8, 9] suggests that if healthcare staff experience strong emotions while making these decisions, this can influence their decisions and behaviour.

Research focusing on the role of emotion in patient safety is still limited [1, 2, 10] and fragmented [11]. This may in part be because emotion research is complex. For example, the experience and influence of emotion can be approached and interpreted from a range of perspectives, including cognitive and social psychology, cognitive neuroscience, and sociology. There is also a lack of consensus on what is meant by ‘emotion’. In decision-making research, ‘emotion’ has been distinguished from ‘affect’ [11]: in response to stimuli or situations, ‘emotion’ is viewed as a slower, more reflective process, whilst ‘affect’ is an instantaneous and automatic reaction. Other research has focused on identifying and examining different types of affect, such as ‘anticipatory affect’, an immediate, strong visceral state in response to stimuli (e.g. anger; Knutson, 2008), and ‘anticipated affect’, the consideration of how current actions might make you feel in the future (e.g. regret) [12].

Research often lacks a clear distinction between the different types of feeling states being examined, and as such, it is difficult to build robust evidence of the processes involved and the role they each play in judgement and associated behaviour [ 11 ]. Furthermore, the emotions experienced by healthcare staff can be both positive and negative and can influence the delivery of safe care in positive and negative ways. Until more recently, the focus has tended to be on the impact of negative emotions, including their role in diagnostic accuracy [ 13 , 14 ], time spent on history taking, examinations, and treatment decisions [ 15 , 16 ], and the instigation of verbal checking during procedures [ 4 ]. More attention is now being given to the role of positive emotions in the workplace such as their effect on reasoning [ 17 ], and engagement and teamwork [ 18 ].

There are many potential triggers (e.g. physical, circumstantial, tangible, and psycho-social aspects of the immediate clinical work environment and the broader organisation) that generate feeling states through emotional reactions or interactions in the workplace. Research exploring some of the triggers of emotions within a healthcare environment has found that involvement in care that has gone wrong [4] and interactions with patients can elicit negative emotions [13, 14, 15, 16], and that triggers of emotion can operate at a clinical, hospital and system level [15]. Only a limited number of these studies have also explored how the emotions experienced by healthcare staff impact patient care [15, 16]. While emotion can have a direct effect on patient care, it can also influence patient safety indirectly. Burnout, sickness absence, and turnover are impacted by emotion [19, 20, 21] and, in turn, are associated with healthcare organisations’ ability to provide safe care [21, 22, 23]. Owing to the multifaceted approaches to research in this area, it is currently unclear what contexts and settings elicit emotions in healthcare staff, how these make healthcare staff feel, and the influence these feelings may have on decisions and actions relevant to providing safe patient care. There is therefore a need to synthesise the current evidence to develop an in-depth understanding of the triggers of emotions experienced by healthcare staff in the work environment, the emotions experienced, and the impact these may have on patient safety.

The protocol was pre-registered on PROSPERO (ID: CRD42021298970).

This systematic review aimed to identify gaps in the evidence by answering these research questions:

What triggers emotions in the healthcare work environment?

What are the emotions experienced in response to these triggers?

Are certain emotions more often experienced in the context of particular triggers?

What impact do different triggers/ emotions have on patient safety processes and outcomes?

Search strategy and databases

Four electronic databases (MEDLINE, PsychInfo, Scopus, and CINAHL) were systematically searched in March 2020, and the search was updated in January 2022. Only studies published since 2000 were sought, as this was when the Institute of Medicine’s seminal report ‘To Err is Human’ [24] was published, prompting a widespread focus on patient safety. The search strategy had three main foci (patient safety, emotions, and healthcare staff). Previous systematic reviews examining these topics, on patient safety [25] and healthcare staff [26], were used to guide search strategy development. As a foundation for developing the search terms relating to emotions, the six basic emotions (fear, anger, joy, sadness, disgust, and surprise) described by Ekman [27] were included, together with synonyms for emotion. This resulted in a search strategy that combined all three concepts (available in Appendix 1). The reference lists of all included studies were hand-searched.
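The logic of such a search can be illustrated with a short sketch. The following Python snippet is illustrative only: the review's actual search strings are given in its Appendix 1, and the terms below are invented placeholders, not the published strategy. It simply shows how synonyms within each of the three concept blocks are combined with OR, and the three blocks are then combined with AND, to form a single Boolean query.

    # Illustrative sketch of a three-concept Boolean search string.
    # The terms are placeholders; the review's real strategy is in its Appendix 1.
    def or_block(terms):
        # OR together the synonyms for one concept, quoting multi-word phrases.
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        return "(" + " OR ".join(quoted) + ")"

    patient_safety = ["patient safety", "adverse event*", "medical error*"]
    emotions = ["emotion*", "affect*", "fear", "anger", "joy", "sadness", "disgust", "surprise"]
    healthcare_staff = ["healthcare staff", "nurse*", "physician*", "clinician*"]

    # Combine the three concept blocks into one query string.
    query = " AND ".join(or_block(block) for block in (patient_safety, emotions, healthcare_staff))
    print(query)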

Eligibility criteria

Studies were included if they were: published post-2000, original empirical research (either quantitative, qualitative, or mixed-methods), published in English, conducted in any healthcare environment, and included healthcare staff as participants. Studies were excluded if they: focused on healthcare staff’s non-work-related emotions, included healthcare students or staff who were not involved in direct patient care (e.g. administrative staff), or had a primary focus on longer-term emotional states (e.g. burnout and emotional exhaustion) with no reference to specific emotions.

This review had two stages:

Stage 1: The first stage addressed the first three research questions and identified studies focused on triggers of emotions in the healthcare work environment and the specific emotions experienced by healthcare staff in response to these.

Stage 2: The second stage examined the fourth research question and identified the impact of triggers and/or emotions on patient safety outcomes and processes. The studies included in stage 1 were further screened and considered at this stage if they included either (i) triggers of emotions and their relationship with patient safety, (ii) emotions experienced and their relationship with patient safety outcomes or processes, or (iii) triggers, emotions and their relationship with patient safety outcomes or processes.

Study selection

PRISMA guidelines [ 28 ] for study selection were followed. The study selection process is described below. Throughout each stage, all decisions and any uncertainty or discrepancies were discussed by the review team to achieve consensus.

Stage 1: Title and abstract screening, followed by full-text screening, was conducted by IH and CG independently and then discussed together. RL reviewed a random 10% at the abstract review stage and all included full-text articles.

Stage 2: RS independently conducted abstract and title screening for all included studies. A random 10% of these were each independently screened by two reviewers (JH&RL). Full texts were obtained for all studies deemed potentially eligible for inclusion. All full texts were screened by RS. JH &RL double-screened half each. A final set of studies meeting all the eligibility criteria was identified for data extraction.

Assessment of the methodological quality of included studies was carried out using the 16-item quality assessment tool (QuADS) [ 25 ] which is appropriate for studies using different methodological approaches. Quality assessment was undertaken independently for two studies by three reviewers (RS, GJ&JH) and scores were discussed to check for consistency. RS&GJ completed a quality assessment for the remaining studies and discussed scores to check for consistency. No studies were discarded based on low scoring.

Data extraction

Stage 1: A data extraction form developed in Microsoft Excel by IH&CG and agreed with the wider review team was used to extract: the study title, triggers of emotions in the healthcare work environment, and emotions experienced in response to these triggers. Two reviewers extracted these data (IH&CG), conferring at intervals throughout the extraction process to ensure consistency. Due to the large number of studies at this stage and our aim to take a broader approach to explore triggers and the associated emotions, we did not extract data related to study characteristics. We categorised both the types of triggers and emotions (drawing on existing theory and wider team expertise) to advance knowledge by providing an initial framework for further testing. The detailed process for categorisation of the emotions and triggers is described in supplementary Appendix 2 .

Stage 2: A data extraction form developed and agreed upon by authors was used to extract: information on the study population, setting, design and methods used, key findings, conclusions, recommendations, triggers of emotions, emotions experienced, and impact on patient safety. CG&IH completed data extraction for included studies. This was cross-checked by RS and discussed with all reviewers.

Categorising the patient safety outcomes and processes

The wide range of patient safety processes and outcomes (n = 50) from the included studies meant it was necessary to reduce the data. Therefore, categories of outcomes were developed to allow the relationship between triggers/emotions and patient safety to be explored. The first step in the categorisation process involved a team of 8 patient safety researchers using a sorting process in which they were provided with 50 cards, each describing a patient safety process/outcome extracted from the studies. Working independently, they grouped these 50 cards and gave each group a title. A large group discussion with all 8 patient safety researchers followed this, resulting in 7 categories. We then presented these categories, and the items each contained, to a large group of patient safety researchers, healthcare staff, and patients (n = 16) at an inter-disciplinary meeting. This resulted in a final set of five categories representing patient safety processes: altered interaction with patients, disengagement with the job, negative consequences for work performance, defensive practice, being more cautious, negative impact on team relationships, and reduced staff confidence (see Appendix 2 for further detail), and the sixth, patient safety outcomes.

Quality assessment

There was a very high level of agreement between RS and GJ regarding the quality assessment. The quality of studies was variable, with total scores ranging from 48% to 79% across the studies. There was limited discussion of relevant theories related to emotions and patient safety, and few studies provided a rationale for the choice of data collection tools. There was also limited evidence to suggest stakeholders had been considered in the research design, and limited, or often no, justification for the analytical methods used. A detailed quality assessment table is available in Appendix 3.

After duplicates were removed, the search resulted in 8,432 articles for initial review, which were downloaded into the reference management software EndNote (see the PRISMA flow diagram in Fig. 1). Stage 1: 90 studies met the inclusion criteria, investigating triggers of emotions in the healthcare work environment and the emotions experienced by healthcare staff in response to these.

Fig. 1: PRISMA flow diagram

Research question 1: What triggers emotions in the healthcare work environment?

The following categories of triggers were identified:

(1) Patient and family factors (including patient aggression, challenging patient behaviours, patient violence, patient hostility, and interactions with patients' families)*

(2) Patient safety events and their repercussions (including adverse events, errors, medical errors, and surgical complications)*

(3) Workplace toxicity (including workplace bullying, and staff hostility)*

(4) Traumatic events with negative outcomes for patients (including patient deaths/suicide, patient deterioration, and critical incidents).

(5) Work overload (including work pressures and poor staffing levels).

(6) Team working and lack of supervisory support (including teamwork and the lack of appropriate managerial support).

*The most common triggers investigated and reported in the literature

Research question 2: What are the emotions experienced in response to these triggers?

In response to the triggers described above, healthcare staff experienced four main types of emotions:

1. Immediate: an instantaneous, visceral emotional response to a trigger (e.g. fear, anxiety, anger, comfort, satisfaction, joy).

2. Feeling states: short-lived, more mindful and conscious cognitive-based responses to a trigger (e.g. feeling disoriented, confused, helpless, inadequate, alone).

3. Reflective/self-conscious: mindful and conscious cognitive-based responses after exposure to a trigger and following time to reflect on how others may perceive them (e.g. moral distress, guilt, pride).

4. Sequelae: chronic and longer-term mental health states that arise as a result of repeated exposure to a trigger and experiencing the emotions in response to that trigger over time (e.g. chronic depression, fatigue, distress, PTSD symptoms).

Research question 3: Are certain emotions more often experienced in the context of particular triggers?

The frequency of the emotions experienced across the studies in response to the categories of triggers is illustrated using a heat map (Fig. 2) developed by a data visualisation expert (MA). Below is a summary of the emotions most frequently experienced by healthcare staff in response to each category of trigger across the studies.

Patient and family factors: immediate emotional responses including, most commonly, anger and frustration

Patient safety events and their repercussions: reflective/self-conscious emotions including guilt and regret

Workplace toxicity: immediate emotional responses including fear and anxiety

Traumatic events with negative outcomes for patients: reflective/self-conscious emotions including guilt and regret

Work overload: immediate emotional responses including anxiety and worry

Team working and supervisory support: immediate emotional responses including anxiety and worry

Fig. 2: A heat map displaying the triggers of emotions experienced by healthcare staff and the emotions experienced in response to these (a darker colour on the heat map represents a higher frequency of that emotion being experienced across the studies)
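As an aside on how such a figure can be produced, the following minimal Python sketch tallies a trigger-by-emotion frequency table and renders it as a heat map with matplotlib. The trigger labels follow the categories identified above, but the emotion columns and counts are invented placeholders for illustration, not data extracted from the included studies.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Placeholder counts (number of studies reporting each emotion per trigger);
    # these values are illustrative only, not findings from this review.
    counts = pd.DataFrame(
        {
            "Anger/frustration": [14, 6, 5, 4, 3, 2],
            "Fear/anxiety": [9, 7, 11, 6, 8, 7],
            "Guilt/regret": [2, 12, 1, 9, 1, 1],
        },
        index=[
            "Patient and family factors",
            "Patient safety events",
            "Workplace toxicity",
            "Traumatic events",
            "Work overload",
            "Team working and supervisory support",
        ],
    )

    fig, ax = plt.subplots(figsize=(7, 4))
    im = ax.imshow(counts.values, cmap="Reds")  # darker cell = higher frequency
    ax.set_xticks(range(counts.shape[1]))
    ax.set_xticklabels(counts.columns, rotation=45, ha="right")
    ax.set_yticks(range(counts.shape[0]))
    ax.set_yticklabels(counts.index)
    fig.colorbar(im, ax=ax, label="Number of studies reporting the emotion")
    fig.tight_layout()
    plt.show()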

Stage 2, research question 4: What impact do different triggers/emotions have on patient safety processes or outcomes?

Thirteen publications [ 15 , 16 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 ] addressed this research question and were included at this stage.

All 13 studies [15, 16, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40] described the following patient safety processes/outcomes as being impacted by either the triggers of emotions or the emotions experienced: altered interaction with patients, disengagement with the job, negative consequences for work performance, defensive practice, being more cautious, negative impact on team relationships and reduced staff confidence, and patient safety outcomes.

Depending on the nature of the studies, some explored only one of the patient safety processes/outcomes, whereas others focused on several. Eight studies used quantitative methods [ 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 ], three qualitative methods [ 15 , 38 , 39 ], and two mixed-methods designs [ 16 , 40 ] to explore these relationships. Twelve studies were conducted in hospital settings [ 15 , 16 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 ], and one was conducted in hospital and community healthcare settings [ 40 ].

The impact of triggers of emotion on patient safety processes or outcomes

Of the 13 studies [ 15 , 16 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 , 40 ], 10 [ 30 , 31 , 32 , 33 , 34 , 35 , 36 , 38 , 39 , 40 ] included an exploration of or commented on the impact of triggers of emotion on patient safety processes/outcomes. None had a direct focus on exploring the relationship between the triggers and patient safety processes/outcomes; rather some stated this as one of multiple aims, whereas others reported any associations as part of the broader study findings. The relationship between specific triggers and patient safety processes/outcomes is displayed in Table  1 .

The most commonly described patient safety processes/outcomes following exposure to the triggers were 'disengagement with the job' [ 30 , 31 , 35 , 40 , 41 ] and ‘being more cautious’ [ 31 , 32 , 33 , 34 , 35 ]. There was increased disengagement with the job after experiencing workplace bullying, medical errors, and workplace violence. This included dissatisfaction and a desire to change jobs or leave healthcare practice. Involvement in medical errors, surgical complications, and workplace bullying also resulted in staff being more cautious. For example, they reported paying more attention to detail, keeping better patient records, and increased information-seeking from colleagues.

Within four studies, triggers were described as having a ‘negative impact on team relationships’ [ 30 , 34 , 36 , 39 ] where being exposed to workplace bullying resulted in communication problems amongst staff and conflicts with co-workers. In one case being involved in a patient safety incident resulted in staff feeling uncomfortable within their team [ 36 ]. Workplace bullying and being involved in a patient safety incident also resulted in ‘Negative consequences for work performance’ in four studies [ 30 , 34 , 36 , 40 ] which included delays in care delivery and being unable to provide quality care.

Exposure to triggers was also linked to ‘defensive practice’ [ 29 , 32 , 36 ]. There was an increase in defensive practice (e.g. ordering more tests, keeping errors to self, and avoiding risks) as a result of triggers such as workplace bullying and medical errors. 'Patient safety outcomes' were only described in one study, where it was perceived that there may be an increased risk of patient safety incidents such as increased medical errors, patient falls, adverse events, or patient mortality as a result of experiencing workplace bullying [ 30 ].

The impact of emotions on patient safety processes or outcomes

Only three studies [ 15 , 16 , 37 ] included an exploration of the impact of emotions on either patient safety processes [ 15 , 16 ] or outcomes [ 37 ]. These studies directly explored the relationship between the emotions experienced in response to triggers and patient safety processes/outcomes (see Table  2 ).

In response to the emotions experienced by healthcare staff, ‘negative consequences for work performance’ were described [15, 16]: staff reported feeling unable to provide quality care, or a delay or failure to provide appropriate examination or treatment. Emotions also influenced defensive practices [16], such as risk avoidance and the provision of unnecessary treatment, and were described as an influencing factor in overprescribing. Emotions were also found to influence the use of physical restraint in mental health settings, where a positive correlation was reported between staff experiencing anger (as a result of patient aggression) and the approval of physical restraint [37].

The type of patient also influenced the emotions experienced by staff, which in turn altered the interaction with patients [15, 16]. Isbell et al. [16] found that encounters with angry patients and patients with mental health conditions elicited highly negative emotions such as fear and frustration, and that staff spent less time with these patients and acted less compassionately. Increased interaction, including expediting patient care and spending more time with the patient, was associated with encounters with positive patients, who elicited positive emotions (happiness, satisfaction) in staff. Isbell et al. [15] also found that patients with psychiatric conditions elicited negative emotions, which resulted in reduced patient interaction and the potential for diagnostic error.

Whilst these studies [ 15 , 16 , 37 ] highlight that emotions may impact patient safety processes and/or outcomes, it was not always possible to ascertain the impact of specific emotions. Studies by Isbell et al. [ 15 , 16 ] illustrate how negative emotions elicited by patients have a negative impact on patient safety processes, whereas positive emotions resulting from patient behaviours have the potential to enhance patient care. However, it is difficult to disentangle the effect of specific emotions, due to a lack of evidence regarding the link between individual emotions and patient safety processes and/or outcomes as studies have not attempted to explore this. Only Jalil et al. [ 37 ] focused specifically on anger as an emotion and its impact on restraint practices, where higher levels of anger were correlated with greater approval of restraint of mental health patients.

Summary of main aims and findings

This review has identified and categorised the triggers of emotions in the healthcare work environment and the types of emotions experienced by healthcare staff in response to those triggers. It has also established the types of emotions more often experienced in the context of particular triggers, and the impact that different triggers and emotions may have on patient safety processes or outcomes. The most frequently reported triggers within the literature were ‘patient and family factors’, ‘patient safety events and their repercussions’, and ‘workplace toxicity’, and the most frequently cited emotions were ‘anger, frustration, rage, irritation, annoyance’ and ‘guilt, regret and self-blame’. These emotions were all negative in nature, which may reflect a bias in the research literature.

The studies that focused on the triggers did not directly set out to assess the impact of triggers of emotions on patient safety processes or outcomes, but the reporting of this link in study findings did enable some knowledge to be gained about it. Studies that did focus on emotions and patient safety directly explored the relationship between the emotions experienced in response to triggers and patient safety processes and outcomes. Previous literature [41, 42, 43] supports the link between the categories of patient safety processes identified within this review (including reduced staff confidence, disengagement with the job, and defensive practice) and patient care and/or patient safety, suggesting these processes may serve as mechanisms that influence patient safety. Only three studies were identified that focused on the impact of emotions experienced by healthcare staff within the work environment on patient safety processes/outcomes [15, 16, 37]. These studies highlight that the majority of emotional responses experienced by healthcare staff are negative and have the potential to result in negative work performance (including being unable to provide quality care), increased defensive practice, and negative patient safety outcomes (increased approval of physical restraint). In only one study [15] were positive emotions reported, and these resulted in positive outcomes, including expediting patient care and spending more time with the patient.

The findings of this review support previous calls to acknowledge the importance of emotions and their impact on safe care [1, 2, 4, 5, 44]; however, research in this area is still limited and fragmented [15, 16]. Except for one study [41], it was not possible to ascertain the association between specific emotions and patient safety processes, and even for this study (a cross-sectional survey) causal relationships were not demonstrated. Nevertheless, the findings do suggest that negative emotions elicited by patients within healthcare staff have a negative impact on the described patient safety processes, and that positive emotions have a positive impact on these processes. Earlier work by Croskerry et al. [10] drew attention to the fact that healthcare providers are not immune to emotional influences, and must therefore take care not to allow their emotional experiences to negatively influence the care they provide.

Within this review, patients were described as the most common trigger eliciting emotions and subsequently influencing patient safety processes and outcomes. Although one study did also identify hospital- and system-level factors as triggers [15], these were not explored in relation to patient safety processes. As well as patients, many other factors within the healthcare work environment were identified in the first stage of this review as influencing the emotions staff experienced. However, how emotional responses to such triggers affect patient safety processes and outcomes is currently unclear and warrants further research. The studies reviewed here focused on the intrapersonal effects of emotion. Researchers have recently highlighted the need for further work to understand the social aspects of emotion [45, 46, 47]. The Emotions as Social Information (EASI) model [45] posits that many of our decisions and actions cannot be explained solely by individual thought processes, but are often due to social interaction, which involves observing and responding to the emotional displays of others; this provides a potentially useful framework for further exploration.

Workplace violence and patient aggression were identified in this review as triggers of emotions in the healthcare work environment. Research evidence suggests that gender plays a role in determining who is subjected to workplace violence and the type of violence they may experience. Male healthcare staff report experiencing a higher prevalence of workplace violence than their female counterparts [48, 49]. Gender also influenced the type of violence experienced: in general, female healthcare staff experienced more verbal violence and male healthcare staff experienced more physical violence [48]. Different risk factors for workplace violence have been reported for males and females: for male healthcare staff, lower income levels and managerial roles were associated with a higher risk of workplace violence, whereas longer working hours were associated with a higher risk among female healthcare staff [49].

As experiencing workplace violence and patient aggression has been found to have a negative impact on the delivery of patient care, this is a topic area that warrants further research. The majority of emotions identified in response to the triggers in this review were negative in nature. Within the few studies where positive emotions were mentioned, experiencing these as a result of a positive patient encounter was associated with increased interaction with patients, where healthcare staff perceived they were more engaged and provided expedited care [15, 16]. This finding is congruent with the limited previous research suggesting that positive emotions may improve patient safety and patient care; positive affect led medical students to identify lung cancer in patients more quickly [50] and resulted in correctly diagnosing patients with liver disease sooner [51]. However, positive emotional responses may also have the opposite effect, e.g. over-testing and over-treating patients, or reducing staff belief that the patient has a serious illness, resulting in adverse outcomes [16]. Greater understanding is required to articulate the conditions and triggers of positive emotions and when these might support patient safety or cause harm [44].

Limitations

There was heterogeneity within the included studies, and the primary aim of most studies was not to answer the research questions posed here. To answer our research questions, it was necessary to include articles where the study aims addressed only one of the concepts of interest, or where only limited associations were made between triggers of emotion, or the emotions experienced in response to triggers, and patient safety processes or outcomes. Moreover, it is important to recognise that there is likely to be some bias in the research literature, meaning that the triggers of emotion we identified from the current published research, and the emotions experienced in response to these, cannot be assumed to accurately represent the routine experiences of healthcare staff. Also worthy of note is that the majority of studies focus on negative triggers of emotions or the negative emotions experienced, which may also lead to reporting bias. We also acknowledge that we did not search for studies published before the year 2000.

Implications for future research and practice

The triggers of emotion and types of emotion experienced that have been identified in this review can be used as a framework for further work examining the role of emotion in patient safety. Developing validated measures of the triggers of emotions, and of the types of emotions experienced by healthcare staff in the work environment, will facilitate this and is urgently needed. The findings also suggest that particular types of emotion were more frequently experienced in response to particular categories of triggers, and that healthcare staff’s experiences of negative emotions have negative effects on patient care and, ultimately, patient safety. This provides a basis for developing and tailoring strategies, interventions, and support mechanisms for dealing with either short-term or long-term consequences, and for the regulation of emotions in the healthcare work environment. For example, healthcare staff can be offered some time out from their clinical duties to take a brief pause when immediate and short-term emotional reactions are experienced. They may also be provided with one-to-one peer support to help them work through more reflective, self-conscious emotional responses. The findings also highlight the possibility of preparing healthcare staff for likely emotional reactions in particular clinical situations, to assist them in being more mindful of the possible impact on the safety of the care they provide. The limited research currently available suggests that emotions influence patient safety processes/outcomes, and further research is needed to explore this relationship. For example, studies that focused exclusively on more amorphous emotional concepts such as burnout were excluded from this review; however, in some of the included studies these longer-term emotional responses were identified in addition to the immediate, short-term, and reflective emotions. Further research should explore longer-term emotional responses such as PTSD, burnout, and work satisfaction, the associated triggers, and the impact on patient safety.

It is important to raise awareness, through training and education for healthcare staff, of the potential impact on patient safety of emotional triggers and the emotions experienced in response to these. As suggested by previous authors, we recommend that emotional awareness and regulation skills, both of which can be developed and enhanced using emotional intelligence training interventions [52, 53], are included in healthcare staff training [44, 54, 55, 56]. Future work should also distinguish between specific types of emotional response, rather than broadly classifying these as negative or positive, and explore how these influence patient safety. The findings also have potential implications for health equity, given that the evidence indicates that certain types of patients (e.g. patients who are angry or who have mental health conditions) are more likely to provoke negative emotions, and such emotions can negatively affect patient care and safety. This suggests that these patient groups may receive poorer quality of care due to social factors beyond their control; this is an area that requires further research.

Conclusions

Healthcare staff are exposed to many emotional triggers within their work environment including patient safety events, traumatic events, work overload, workplace toxicity, lack of supervisory support, and patient and family factors. In response, healthcare staff experience emotions ranging from anger and guilt to longer-term burnout and PTSD symptoms. Both triggers and the emotional responses to these are perceived to negatively impact patient care and safety, although robust empirical evidence is lacking.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

References

Heyhoe J, Birks Y, Harrison R, O’Hara JK, Cracknell A, Lawton R. The role of emotion in patient safety: are we brave enough to scratch beneath the surface? J R Soc Med. 2016;109(2):52–8. https://doi.org/10.1177/0141076815620614.


Croskerry P, Abbass A, Wu AW. Emotional influences in patient safety. J Patient Saf. 2010;6(4):199–205.

Ubel PA. Emotions, decisions, and the limits of rationality: symposium introduction. Med Decis Making. 2005;25(1):95–6. https://doi.org/10.1177/0272989X04273143.

Iedema R, Jorm C, Lum M. Affect is central to patient safety: The horror stories of young anaesthetists. Soc Sci Med. 2009;69(12):1750–6. https://doi.org/10.1016/j.socscimed.2009.09.043 .

Heyhoe J, Lawton R. Social emotion and patient safety: an important and understudied intersection. BMJ Qual Saf. 2020;29(10):1–2. https://doi.org/10.1136/bmjqs-2019-010795 .

Finucane ML, Alhakami A, Slovic P, Johnson SM. The affect heuristic in judgments of risks and benefits. J Behav Decis Mak. 2000;13(1):1–17. https://doi.org/10.1002/(SICI)1099-0771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S.


Finucane ML, Alhakami A, Slovic P, Johnson SM. The affect heuristic in judgments of risks and benefits. J Behav Decis Making. 2000;13(1):1–7.

Loewenstein G. Emotions in economic theory and economic behavior. Am Econ Rev. 2000;90(2):426–32.

Tversky A, Kahneman D. Judgment under Uncertainty: Heuristics and Biases: Biases in judgments reveal some heuristics of thinking under uncertainty. Science. 1974;185(4157):1124–31. https://doi.org/10.1126/science.185.4157.1124 .


Croskerry P, Abbass AA, Wu AW. How doctors feel: affective issues in patients’ safety. The Lancet. 2008;372(9645):1205–6.

Baumeister RF, Vohs KD, Nathan DeWall C, Zhang L. How emotion shapes behavior: Feedback, anticipation, and reflection, rather than direct causation. Pers Soc Psychol Rev. 2007;11(2):167–203. https://doi.org/10.1177/1088868307301033 .

Ferrer RA, Mendes WB. Emotion, health decision making, and health behaviour. Psychol Health. 2018;33(1):1–16. https://doi.org/10.1080/08870446.2017.1385787 .

Schmidt HG, Van Gog T, Schuit SC, Van den Berge K, Van Daele PL, Bueving H, Van der Zee T, Van den Broek WW, Van Saase JL, Mamede S. Do patients’ disruptive behaviours influence the accuracy of a doctor’s diagnosis? A randomised experiment. BMJ Qual Saf. 2017;26(1):19–23. https://doi.org/10.1136/bmjqs-2015-004109 .

Mamede S, Van Gog T, Schuit SC, Van den Berge K, Van Daele PL, Bueving H, Van der Zee T, Van den Broek WW, Van Saase JL, Schmidt HG. Why patients’ disruptive behaviours impair diagnostic reasoning: a randomised experiment. BMJ Qual Saf. 2017;26(1):13–8. https://doi.org/10.1136/bmjqs-2015-005065 .

Isbell LM, Boudreaux ED, Chimowitz H, Liu G, Cyr E, Kimball E. What do emergency department physicians and nurses feel? A qualitative study of emotions, triggers, regulation strategies, and effects on patient care. BMJ Qual Saf. 2020;29(10):1–2. https://doi.org/10.1136/bmjqs-2019-010179 .


Isbell LM, Tager J, Beals K, Liu G. Emotionally evocative patients in the emergency department: a mixed methods investigation of providers’ reported emotions and implications for patient safety. BMJ Qual Saf. 2020;29(10):1–2. https://doi.org/10.1136/bmjqs-2019-010110 .

Craciun M. Emotions and knowledge in expert work: A comparison of two psychotherapies. Am J Sociol. 2018;123(4):959–1003. https://doi.org/10.1086/695682# .

Diener E, Thapa S, Tay L. Positive emotions at work. Annu Rev Organ Psych Organ Behav. 2020;7:451–77.

Sirriyeh R, Lawton R, Gardner P, Armitage G. Coping with medical error: a systematic review of papers to assess the effects of involvement in medical errors on healthcare professionals’ psychological well-being. Qual Saf Health Care. 2010;19(6):e43–e43. https://doi.org/10.1136/qshc.2009.035253 .

Seys D, Wu AW, Gerven EV, Vleugels A, Euwema M, Panella M, Scott SD, Conway J, Sermeus W, Vanhaecht K. Health care professionals as second victims after adverse events: a systematic review. Eval Health Prof. 2013;36(2):135–62. https://doi.org/10.1177/0163278712458918.

Harrison R, Lawton R, Stewart K. Doctors’ experiences of adverse events in secondary care: the professional and personal impact. Clin Med. 2014;14(6):585. https://doi.org/10.7861/clinmedicine.14-6-585.

Lever I, Dyball D, Greenberg N, Stevelink SA. Health consequences of bullying in the healthcare workplace: a systematic review. J Adv Nurs. 2019;75(12):3195–209. https://doi.org/10.1111/jan.13986 .

Bailey S. Parliamentary report on workforce burnout and resilience. BMJ. 2021;373:n1603. https://doi.org/10.1136/bmj.n1603.

Kohn LT, Corrigan JM, Donaldson MS (Institute of Medicine). To err is human: building a safer health system. Washington, DC: National Academy Press; 2000.

Hall LH, Johnson J, Watt I, Tsipa A, O’Connor DB. Healthcare staff wellbeing, burnout, and patient safety: a systematic review. PLoS ONE. 2016;11(7):e0159015. https://doi.org/10.1371/journal.pone.0159015 .


Gagnon MP, Ngangue P, Payne-Gagnon J, Desmartis M. m-Health adoption by healthcare professionals: a systematic review. J Am Med Inform Assoc. 2016;23(1):212–20. https://doi.org/10.1093/jamia/ocv052 .

Ekman P. An argument for basic emotions. Cogn Emot. 1992;6(3–4):169–200. https://doi.org/10.1080/02699939208411068 .

Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151:264–9. https://doi.org/10.7326/0003-4819-151-4-200908180-00135 .

Harrison R, Jones B, Gardner P, Lawton R. Quality assessment with diverse studies (QuADS): an appraisal tool for methodological and reporting quality in systematic reviews of mixed-or multi-method studies. BMC Health Serv Res. 2021;21(1):1–20.


Al Omar M, Salam M, Al-Surimi K. Workplace bullying and its impact on the quality of healthcare and patient safety. Hum Resour Health. 2019;17(1):1–8.

Chard R. How perioperative nurses define, attribute causes of, and react to intraoperative nursing errors. AORN J. 2010;91(1):132–45. https://doi.org/10.1016/j.aorn.2009.06.028 .

Biggs S, Waggett HB, Shabbir J. Impact of surgical complications on the operating surgeon. Colorectal Dis. 2020;22(9):1169–74. https://doi.org/10.1111/codi.15021 .

Bari A, Khan RA, Rathore AW. Medical errors; causes, consequences, emotional response and resulting behavioral change. Pak J Med Sci. 2016;32(3):523. https://doi.org/10.12669/pjms.323.9701.

Yildirim A, Yildirim D. Mobbing in the workplace by peers and managers: mobbing experienced by nurses working in healthcare facilities in Turkey and its effect on nurses. J Clin Nurs. 2007;16(8):1444–53. https://doi.org/10.1111/j.1365-2702.2006.01814.x .

Karga M, Kiekkas P, Aretha D, Lemonidou C. Changes in nursing practice: associations with responses to and coping with errors. J Clin Nurs. 2011;20(21–22):3246–55. https://doi.org/10.1111/j.1365-2702.2011.03772.x .

Vanhaecht K, Seys D, Schouten L, Bruyneel L, Coeckelberghs E, Panella M, Zeeman G. Duration of second victim symptoms in the aftermath of a patient safety incident and association with the level of patient harm: a cross-sectional study in the Netherlands. BMJ Open. 2019;9(7):e029923. https://doi.org/10.1136/bmjopen-2019-029923.

Jalil R, Huber JW, Sixsmith J, Dickens GL. Mental health nurses’ emotions, exposure to patient aggression, attitudes to and use of coercive measures: Cross sectional questionnaire survey. Int J Nurs Stud. 2017;75:130–8. https://doi.org/10.1016/j.ijnurstu.2017.07.018 .

Pinto A, Faiz O, Bicknell C, Vincent C. Surgical complications and their implications for surgeons’ well-being. J Br Surg. 2013;100(13):1748–55. https://doi.org/10.1002/bjs.9308 .


Hassankhani H, Parizad N, Gacki-Smith J, Rahmani A, Mohammadi E. The consequences of violence against nurses working in the emergency department: A qualitative study. Int Emerg Nurs. 2018;39:20–5. https://doi.org/10.1016/j.ienj.2017.07.007 .

Chambers CNL, et al. ‘It feels like being trapped in an abusive relationship’: bullying prevalence and consequences in the New Zealand senior medical workforce: a cross-sectional study. BMJ Open. 2018;8(3):e020158. https://doi.org/10.1136/bmjopen-2017-020158.

Janes G, Mills T, Budworth L, Johnson J, Lawton R. The association between health care staff engagement and patient safety outcomes: a systematic review and meta-analysis. J Patient Saf. 2021;17(3):207. https://doi.org/10.1097/PTS.0000000000000807 .

Owens KM, Keller S. Exploring workforce confidence and patient experiences: A quantitative analysis. Patient Exp J. 2018;5(1):97–105. https://doi.org/10.35680/2372-0247.1210 .

White AA, Gallagher TH. After the apology: Coping and recovery after errors. AMA J Ethics. 2011;13(9):593–600.

Liu G, Chimowitz H, Isbell LM. Affective influences on clinical reasoning and diagnosis: insights from social psychology and new research opportunities. Diagnosis. 2022. https://doi.org/10.1515/dx-2021-0115 .

Van Kleef GA. How emotions regulate social life: The emotions as social information (EASI) model. Curr Dir Psychol Sci. 2009;18(3):184–8.

Fischer AH, Van Kleef GA. Where have all the people gone? A plea for including social interaction in emotion research. Emot Rev. 2010;2(3):208–11. https://doi.org/10.1177/1754073910361980.

Van Kleef GA, Cheshin A, Fischer AH, Schneider IK. The social nature of emotions. Front Psychol. 2016;7:896.


Maran DA, Cortese CG, Pavanelli P, Fornero G, Gianino MM. Gender differences in reporting workplace violence: a qualitative analysis of administrative records of violent episodes experienced by healthcare workers in a large public Italian hospital. BMJ Open. 2019;9(11):e031546. https://doi.org/10.1136/bmjopen-2019-031546.

Sun L, Zhang W, Qi F, Wang Y. Gender differences for the prevalence and risk factors of workplace violence among healthcare professionals in Shandong, China. Front Public Health. 2022;10:873936. https://doi.org/10.3389/fpubh.2022.873936.

Fischer AH, Manstead AS, Zaalberg R. Social influences on the emotion process. Eur Rev Soc Psychol. 2003;14(1):171–201. https://doi.org/10.1080/10463280340000054 .

Isen AM, Rosenzweig AS, Young MJ. The influence of positive affect on clinical problem solving. Med Decis Making. 1991;11(3):221–7. https://doi.org/10.1177/0272989X9101100313.

Estrada CA, Isen AM, Young MJ. Positive affect facilitates integration of information and decreases anchoring in reasoning among physicians. Organ Behav Hum Decis Process. 1997;72(1):117–35. https://doi.org/10.1006/obhd.1997.2734 .

Hodzic S, Scharfen J, Ripoll P, Holling H, Zenasni F. How efficient are emotional intelligence trainings: a meta-analysis. Emot Rev. 2018;10(2):138–48.

Mattingly V, Kraiger K. Can emotional intelligence be trained? A meta-analytical investigation. Hum Resour Manag Rev. 2019;29(2):140–55. https://doi.org/10.1016/j.hrmr.2018.03.002 .

Nightingale S, Spiby H, Sheen K, Slade P. The impact of emotional intelligence in health care professionals on caring behaviour towards patients in clinical and long-term care settings: Findings from an integrative review. Int J Nurs Stud. 2018;80:106–17. https://doi.org/10.1016/j.ijnurstu.2018.01.006 .

Bourgeon L, Bensalah M, Vacher A, Ardouin JC, Debien B. Role of emotional competence in residents’ simulated emergency care performance: a mixed-methods study. BMJ Qual Saf. 2016;25(5):364–71. https://doi.org/10.1136/bmjqs-2015-004032 .


Acknowledgements

The authors would like to thank all those who were involved in supporting this review, including the Workforce Engagement and Wellbeing Theme members within the Patient Safety Translational Research Centre.

Funding

This research is funded by the National Institute for Health Research (NIHR) Yorkshire and Humber Patient Safety Translational Research Centre (NIHR Yorkshire and Humber PSTRC) and Yorkshire and Humber ARC. This research has also been supported by the Yorkshire and Humber Patient Safety Research Collaboration (PSRC). The views expressed in this article are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care.

Author information

Authors and Affiliations

Bradford Teaching Hospitals NHS Foundation Trust, Bradford, BD9 6RJ, UK

Raabia Sattar, Rebecca Lawton, Jane Heyhoe, Isabel Hague & Chloe Grindey

Anglia Ruskin University, Cambridge, CB1 1PT, UK

Gillian Janes

University of London, London, EC1V 0HB, UK

Mai Elshehaly


Contributions

RS, RL, GJ, and JH contributed towards the development of the ideas for this review. RS built the search strategy, which was checked by RL and JH. RS performed the searches in all databases. Data extraction for stage 1 was carried out by CG and IH and checked by RS, RL, JH, and GJ. Data extraction for stage 2 was carried out by RS and checked by RL and JH. Data analysis was led by RS. ME developed the infographic for the results and supported the interpretation of the data. RS wrote the first draft of the manuscript. All authors provided input and recommendations at all stages of the study and revised the draft manuscript. All authors read, contributed towards, and approved the final manuscript.

Corresponding author

Correspondence to Raabia Sattar .

Ethics declarations

Ethics approval and consent to participate.

Ethics approval and consent to participate were not required for this systematic review.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Sattar, R., Lawton, R., Janes, G. et al. A systematic review of workplace triggers of emotions in the healthcare environment, the emotions experienced, and the impact on patient safety. BMC Health Serv Res 24 , 603 (2024). https://doi.org/10.1186/s12913-024-11011-1


Received : 31 August 2023

Accepted : 18 April 2024

Published : 09 May 2024

DOI : https://doi.org/10.1186/s12913-024-11011-1


  • Healthcare work environment
  • Patient safety
  • Systematic review


Articles on peer review (évaluation par les pairs)

‘Peer Community In’: an alternative scientific publishing system
Denis Bourguet, Inrae; Etienne Rouzies, Université de Perpignan; and Thomas Guillemaud, Inrae

Debate: How does open peer review renew the scholarly conversation?
Anne Baillot, Le Mans Université; Anthony Pecqueux, Université Lumière Lyon 2; Cédric Poivret, Université Gustave Eiffel; Céline Barthonnat, Centre national de la recherche scientifique (CNRS); and Julie Giovacchini, Centre national de la recherche scientifique (CNRS)

Towards peer review that is accessible to all
Alex O. Holcombe, University of Sydney

A short guide to reading scientific publications well
Simon Kolstoe, University of Portsmouth

‘Les mots de la science’: P is for peer review
Philippe Huneman, Université Paris 1 Panthéon-Sorbonne, and Iris Deroeux, The Conversation France

Why do researchers open up their research?
Chérifa Boukacem-Zeghmouri, Université Claude Bernard Lyon 1

How researchers choose the journals to which they submit their articles
Bastien Castagneyrol, Inrae

The research impact imperative: current state and challenges in management schools
Tamym Abdessemed, ISC Paris Business School, and Pascale Bueno Merino, EM Normandie

Related Topics

  • knowledge economy
  • Les belles histoires de la science ouverte (open science success stories)
  • open access
  • publications
  • scientific journals
  • open science
  • transparency
  • universities



The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research


Jonas Tallberg, Eva Erman, Markus Furendal, Johannes Geith, Mark Klamberg, Magnus Lundgren, The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research, International Studies Review, Volume 25, Issue 3, September 2023, viad040, https://doi.org/10.1093/isr/viad040


Artificial intelligence (AI) represents a technological upheaval with the potential to change human society. Because of its transformative potential, AI is increasingly becoming subject to regulatory initiatives at the global level. Yet, so far, scholarship in political science and international relations has focused more on AI applications than on the emerging architecture of global AI regulation. The purpose of this article is to outline an agenda for research into the global governance of AI. The article distinguishes between two broad perspectives: an empirical approach, aimed at mapping and explaining global AI governance; and a normative approach, aimed at developing and applying standards for appropriate global AI governance. The two approaches offer questions, concepts, and theories that are helpful in gaining an understanding of the emerging global governance of AI. Conversely, exploring AI as a regulatory issue offers a critical opportunity to refine existing general approaches to the study of global governance.


Artificial intelligence (AI) represents a technological upheaval with the potential to transform human society. It is increasingly viewed by states, non-state actors, and international organizations (IOs) as an area of strategic importance, economic competition, and risk management. While AI development is concentrated to a handful of corporations in the United States, China, and Europe, the long-term consequences of AI implementation will be global. Autonomous weapons will have consequences for armed conflicts and power balances; automation will drive changes in job markets and global supply chains; generative AI will affect content production and challenge copyright systems; and competition around the scarce hardware needed to train AI systems will shape relations among both states and businesses. While the technology is still only lightly regulated, state and non-state actors are beginning to negotiate global rules and norms to harness and spread AI’s benefits while limiting its negative consequences. For example, in the past few years, the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted recommendations on the ethics of AI, the European Union (EU) negotiated comprehensive AI legislation, and the Group of Seven (G7) called for developing global technical standards on AI.

Our purpose in this article is to outline an agenda for research into the global governance of AI. 1 Advancing research on the global regulation of AI is imperative. The rules and arrangements that are currently being developed to regulate AI will have a considerable impact on power differentials, the distribution of economic value, and the political legitimacy of AI governance for years to come. Yet there is currently little systematic knowledge on the nature of global AI regulation, the interests influential in this process, and the extent to which emerging arrangements can manage AI’s consequences in a just and democratic manner. While poised for rapid expansion, research on the global governance of AI remains in its early stages (but see Maas 2021 ; Schmitt 2021 ).

This article complements earlier calls for research on AI governance in general ( Dafoe 2018 ; Butcher and Beridze 2019 ; Taeihagh 2021 ; Büthe et al. 2022 ) by focusing specifically on the need for systematic research into the global governance of AI. It submits that global efforts to regulate AI have reached a stage where it is necessary to start asking fundamental questions about the characteristics, sources, and consequences of these governance arrangements.

We distinguish between two broad approaches for studying the global governance of AI: an empirical perspective, informed by a positive ambition to map and explain AI governance arrangements; and a normative perspective, informed by philosophical standards for evaluating the appropriateness of AI governance arrangements. Both perspectives build on established traditions of research in political science, international relations (IR), and political philosophy, and offer questions, concepts, and theories that are helpful as we try to better understand new types of governance in world politics.

We argue that empirical and normative perspectives together offer a comprehensive agenda of research on the global governance of AI. Pursuing this agenda will help us to better understand characteristics, sources, and consequences of the global regulation of AI, with potential implications for policymaking. Conversely, exploring AI as a regulatory issue offers a critical opportunity to further develop concepts and theories of global governance as they confront the particularities of regulatory dynamics in this important area.

We advance this argument in three steps. First, we argue that AI, because of its economic, political, and social consequences, presents a range of governance challenges. While these challenges initially were taken up mainly by national authorities, recent years have seen a dramatic increase in governance initiatives by IOs. These efforts to regulate AI at global and regional levels are likely driven by several considerations, among them AI applications creating cross-border externalities that demand international cooperation and AI development taking place through transnational processes requiring transboundary regulation. Yet, so far, existing scholarship on the global governance of AI has been mainly descriptive or policy-oriented, rather than focused on theory-driven positive and normative questions.

Second, we argue that an empirical perspective can help to shed light on key questions about characteristics and sources of the global governance of AI. Based on existing concepts, the emerging governance architecture for AI can be described as a regime complex—a structure of partially overlapping and diverse governance arrangements without a clearly defined central institution or hierarchy. IR theories are useful in directing our attention to the role of power, interests, ideas, and non-state actors in the construction of this regime complex. At the same time, the specific conditions of AI governance suggest ways in which global governance theories may be usefully developed.

Third, we argue that a normative perspective raises crucial questions regarding the nature and implications of global AI governance. These questions pertain both to procedure (the process for developing rules) and to outcome (the implications of those rules). A normative perspective suggests that procedures and outcomes in global AI governance need to be evaluated in terms of how they meet relevant normative ideals, such as democracy and justice. How could the global governance of AI be organized to live up to these ideals? To what extent are emerging arrangements minimally democratic and fair in their procedures and outcomes? Conversely, the global governance of AI raises novel questions for normative theorizing, for instance, by invoking aims for AI to be “trustworthy,” “value aligned,” and “human centered.”

Advancing this agenda of research is important for several reasons. First, making more systematic use of social science concepts and theories will help us to gain a better understanding of various dimensions of the global governance of AI. Second, as a novel case of governance involving unique features, AI raises questions that will require us to further refine existing concepts and theories of global governance. Third, findings from this research agenda will be of importance for policymakers, by providing them with evidence on international regulatory gaps, the interests that have influenced current arrangements, and the normative issues at stake when developing this regime complex going forward.

The remainder of this article is structured in three substantive sections. The first section explains why AI has become a concern of global governance. The second section suggests that an empirical perspective can help to shed light on the characteristics and drivers of the global governance of AI. The third section discusses the normative challenges posed by global AI governance, focusing specifically on concerns related to democracy and justice. The article ends with a conclusion that summarizes our proposed agenda for future research on the global governance of AI.

Why does AI pose a global governance challenge? In this section, we answer this question in three steps. We begin by briefly describing the spread of AI technology in society, then illustrate the attempts to regulate AI at various levels of governance, and finally explain why global regulatory initiatives are becoming increasingly common. We argue that the growth of global governance initiatives in this area stems from AI applications creating cross-border externalities that demand international cooperation and from AI development taking place through transnational processes requiring transboundary regulation.

Due to its amorphous nature, AI escapes easy definition. Instead, the definition of AI tends to depend on the purposes and audiences of the research ( Russell and Norvig 2020 ). In the most basic sense, machines are considered intelligent when they can perform tasks that would require intelligence if done by humans ( McCarthy et al. 1955 ). This could happen through the guiding hand of humans, in “expert systems” that follow complex decision trees. It could also happen through “machine learning,” where AI systems are trained to categorize texts, images, sounds, and other data, using such categorizations to make autonomous decisions when confronted with new data. More specific definitions require that machines display a level of autonomy and capacity for learning that enables rational action. For instance, the EU’s High-Level Expert Group on AI has defined AI as “systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals” (2019, 1). Yet, illustrating the potential for conceptual controversy, this definition has been criticized for denoting both too many and too few technologies as AI ( Heikkilä 2022a ).

AI technology is already implemented in a wide variety of areas in everyday life and the economy at large. For instance, the conversational chatbot ChatGPT is estimated to have reached 100 million users just  two months after its launch at the end of 2022 ( Hu 2023 ). AI applications enable new automation technologies, with subsequent positive or negative effects on the demand for labor, employment, and economic equality ( Acemoglu and Restrepo 2020 ). Military AI is integral to lethal autonomous weapons systems (LAWS), whereby machines take autonomous decisions in warfare and battlefield targeting ( Rosert and Sauer 2018 ). Many governments and public agencies have already implemented AI in their daily operations in order to more efficiently evaluate welfare eligibility, flag potential fraud, profile suspects, make risk assessments, and engage in mass surveillance ( Saif et al. 2017 ; Powers and Ganascia 2020 ; Berk 2021 ; Misuraca and van Noordt 2022 , 38).

Societies face significant governance challenges in relation to the implementation of AI. One type of challenge arises when AI systems function poorly, such as when applications involving some degree of autonomous decision-making produce technical failures with real-world implications. The “Robodebt” scheme in Australia, for instance, was designed to detect mistaken social security payments, but the Australian government ultimately had to rescind 400,000 wrongfully issued welfare debts ( Henriques-Gomes 2020 ). Similarly, Dutch authorities recently implemented an algorithm that pushed tens of thousands of families into poverty after mistakenly requiring them to repay child benefits, ultimately forcing the government to resign ( Heikkilä 2022b ).

Another type of governance challenge arises when AI systems function as intended but produce impacts whose consequences may be regarded as problematic. For instance, the inherent opacity of AI decision-making challenges expectations on transparency and accountability in public decision-making in liberal democracies ( Burrell 2016 ; Erman and Furendal 2022a ). Autonomous weapons raise critical ethical and legal issues ( Rosert and Sauer 2019 ). AI applications for surveillance in law enforcement give rise to concerns of individual privacy and human rights ( Rademacher 2019 ). AI-driven automation involves changes in labor markets that are painful for parts of the population ( Acemoglu and Restrepo 2020 ). Generative AI upends conventional ways of producing creative content and raises new copyright and data security issues ( Metz 2022 ).

More broadly, AI presents a governance challenge due to its effects on economic competitiveness, military security, and personal integrity, with consequences for states and societies. In this respect, AI may not be radically different from earlier general-purpose technologies, such as the steam engine, electricity, nuclear power, and the internet ( Frey 2019 ). From this perspective, it is not the novelty of AI technology that makes it a pressing issue to regulate but rather the anticipation that AI will lead to large-scale changes and become a source of power for state and societal actors.

Challenges such as these have led to a rapid expansion in recent years of efforts to regulate AI at different levels of governance. The OECD AI Policy Observatory records more than 700 national AI policy initiatives from 60 countries and territories ( OECD 2021 ). Earlier research into the governance of AI has therefore naturally focused mostly on the national level ( Radu 2021 ; Roberts et al. 2021 ; Taeihagh 2021 ). However, a large number of governance initiatives have also been undertaken at the global level, and many more are underway. According to an ongoing inventory of AI regulatory initiatives by the Council of Europe, IOs overtook national authorities as the main source of such initiatives in 2020 ( Council of Europe 2023 ).  Figure 1 visualizes this trend.

Figure 1. Origins of AI governance initiatives, 2015–2022. Source: Council of Europe (2023).

According to this source, national authorities launched 170 initiatives from 2015 to 2022, while IOs put in place 210 initiatives during the same period. Over time, the share of regulatory initiatives emanating from IOs has thus grown to surpass the share resulting from national authorities. Examples of the former include the OECD Principles on Artificial Intelligence agreed in 2019, the UNESCO Recommendation on Ethics of AI adopted in 2021, and the EU’s ongoing negotiations on the EU AI Act. In addition, several governance initiatives emanate from the private sector, civil society, and multistakeholder partnerships. In the next section, we will provide a more developed characterization of these global regulatory initiatives.

Two concerns likely explain why AI increasingly is becoming subject to governance at the global level. First, AI creates externalities that do not follow national borders and whose regulation requires international cooperation. China’s Artificial Intelligence Development Plan, for instance, clearly states that the country is using AI as a leapfrog technology in order to enhance national competitiveness ( Roberts et al. 2021 ). Since states with less regulation might gain a competitive edge when developing certain AI applications, there is a risk that such strategies create a regulatory race to the bottom. International cooperation that creates a level playing field could thus be said to be in the interest of all parties.

Second, the development of AI technology is a cross-border process carried out by transnational actors—multinational firms in particular. Big tech corporations, such as Google, Meta, or the Chinese drone maker DJI, are investing vast sums into AI development. The innovations of hardware manufacturers like Nvidia enable breakthroughs but depend on complex global supply chains, and international research labs such as DeepMind regularly present cutting-edge AI applications. Since the private actors that develop AI can operate across multiple national jurisdictions, the efforts to regulate AI development and deployment also need to be transboundary. Only by introducing common rules can states ensure that AI businesses encounter similar regulatory environments, which both facilitates transboundary AI development and reduces incentives for companies to shift to countries with laxer regulation.

Successful global governance of AI could help realize many of the potential benefits of the technology while mitigating its negative consequences. For AI to contribute to increased economic productivity, for instance, there needs to be predictable and clear regulation as well as global coordination around standards that prevent competition between parallel technical systems. Conversely, a failure to provide suitable global governance could lead to substantial risks. The intentional misuse of AI technology may undermine trust in institutions, and if left unchecked, the positive and negative externalities created by automation technologies might fall unevenly across different groups. Race dynamics similar to those that arose around nuclear technology in the twentieth century—where technological leadership created large benefits—might lead international actors and private firms to overlook safety issues and create potentially dangerous AI applications ( Dafoe 2018 ; Future of Life Institute 2023 ). Hence, policymakers face the task of disentangling beneficial from malicious consequences and then foster the former while regulating the latter. Given the speed at which AI is developed and implemented, governance also risks constantly being one step behind the technological frontier.

A prime example of how AI presents a global governance challenge is provided by efforts to regulate military AI, in particular autonomous weapons capable of identifying and eliminating a target without the involvement of a remote human operator (Hernandez 2021). Both the development and the deployment of military applications with autonomous capabilities transcend national borders. Multinational defense companies are at the forefront of developing autonomous weapons systems. Reports suggest that such autonomous weapons are now beginning to be used in armed conflicts (Trager and Luca 2022). The development and deployment of autonomous weapons involve the types of competitive dynamics and transboundary consequences identified above. In addition, they raise specific concerns with respect to accountability and dehumanization (Sparrow 2007; Stop Killer Robots 2023). For these reasons, states have begun to explore the potential for joint global regulation of autonomous weapons systems. The principal forum is the Group of Governmental Experts (GGE) within the Convention on Certain Conventional Weapons (CCW). Yet progress in these negotiations is slow as the major powers approach this issue with competing interests in mind, illustrating the challenges involved in developing joint global rules.

The example of autonomous weapons further illustrates how the global governance of AI raises urgent empirical and normative questions for research. On the empirical side, these developments invite researchers to map emerging regulatory initiatives, such as those within the CCW, and to explain why these particular frameworks become dominant. What are the principal characteristics of global regulatory initiatives in the area of autonomous weapons, and how do power differentials, interest constellations, and principled ideas influence those rules? On the normative side, these developments invite researchers to address key normative questions raised by the development and deployment of autonomous weapons. What are the key normative issues at stake in the regulation of autonomous weapons, both with respect to the process through which such rules are developed and with respect to the consequences of these frameworks? To what extent are existing normative ideals and frameworks, such as just war theory, applicable to the governing of military AI ( Roach and Eckert 2020 )? Despite the global governance challenge of AI development and use, research on this topic is still in its infancy (but see Maas 2021 ; Schmitt 2021 ). In the remainder of this article, we therefore present an agenda for research into the global governance of AI. We begin by outlining an agenda for positive empirical research on the global governance of AI and then suggest an agenda for normative philosophical research.

An empirical perspective on the global governance of AI suggests two main questions: How may we describe the emerging global governance of AI? And how may we explain the emerging global governance of AI? In this section, we argue that concepts and theories drawn from the general study of global governance will be helpful as we address these questions, but also that AI, conversely, raises novel issues that point to the need for new or refined theories. Specifically, we show how global AI governance may be mapped along several conceptual dimensions and submit that theories invoking power dynamics, interests, ideas, and non-state actors have explanatory promise.

Mapping AI Governance

A key priority for empirical research on the global governance of AI is descriptive: Where and how are new regulatory arrangements emerging at the global level? What features characterize the emergent regulatory landscape? In answering such questions, researchers can draw on scholarship on international law and IR, which have conceptualized mechanisms of regulatory change and drawn up analytical dimensions to map and categorize the resulting regulatory arrangements.

Any mapping exercise must consider the many different ways in which global AI regulation may emerge and evolve. Previous research suggests that legal development may take place in at least three distinct ways. To begin with, existing rules could be reinterpreted to also cover AI (Maas 2021). For example, the principles of distinction, proportionality, and precaution in international humanitarian law could be extended, via reinterpretation, to apply to LAWS, without changing the legal source. Another manner in which new AI regulation may appear is via “add-ons” to existing rules. For example, in the area of global regulation of autonomous vehicles, AI-related provisions were added to the 1968 Vienna Road Traffic Convention through an amendment in 2015 (Kunz and Ó hÉigeartaigh 2020). Finally, AI regulation may appear as a completely new framework, either through new state behavior that results in customary international law or through a new legal act or treaty (Maas 2021, 96). Here, one example of regulating AI through a new framework is the aforementioned EU AI Act, which would take the form of a new EU regulation.

Once researchers have mapped emerging regulatory arrangements, a central task will be to categorize them. Prior scholarship suggests that regulatory arrangements may be fruitfully analyzed in terms of five key dimensions (cf. Koremenos et al. 2001 ; Wahlgren 2022 , 346–347). A first dimension is whether regulation is horizontal or vertical . A horizontal regulation covers several policy areas, whereas a vertical regulation is a delimited legal framework covering one specific policy area or application. In the field of AI, emergent governance appears to populate both ends of this spectrum. For example, the proposed EU AI Act (2021), the UNESCO Recommendations on the Ethics of AI (2021), and the OECD Principles on AI (2019), which are not specific to any particular AI application or field, would classify as attempts at horizontal regulation. When it comes to vertical regulation, there are fewer existing examples, but discussions on a new protocol on LAWS within the CCW signal that this type of regulation is likely to become more important in the future ( Maas 2019a ).

A second dimension runs from centralization to decentralization . Governance is centralized if there is a single, authoritative institution at the heart of a regime, such as in trade, where the World Trade Organization (WTO) fulfills this role. In contrast, decentralized arrangements are marked by parallel and partly overlapping institutions, such as in the governance of the environment, the internet, or genetic resources (cf. Raustiala and Victor 2004 ). While some IOs with universal membership, such as UNESCO, have taken initiatives relating to AI governance, no institution has assumed the role as the core regulatory body at the global level. Rather, the proliferation of parallel initiatives, across levels and regions, lends weight to the conclusion that contemporary arrangements for the global governance of AI are strongly decentralized ( Cihon et al. 2020a ).

A third dimension is the continuum from hard law to soft law. While domestic statutes and treaties may be described as hard law, soft law is associated with guidelines of conduct, recommendations, resolutions, standards, opinions, ethical principles, declarations, board decisions, codes of conduct, negotiated agreements, and a large number of additional normative mechanisms (Abbott and Snidal 2000; Wahlgren 2022). Even though such soft documents may initially have been drafted as non-legal texts, they may in actual practice acquire considerable strength in structuring international relations (Orakhelashvili 2019). While some initiatives to regulate AI classify as hard law, including the EU’s AI Act, Burri (2017) suggests that AI governance is likely to be dominated by “supersoft law,” noting that there are currently numerous processes underway creating global standards outside traditional international law-making fora. In a phenomenon that might be described as “bottom-up law-making” (Koven Levit 2017), states and IOs are bypassed, creating norms that defy traditional categories of international law (Burri 2017).

A fourth dimension concerns private versus public regulation. The concept of private regulation partly overlaps in substance with soft law, to the extent that private actors develop non-binding guidelines (Wahlgren 2022). Significant harmonization of standards may be developed by private standardization bodies, such as the IEEE (Ebers 2022). Public authorities may regulate the responsibility of manufacturers through tort law and product liability law (Greenstein 2022). Even though contracts are originally matters between private parties, some contractual matters may still be regulated and enforced by law (Ubena 2022).

A fifth dimension relates to the division between military and non-military regulation . Several policymakers and scholars describe how military AI is about to escalate into a strategic arms race between major powers such as the United States and China, similar to the nuclear arms race during the Cold War (cf. Petman 2017 ; Thompson and Bremmer 2018 ; Maas 2019a ). The process in the CCW Group of Governmental Experts on the regulation of LAWS is probably the largest single negotiation on AI ( Maas 2019b ) next to the negotiations on the EU AI Act. The zero-sum logic that appears to exist between states in the area of national security, prompting a military AI arms race, may not be applicable to the same extent to non-military applications of AI, potentially enabling a clearer focus on realizing positive-sum gains through regulation.

These five dimensions can provide guidance as researchers take up the task of mapping and categorizing global AI regulation. While the evidence is preliminary, in its present form, the global governance of AI must be understood as combining horizontal and vertical elements, predominantly leaning toward soft law, being heavily decentralized, primarily public in nature, and mixing military and non-military regulation. This multi-faceted and non-hierarchical nature of global AI governance suggests that it is best characterized as a regime complex , or a “larger web of international rules and regimes” ( Alter and Meunier 2009 , 13; Keohane and Victor 2011 ) rather than as a single, discrete regime.

If global AI governance can be understood as a regime complex, which some researchers already claim ( Cihon et al. 2020a ), future scholarship should look for theoretical and methodological inspiration in research on regime complexity in other policy fields. This research has found that regime complexes are characterized by path dependence, as existing rules shape the formulation of new rules; venue shopping, as actors seek to steer regulatory efforts to the fora most advantageous to their interests; and legal inconsistencies, as rules emerge from fractious and overlapping negotiations in parallel processes ( Raustiala and Victor 2004 ). Scholars have also considered the design of regime complexes ( Eilstrup-Sangiovanni and Westerwinter 2021 ), institutional overlap among bodies in regime complexes ( Haftel and Lenz 2021 ), and actors’ forum-shopping within regime complexes ( Verdier 2022 ). Establishing whether these patterns and dynamics are key features also of the AI regime complex stands out as an important priority in future research.

Explaining AI Governance

As our understanding of the empirical patterns of global AI governance grows, a natural next step is to turn to explanatory questions. How may we explain the emerging global governance of AI? What accounts for variation in governance arrangements and how do they compare with those in other policy fields, such as environment, security, or trade? Political science and IR offer a plethora of useful theoretical tools that can provide insights into the global governance of AI. However, at the same time, the novelty of AI as a governance challenge raises new questions that may require novel or refined theories. Thus far, existing research on the global governance of AI has been primarily concerned with descriptive tasks and largely fallen short in engaging with explanatory questions.

We illustrate the potential of general theories to help explain global AI governance by pointing to three broad explanatory perspectives in IR ( Martin and Simmons 2012 )—power, interests, and ideas—which have served as primary sources of theorizing on global governance arrangements in other policy fields. These perspectives have conventionally been associated with the paradigmatic theories of realism, liberalism, and constructivism, respectively, but like much of the contemporary IR discipline, we prefer to formulate them as non-paradigmatic sources for mid-level theorizing of more specific phenomena (cf. Lake 2013 ). We focus our discussion on how accounts privileging power, interests, and ideas have explained the origins and designs of IOs and how they may help us explain wider patterns of global AI governance. We then discuss how theories of non-state actors and regime complexity, in particular, offer promising avenues for future research into the global governance of AI. Research fields like science and technology studies (e.g., Jasanoff 2016 ) or the political economy of international cooperation (e.g., Gilpin 1987 ) can provide additional theoretical insights, but these literatures are not discussed in detail here.

A first broad explanatory perspective is provided by power-centric theories, privileging the role of major states, capability differentials, and distributive concerns. While conventional realism emphasizes how states’ concern for relative gains impedes substantive international cooperation, viewing IOs as epiphenomenal reflections of underlying power relations ( Mearsheimer 1994 ), developed power-oriented theories have highlighted how powerful states seek to design regulatory contexts that favor their preferred outcomes ( Gruber 2000 ) or shape the direction of IOs using informal influence ( Stone 2011 ; Dreher et al. 2022 ).

In research on global AI governance, power-oriented perspectives are likely to prove particularly fruitful in investigating how great-power contestation shapes where and how the technology will be regulated. Focusing on the major AI powerhouses, scholars have started to analyze the contrasting regulatory strategies and policies of the United States, China, and the EU, often emphasizing issues of strategic competition, military balance, and rivalry ( Kania 2017 ; Horowitz et al. 2018 ; Payne 2018 , 2021 ; Johnson 2019 ; Jensen et al. 2020 ). Here, power-centric theories could help explain the apparent emphasis on military AI in both the United States and China, as witnessed by the recent establishment of a US National Security Commission on AI and China’s ambitious plans to integrate AI into its military forces ( Ding 2018 ). The EU, for its part, is negotiating the comprehensive AI Act, seeking to use its market power to set a European standard for AI that can subsequently become the global standard, as it previously did with the GDPR on data protection and privacy ( Schmitt 2021 ). Given the primacy of these three actors in AI development, their preferences and outlook regarding regulatory solutions will remain a key research priority.

Power-based accounts are also likely to provide theoretical inspiration for research on AI governance in the domain of security and military competition. Some scholars are seeking to assess the implications of AI for strategic rivalries, and their possible regulation, by drawing on historical analogies ( Leung 2019 ; see also Drezner 2019 ). Observing that, from a strategic standpoint, military AI exhibits some similarities to the problems posed by nuclear weapons, researchers have examined whether lessons from nuclear arms control have applicability in the domain of AI governance. For example, Maas (2019a ) argues that historical experience suggests that the proliferation of military AI can potentially be slowed down via institutionalization, while Zaidi and Dafoe (2021 ), in a study of the Baruch Plan for Nuclear Weapons, contend that fundamental strategic obstacles—including mistrust and fear of exploitation by other states—need to be overcome to make regulation viable. This line of investigation can be extended by assessing other historical analogies, such as the negotiations that produced the first Strategic Arms Limitation Talks agreement (SALT I) in 1972 or more recent efforts to contain the spread of nuclear weapons, where power-oriented factors have shown continued analytical relevance (e.g., Ruzicka 2018 ).

A second major explanatory approach is provided by the family of theoretical accounts that highlight how international cooperation is shaped by shared interests and functional needs ( Keohane 1984 ; Martin 1992 ). A key argument in rational functionalist scholarship is that states are likely to establish IOs to overcome barriers to cooperation—such as information asymmetries, commitment problems, and transaction costs—and that the design of these institutions will reflect the underlying problem structure, including the degree of uncertainty and the number of involved actors (e.g., Koremenos et al. 2001 ; Hawkins et al. 2006 ; Koremenos 2016 ).

Applied to the domain of AI, these approaches would bring attention to how the functional characteristics of AI as a governance problem shape the regulatory response. They would also emphasize the investigation of the distribution of interests and the possibility of efficiency gains from cooperation around AI governance. The contemporary proliferation of partnerships and initiatives on AI governance points to the suitability of this theoretical approach, and research has taken some preliminary steps, surveying state interests and their alignment (e.g., Campbell 2019 ; Radu 2021 ). However, a systematic assessment of how the distribution of interests would explain the nature of emerging governance arrangements, both in the aggregate and at the constituent level, has yet to be undertaken.

A third broad explanatory perspective is provided by theories emphasizing the role of history, norms, and ideas in shaping global governance arrangements. In contrast to accounts based on power and interests, this line of scholarship, often drawing on sociological assumptions and theory, focuses on how institutional arrangements are embedded in a wider ideational context, which itself is subject to change. This perspective has generated powerful analyses of how societal norms influence states’ international behavior (e.g., Acharya and Johnston 2007 ), how norm entrepreneurs play an active role in shaping the origins and diffusion of specific norms (e.g., Finnemore and Sikkink 1998 ), and how IOs socialize states and other actors into specific norms and behaviors (e.g., Checkel 2005 ).

Examining the extent to which domestic and societal norms shape discussions on global governance arrangements stands out as a particularly promising area of inquiry. Comparative research on national ethical standards for AI has already indicated significant cross-country convergence, suggesting a cluster of normative principles that are likely to inspire governance frameworks in many parts of the world (e.g., Jobin et al. 2019 ). A closely related research agenda concerns norm entrepreneurship in AI governance. Here, preliminary findings suggest that civil society organizations have played a role in advocating norms relating to fundamental rights in the formulation of EU AI policy and other processes ( Ulnicane 2021 ). Finally, once AI governance structures have solidified further, scholars can begin to draw on norms-oriented scholarship to design strategies for the analysis of how those governance arrangements may play a role in socialization.

In light of the particularities of AI and its political landscape, we expect that global governance scholars will be motivated to refine and adapt these broad theoretical perspectives to address new questions and conditions. For example, considering China’s AI sector-specific resources and expertise, power-oriented theories will need to grapple with questions of institutional creation and modification occurring under a distribution of power that differs significantly from the Western-centric processes that underpin most existing studies. Similarly, rational functionalist scholars will need to adapt their tools to address questions of how the highly asymmetric distribution of AI capabilities—in particular between producers, which are few, concentrated, and highly resourced, and users and subjects, which are many, dispersed, and less resourced—affects the formation of state interests and bargaining around institutional solutions. For their part, norm-oriented theories may need to be refined to capture the role of previously understudied sources of normative and ideational content, such as formal and informal networks of computer programmers, which, on account of their expertise, have been influential in setting the direction of norms surrounding several AI technologies.

We expect that these broad theoretical perspectives will continue to inspire research on the global governance of AI, in particular for tailored, mid-level theorizing in response to new questions. However, a fully developed research agenda will gain from complementing these theories, which emphasize particular independent variables (power, interests, and norms), with theories and approaches that focus on particular issues, actors, and phenomena. There is an abundance of theoretical perspectives that can be helpful in this regard, including research on the relationship between science and politics ( Haas 1992 ; Jasanoff 2016 ), the political economy of international cooperation ( Gilpin 1987 ; Frieden et al. 2017 ), the complexity of global governance ( Raustiala and Victor 2004 ; Eilstrup-Sangiovanni and Westerwinter 2021 ), and the role of non-state actors ( Risse 2012 ; Tallberg et al. 2013 ). We focus here on the latter two: theories of regime complexity, which have grown to become a mainstream approach in global governance scholarship, as well as theories of non-state actors, which provide powerful tools for understanding how private organizations influence regulatory processes. Both literatures hold considerable promise in advancing scholarship of AI global governance beyond its current state.

As concluded above, the current structure of global AI governance fits the description of a regime complex. Thus, approaching AI governance through this theoretical lens, understanding it as a larger web of rules and regulations, can open new avenues of research (see Maas 2021 for a pioneering effort). One priority is to analyze the AI regime complex in terms of core dimensions, such as scale, diversity, and density ( Eilstrup-Sangiovanni and Westerwinter 2021 ). Pointing to the density of this regime complex, existing studies have suggested that global AI governance is characterized by a high degree of fragmentation ( Schmitt 2021 ), which has motivated assessments of the possibility of greater centralization ( Cihon et al. 2020b ). Another area of research is to examine legal inconsistencies and tensions, which are likely to emerge because of the diverging preferences of major AI players and the tendency of self-interested actors to forum-shop when engaging within a regime complex. Finally, given that the AI regime complex is at a very early stage, it provides researchers with an excellent opportunity to trace the origins and evolution of this form of governance structure from the outset, thus providing a good case for both theory development and novel empirical applications.

If theories of regime complexity can shine a light on macro-level properties of AI governance, other theoretical approaches can guide research into micro-level dynamics and influences. Recognizing that non-state actors are central in both AI development and its emergent regulation, researchers should find inspiration in theories and tools developed to study the role and influence of non-state actors in global governance (for overviews, see Risse 2012 ; Jönsson and Tallberg forthcoming ). Drawing on such work will enable researchers to assess to what extent non-state actor involvement in the AI regime complex differs from previous experiences in other international regimes. It is clear that large tech companies, like Google, Meta, and Microsoft, have formed regulatory preferences and that their monetary resources and technological expertise enable them to promote these interests in legislative and bureaucratic processes. For example, the Partnership on AI (PAI), a multistakeholder organization with more than 50 members, includes American tech companies at the forefront of AI development and fosters research on issues of AI ethics and governance ( Schmitt 2021 ). Other non-state actors, including civil society watchdog organizations, like the Civil Liberties Union for Europe, have been vocal in the negotiations of the EU AI Act, further underlining the relevance of this strand of research.

When investigating the role of non-state actors in the AI regime complex, research may be guided by four primary questions. A first question concerns the interests of non-state actors regarding alternative AI global governance architectures. Here, a survey by Chavannes et al. (2020 ) on possible regulatory approaches to LAWS suggests that private companies developing AI applications have interests that differ from those of civil society organizations. Others have pointed to the role of actors rooted in research and academia who have sought to influence the development of AI ethics guidelines ( Zhu 2022 ). A second question is to what extent the regulatory institutions and processes are accessible to the aforementioned non-state actors in the first place. Are non-state actors given formal or informal opportunities to be substantively involved in the development of new global AI rules? Research points to a broad and comprehensive opening up of IOs over the past two decades ( Tallberg et al. 2013 ) and, in the domain of AI governance, early indications are that non-state actors have been granted access to several multilateral processes, including in the OECD and the EU (cf. Niklas and Dencik 2021 ). A third question concerns actual participation: Are non-state actors really making use of the opportunities to participate, and what determines the patterns of participation? In this vein, previous research has suggested that the participation of non-state actors is largely dependent on their financial resources ( Uhre 2014 ) or the political regime of their home country ( Hanegraaff et al. 2015 ). In the context of AI governance, this raises questions about if and how the vast resource disparities and divergent interests between private tech corporations and civil society organizations may bias patterns of participation. There is, for instance, research suggesting that private companies are contributing to a practice of ethics washing by committing to nonbinding ethical guidelines while circumventing regulation ( Wagner 2018 ; Jobin et al. 2019 ; Rességuier and Rodrigues 2020 ). Finally, a fourth question is to what extent, and how, non-state actors exert influence on adopted AI rules. Existing scholarship suggests that non-state actors typically seek to shape the direction of international cooperation via lobbying ( Dellmuth and Tallberg 2017 ), while others have argued that non-state actors use participation in international processes largely to expand or sustain their own resources ( Hanegraaff et al. 2016 ).

The previous section suggested that emerging global initiatives to regulate AI amount to a regime complex and that an empirical approach could help to map and explain these regulatory developments. In this section, we move beyond positive empirical questions to consider the normative concerns at stake in the global governance of AI. We argue that normative theorizing is needed both for assessing how well existing arrangements live up to ideals such as democracy and justice and for evaluating how best to specify what these ideals entail for the global governance of AI.

Ethical values frequently highlighted in the context of AI governance include transparency, inclusion, accountability, participation, deliberation, fairness, and beneficence ( Floridi et al. 2018 ; Jobin et al. 2019 ). A normative perspective suggests several ways in which to theorize and analyze such values in relation to the global governance of AI. One type of normative analysis focuses on application, that is, on applying an existing normative theory to instances of AI governance, assessing how well such regulatory arrangements realize their principles (similar to how political theorists have evaluated whether global governance lives up to standards of deliberation; see Dryzek 2011 ; Steffek and Nanz 2008 ). Such an analysis could also be pursued more narrowly by using a certain normative theory to assess the implications of AI technologies, for instance, by approaching the problem of algorithmic bias based on notions of fairness or justice ( Vredenburgh 2022 ). Another type of normative analysis moves from application to justification, analyzing the structure of global AI governance with the aim of theory construction. In this type of analysis, the goal is to construe and evaluate candidate principles for these regulatory arrangements in order to arrive at the best possible (most justified) normative theory. In this case, the theorist starts out from a normative ideal broadly construed (concept) and arrives at specific principles (conception).

In the remainder of this section, we will point to the promises of analyzing global AI governance based on the second approach. We will focus specifically on the normative ideals of justice and democracy. While many normative ideals could serve as focal points for an analysis of the AI domain, democracy and justice appear particularly central for understanding the normative implications of the governance of AI. Previous efforts to deploy political philosophy to shed light on normative aspects of global governance point to the promise of this focus (e.g., Caney 2005 , 2014 ; Buchanan 2013 ). It is also natural to focus on justice and democracy given that many of the values emphasized in AI ethics and existing ethics guidelines are analytically close to justice and democracy. Our core argument will be that normative research needs to be attentive to how these ideals would be best specified in relation to both the procedures and outcomes of the global governance of AI.

AI Ethics and the Normative Analysis of Global AI Governance

Although there is a rich literature on moral or ethical aspects related to specific AI applications, investigations into normative aspects of global AI governance are surprisingly sparse (for exceptions, see Müller 2020 ; Erman and Furendal 2022a , 2022b ). Researchers have so far focused mostly on normative and ethical questions raised by AI considered as a tool, enabling, for example, autonomous weapons systems ( Sparrow 2007 ) and new forms of political manipulation ( Susser et al. 2019 ; Christiano 2021 ). Some have also considered AI as a moral agent of its own, focusing on how we could govern, or be governed by, a hypothetical future artificial general intelligence ( Schwitzgebel and Garza 2015 ; Livingston and Risse 2019 ; cf. Tasioulas 2019 ; Bostrom et al. 2020 ; Erman and Furendal 2022a ). Examples such as these illustrate that there is, by now, a vibrant field of “AI ethics” that aims to consider normative aspects of specific AI applications.

As we have shown above, however, initiatives to regulate AI beyond the nation-state have become increasingly common, and they are often led by IOs, multinational companies, private standardization bodies, and civil society organizations. These developments raise normative issues that require a shift from AI ethics in general to systematic analyses of the implications of global AI governance. It is crucial to explore these normative dimensions, since how AI is governed raises key questions about the ideals such governance ought to meet.

Apart from attempts to map or describe the central norms in the existing global governance of AI (cf. Jobin et al. 2019 ), most normative analyses of the global governance of AI can be said to have proceeded in two different ways. The dominant approach is to employ an outcome-based focus ( Dafoe 2018 ; Winfield et al. 2019 ; Taeihagh 2021 ), which starts by identifying a potential problem or promise created by AI technology and then seeks to identify governance mechanisms or principles that can minimize risks or make a desired outcome more likely. This approach can be contrasted with a procedure-based focus, which attaches comparatively more weight to how governance processes unfold in existing or hypothetical regulatory arrangements. It recognizes that there are certain procedural aspects that are important and might be overlooked by an analysis that primarily assesses outcomes.

The benefits of this distinction become apparent if we focus on the ideals of justice and democracy. Broadly construed, we understand justice as an ideal for how to distribute benefits and burdens—specifying principles that determine “who owes what to whom”—and democracy as an ideal for collective decision-making and the exercise of political power—specifying principles that determine “who has political power over whom” ( Barry 1991 ; Weale 1999 ; Buchanan and Keohane 2006 ; Christiano 2008 ; Valentini 2012 , 2013 ). These two ideals can be analyzed with a focus on procedure or outcome, producing four fruitful avenues of normative research into global AI governance. First, justice could be understood as a procedural value or as a distributive outcome. Second, and likewise, democracy could be a feature of governance processes or an outcome of those processes. Below, we discuss existing research from the standpoint of each of these four avenues. We conclude that there is great potential for novel insights if normative theorists consider the relatively overlooked issues of outcome aspects of justice and procedural aspects of democracy in the global governance of AI.

Procedural and Outcome Aspects of Justice

Discussions around the implications of AI applications for justice, or fairness, are predominantly concerned with procedural aspects of how AI systems operate. For instance, ever since the problem of algorithmic bias—i.e., the tendency of AI-based decision-making to reflect and exacerbate existing biases toward certain groups—was brought to public attention, AI ethicists have offered accounts of why this is wrong, and AI developers have sought to construct AI systems that treat people “fairly” and thus produce “justice.” In this context, fairness and justice are understood as procedural ideals, which AI decision-making frustrates when it fails to treat like cases alike, and instead systematically treats individuals from different groups differently ( Fazelpour and Danks 2021 ; Zimmermann and Lee-Stronach 2022 ). Paradigmatic examples include automated predictions about recidivism among prisoners that have impacted decisions about people’s parole and algorithms used in recruitment that have systematically favored men over women ( Angwin et al. 2016 ; O'Neil 2017 ).

However, the emerging global governance of AI also has implications for how the benefits and burdens of AI technology are distributed among groups and states—i.e., outcomes ( Gilpin 1987 ; Dreher and Lang 2019 ). Like the regulation of earlier technological innovations ( Krasner 1991 ; Drezner 2019 ), AI governance may not only produce collective benefits, but also favor certain actors at the expense of others ( Dafoe 2018 ; Horowitz 2018 ). For instance, the concern about AI-driven automation and its impact on employment is that those who lose their jobs because of AI might carry a disproportionately large share of the negative externalities of the technology without being compensated through access to its benefits (cf. Korinek and Stiglitz 2019 ; Erman and Furendal 2022a ). Merely focusing on justice as a procedural value would overlook such distributive effects created by the diffusion of AI technology.

Moreover, this example illustrates that since AI adoption may produce effects throughout the global economy, regulatory efforts will have to go beyond issues relating to the technology itself. Recognizing the role of outcomes of AI governance entails that a broad range of policies need to be pursued by existing and emerging governance regimes. The global trade regime, for instance, may need to be reconsidered in order for the distribution of positive and negative externalities of AI technology to be just. Suggestions include pursuing policies that can incentivize certain kinds of AI technology or enable the profits gained by AI developers to be shared more widely (cf. Floridi et al. 2018 ; Erman and Furendal 2022a ).

In sum, with regard to outcome aspects of justice, theories are needed to settle which benefits and burdens created by global AI adoption ought to be fairly distributed and why (i.e., what the “site” and “scope” of AI justice are) (cf. Gabriel 2022 ). Similarly, theories of procedural aspects should look beyond individual applications of AI technology and ask whether a fairer distribution of influence over AI governance may help produce more fair outcomes, and if so how. Extending existing theories of distributive justice to the realm of global AI governance may put many of their central assumptions in a new light.

Procedural and Outcome Aspects of Democracy

Normative research could also fruitfully shed light on how emerging AI governance should be analyzed in relation to the ideal of democracy, such as what principles or criteria of democratic legitimacy are most defensible. It could be argued, for instance, that the decision process must be open to democratic influence for global AI governance to be democratically legitimate ( Erman and Furendal 2022b ). Here, normative theory can explain why it matters from the standpoint of democracy whether the affected public has had a say—either directly through open consultation or indirectly through representation—in formulating the principles that guide AI governance. The nature of the emerging AI regime complex—where prominent roles are held by multinational companies and private standard-setting bodies—suggests that it is far from certain that the public will have this kind of influence.

Importantly, it is likely that democratic procedures will take on different shapes in global governance compared to domestic politics ( Dahl 1999 ; Scholte 2011 ). A viable democratic theory must therefore make sense of how the unique properties of global governance raise issues or require solutions that are distinct from those in the domestic context. For example, the prominent influence of non-state actors, including the large tech corporations developing cutting-edge AI technology, suggests that it is imperative to ask whether different kinds of decision-making may require different normative standards and whether different kinds of actors may have different normative status in such decision-making arrangements.

Initiatives from non-state actors, such as the tech company-led PAI discussed above, often develop their own non-coercive ethics guidelines. Such documents may seek effects similar to coercively upheld regulation, such as the GDPR or the EU AI Act. For example, both Google and the EU specify that AI should not reinforce biases ( High-Level Expert Group on Artificial Intelligence 2019 ; Google 2022 ). However, from the perspective of democratic legitimacy, it may matter greatly which type of entity adopts AI regulations and on what grounds those entities have the authority to issue them ( Erman and Furendal 2022b ).

Apart from procedural aspects, a satisfying democratic theory of global AI governance will also have to include a systematic analysis of outcome aspects. Important outcome aspects of democracy include accountability and responsiveness. Accountability may be improved, for example, by instituting mechanisms to prevent corruption among decision-makers and to secure public access to governing documents, and responsiveness may be improved by strengthening the discursive quality of global decision processes, for instance, by involving international NGOs and civil movements that give voice to marginalized groups in society. With regard to tracing citizens’ preferences, some have argued that democratic decision-making can be enhanced by AI technology that tracks what people want and consistently reaches “better” decisions than human decision-makers (cf. König and Wenzelburger 2022 ). Apart from accountability and responsiveness, other relevant outcome aspects of democracy include, for example, the tendency to promote conflict resolution, to improve the epistemic quality of decisions, and to foster dignity and equality among citizens.

In addition, it is important to analyze how procedural and outcome concerns are related. This issue is often neglected, which again can be illustrated by the ethics guidelines from IOs, such as the OECD Principles on Artificial Intelligence and the UNESCO Recommendation on Ethics of AI. Such documents often stress the importance of democratic values and principles, such as transparency, accountability, participation, and deliberation. Yet they typically treat these values as discrete and rarely explain how they are interconnected ( Jobin et al. 2019 ; Schiff et al. 2020 ; Hagendorff 2020 , 103). Democratic theory can fruitfully step in to explain how the ideal of “the rule by the people” includes two sides that are intimately connected. First, there is an access side of political power, where those affected should have a say in the decision-making, which might require participation, deliberation, and political equality. Second, there is an exercise side of political power, where those very decisions should apply in appropriate ways, which in turn might require effectiveness, transparency, and accountability. In addition to efforts to map and explain norms and values in the global governance of AI, theories of democratic AI governance can hence help explain how these two aspects are connected (cf. Erman 2020 ).

In sum, the global governance of AI raises a number of issues for normative research. We have identified four promising avenues, focused on procedural and outcome aspects of justice and democracy in the context of global AI governance. Research along these four avenues can help to shed light on the normative challenges facing the global governance of AI and the key values at stake, as well as provide the impetus for novel theories on democratic and just global AI governance.

This article has charted a new agenda for research into the global governance of AI. While existing scholarship has been primarily descriptive or policy-oriented, we propose an agenda organized around theory-driven positive and normative questions. To this end, we have outlined two broad analytical perspectives on the global governance of AI: an empirical approach, aimed at conceptualizing and explaining global AI governance; and a normative approach, aimed at developing and applying ideals for appropriate global AI governance. Pursuing these empirical and normative approaches can help to guide future scholarship on the global governance of AI toward critical questions, core concepts, and promising theories. At the same time, exploring AI as a regulatory issue provides an opportunity to further develop these general analytical approaches as they confront the particularities of this important area of governance.

We conclude this article by highlighting the key takeaways from this research agenda for future scholarship on empirical and normative dimensions of the global governance of AI. First, research is required to identify where and how AI is becoming globally governed. Mapping and conceptualizing the emerging global governance of AI is a first necessary step. We argue that research may benefit from considering the variety of ways in which new regulation may come about, from the reinterpretation of existing rules and the extension of prevailing sectoral governance to the negotiation of entirely new frameworks. In addition, we suggest that scholarship may benefit from considering how global AI governance may be conceptualized in terms of key analytical dimensions, such as horizontal–vertical, centralized–decentralized, and formal–informal.

Second, research is necessary to explain why AI is becoming globally governed in particular ways. Having mapped global AI governance, we need to account for the factors that drive and shape these regulatory processes and arrangements. We argue that political science and IR offer a variety of theoretical tools that can help to explain the global governance of AI. In particular, we highlight the promise of theories privileging the role of power, interests, ideas, regime complexes, and non-state actors, but also recognize that research fields such as science and technology studies and political economy can yield additional theoretical insights.

Third, research is needed to identify what normative ideals global AI governance ought to meet. Moving from positive to normative issues, a first critical question pertains to the ideals that should guide the design of appropriate global AI governance. We argue that normative theory provides the tools necessary to engage with this question. While normative theory can suggest several potential principles, we believe that it may be especially fruitful to start from the ideals of democracy and justice, which are foundational and recurrent concerns in discussions about political governing arrangements. In addition, we suggest that these two ideals are relevant both for the procedures by which AI regulation is adopted and for the outcomes of such regulation.

Fourth, research is required to evaluate how well global AI governance lives up to these normative ideals. Once appropriate normative ideals have been selected, we can assess to what extent and how existing arrangements conform to these principles. We argue that previous research on democracy and justice in global governance offers a model in this respect. A critical component of such research is the integration of normative and empirical research: normative research for elucidating how normative ideals would be expressed in practice, and empirical research for analyzing data on whether actual arrangements live up to those ideals.

In all, the research agenda that we outline should be of interest to multiple audiences. For students of political science and IR, it offers an opportunity to apply and refine concepts and theories in a novel area of global governance of extensive future importance. For scholars of AI, it provides an opportunity to understand how political actors and considerations shape the conditions under which AI applications may be developed and used. For policymakers, it presents an opportunity to learn about evolving regulatory practices and gaps, interests shaping emerging arrangements, and trade-offs to be confronted in future efforts to govern AI at the global level.

A previous version of this article was presented at the Global and Regional Governance workshop at Stockholm University. We are grateful to Tim Bartley, Niklas Bremberg, Lisa Dellmuth, Felicitas Fritzsche, Faradj Koliev, Rickard Söder, Carl Vikberg, Johanna von Bahr, and three anonymous reviewers for ISR for insightful comments and suggestions. The research for this article was funded by the WASP-HS program of the Marianne and Marcus Wallenberg Foundation (Grant no. MMW 2020.0044).

We use “global governance” to refer to regulatory processes beyond the nation-state, whether on a global or regional level. While states and IOs often are central to these regulatory processes, global governance also involves various types of non-state actors ( Rosenau 1999 ).

Abbott Kenneth W. , and Snidal Duncan . 2000 . “ Hard and Soft Law in International Governance .” International Organization . 54 ( 3 ): 421 – 56 .

Acemoglu Daron , and Restrepo Pascual . 2020 . “ The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand .” Cambridge Journal of Regions, Economy and Society . 13 ( 1 ): 25 – 35 .

Acharya Amitav , and Johnston Alistair Iain . 2007 . “ Conclusion: Institutional Features, Cooperation Effects, and the Agenda for Further Research on Comparative Regionalism .” In Crafting Cooperation: Regional International Institutions in Comparative Perspective , edited by Acharya Amitav , Johnston Alistair Iain , 244 – 78 . Cambridge : Cambridge University Press .

Alter Karen J. , and Meunier Sophie . 2009 . “ The Politics of International Regime Complexity .” Perspectives on Politics . 7 ( 1 ): 13 – 24 .

Angwin Julia , Larson Jeff , Mattu Surya , and Kirchner Lauren . 2016 . “ Machine Bias .” ProPublica , May 23 . Internet (last accessed August 25, 2023): https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing .

Barry Brian . 1991 . “ Humanity and Justice in Global Perspective .” In Liberty and Justice , edited by Barry Brian . Oxford : Clarendon .

Berk Richard A . 2021 . “ Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement .” Annual Review of Criminology . 4 ( 1 ): 209 – 37 .

Bostrom Nick , Dafoe Allan , and Flynn Carrick . 2020 . “ Public Policy and Superintelligent AI: A Vector Field Approach .” In Ethics of Artificial Intelligence , edited by Liao S. Matthew , 293 – 326 . Oxford : Oxford University Press .

Buchanan Allen , and Keohane Robert O. . 2006 . “ The Legitimacy of Global Governance Institutions .” Ethics & International Affairs . 20 (4) : 405 – 37 .

Buchanan Allen . 2013 . The Heart of Human Rights . Oxford : Oxford University Press .

Burrell Jenna . 2016 . “ How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms .” Big Data & Society . 3 ( 1 ): 1 – 12 . https://doi.org/10.1177/2053951715622512 .

Burri Thomas . 2017 . “ International Law and Artificial Intelligence .” In German Yearbook of International Law , vol. 60 , 91 – 108 . Berlin : Duncker and Humblot .

Butcher James , and Beridze Irakli . 2019 . “ What is the State of Artificial Intelligence Governance Globally?” . The RUSI Journal . 164 ( 5-6 ): 88 – 96 .

Büthe Tim , Djeffal Christian , Lütge Christoph , Maasen Sabine , and von Ingersleben-Seip Nora . 2022 . “ Governing AI—Attempting to Herd Cats? Introduction to the Special Issue on the Governance of Artificial Intelligence .” Journal of European Public Policy . 29 ( 11 ): 1721 – 52 .

Campbell Thomas A . 2019 . Artificial Intelligence: An Overview of State Initiatives . Evergreen, CO : FutureGrasp .

Caney Simon . 2005 . “ Cosmopolitan Justice, Responsibility, and Global Climate Change .” Leiden Journal of International Law . 18 ( 4 ): 747 – 75 .

Caney Simon . 2014 . “ Two Kinds of Climate Justice: Avoiding Harm and Sharing Burdens .” Journal of Political Philosophy . 22 ( 2 ): 125 – 49 .

Chavannes Esther , Klonowska Klaudia , and Sweijs Tim . 2020 . Governing Autonomous Weapon Systems: Expanding the Solution Space, From Scoping to Applying . The Hague : The Hague Center for Strategic Studies .

Checkel Jeffrey T . 2005 . “ International Institutions and Socialization in Europe: Introduction and Framework .” International organization . 59 ( 4 ): 801 – 26 .

Christiano Thomas . 2008 . The Constitution of Equality . Oxford : Oxford University Press .

Christiano Thomas . 2021 . “ Algorithms, Manipulation, and Democracy .” Canadian Journal of Philosophy . 52 ( 1 ): 109 – 124 . https://doi.org/10.1017/can.2021.29 .

Cihon Peter , Maas Matthijs M. , and Kemp Luke . 2020a . “ Fragmentation and the Future: Investigating Architectures for International AI Governance .” Global Policy . 11 ( 5 ): 545 – 56 .

Cihon Peter , Maas Matthijs M. , and Kemp Luke . 2020b . “ Should Artificial Intelligence Governance Be Centralised? Design Lessons from History .” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society , 228 – 34 . New York, NY: ACM .

Council of Europe . 2023 . “ AI Initiatives ,” accessed 16 June 2023, AI initiatives (coe.int).

Dafoe Allan . 2018 . AI Governance: A Research Agenda . Oxford: Governance of AI Program , Future of Humanity Institute, University of Oxford . www.fhi.ox.ac.uk/govaiagenda .

Dahl Robert . 1999 . “ Can International Organizations Be Democratic: A Skeptic's View .” In Democracy's Edges , edited by Shapiro Ian , Hacker-Córdon Casiano , 19 – 36 . Cambridge : Cambridge University Press .

Dellmuth Lisa M. , and Tallberg Jonas . 2017 . “ Advocacy Strategies in Global Governance: Inside versus Outside Lobbying .” Political Studies . 65 ( 3 ): 705 – 23 .

Ding Jeffrey . 2018 . Deciphering China's AI Dream: The Context, Components, Capabilities and Consequences of China's Strategy to Lead the World in AI . Oxford: Centre for the Governance of AI , Future of Humanity Institute, University of Oxford .

Dreher Axel , and Lang Valentin . 2019 . “ The Political Economy of International Organizations .” In The Oxford Handbook of Public Choice , Volume 2, edited by Congleton Roger O. , Grofman Bernhard , Voigt Stefan . Oxford : Oxford University Press .

Dreher Axel , Lang Valentin , Rosendorff B. Peter , and Vreeland James R. . 2022 . “ Bilateral or Multilateral? International Financial Flows and the Dirty Work-Hypothesis .” The Journal of Politics . 84 ( 4 ): 1932 – 1946 .

Drezner Daniel W . 2019 . “ Technological Change and International Relations .” International Relations . 33 ( 2 ): 286 – 303 .

Dryzek John . 2011 . “ Global Democratization: Soup, Society, or System? ” Ethics & International Affairs , 25 ( 2 ): 211 – 234 .

Ebers Martin . 2022 . “ Explainable AI in the European Union: An Overview of the Current Legal Framework(s) .” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Lianne Colonna and Stanley Greenstein . Stockholm: The Swedish Law and Informatics Institute, Stockholm University .

Eilstrup-Sangiovanni Mette , and Westerwinter Oliver . 2021 . “ The Global Governance Complexity Cube: Varieties of Institutional Complexity in Global Governance .” Review of International Organizations . 17 (2): 233 – 262 .

Erman Eva , and Furendal Markus . 2022a . “ The Global Governance of Artificial Intelligence: Some Normative Concerns .” Moral Philosophy & Politics . 9 (2): 267−291. https://www.degruyter.com/document/doi/10.1515/mopp-2020-0046/html .

Erman Eva , and Furendal Markus . 2022b . “ Artificial Intelligence and the Political Legitimacy of Global Governance .” Political Studies . https://journals.sagepub.com/doi/full/10.1177/00323217221126665 .

Erman Eva . 2020 . “ A Function-Sensitive Approach to the Political Legitimacy of Global Governance .” British Journal of Political Science . 50 ( 3 ): 1001 – 24 .

Fazelpour Sina , and Danks David . 2021 . “ Algorithmic Bias: Senses, Sources, Solutions .” Philosophy Compass . 16 ( 8 ): e12760.

Finnemore Martha , and Sikkink Kathryn . 1998 . “ International Norm Dynamics and Political Change .” International Organization . 52 ( 4 ): 887 – 917 .

Floridi Luciano , Cowls Josh , Beltrametti Monica , Chatila Raja , Chazerand Patrice , Dignum Virginia , Luetge Christoph et al.  2018 . “ AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations .” Minds and Machines . 28 ( 4 ): 689 – 707 .

Frey Carl Benedikt . 2019 . The Technology Trap: Capital, Labor, and Power in the Age of Automation . Princeton, NJ : Princeton University Press .

Frieden Jeffry , Lake David A. , and Lawrence Broz J. 2017 . International Political Economy: Perspectives on Global Power and Wealth . Sixth Edition. New York, NY : W.W. Norton .

Future of Life Institute , 2023 . “ Pause Giant AI Experiments: An Open Letter .” Accessed June 13, 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ .

Gabriel Iason , 2022 . “ Toward a Theory of Justice for Artificial Intelligence .” Daedalus , 151 ( 2 ): 218 – 31 .

Gilpin Robert . 1987 . The Political Economy of International Relations . Princeton, NJ : Princeton University Press .

Google . 2022 . “ Artificial Intelligence at Google: Our Principles .” Internet (last accessed August 25, 2023): https://ai.google/principles/ .

Greenstein Stanley . 2022 . “ Liability in the Era of Artificial Intelligence .” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm: The Swedish Law and Informatics Institute, Stockholm University .

Gruber Lloyd . 2000 . Ruling the World . Princeton, NJ : Princeton University Press .

Haas Peter . 1992 . “ Introduction: Epistemic Communities and International Policy Coordination .” International Organization . 46 ( 1 ): 1 – 36 .

Haftel Yoram Z. , and Lenz Tobias . 2021 . “ Measuring Institutional Overlap in Global Governance .” Review of International Organizations . 17(2) : 323 – 347 .

Hagendorff Thilo . 2020 . “ The Ethics of AI Ethics: an Evaluation of Guidelines .” Minds and Machines . 30 ( 1 ): 99 – 120 .

Hanegraaff Marcel , Beyers Jan , and De Bruycker Iskander . 2016 . “ Balancing Inside and Outside Lobbying: The Political Strategy of Lobbyists at Global Diplomatic Conferences .” European Journal of Political Research . 55 ( 3 ): 568 – 88 .

Hanegraaff Marcel , Braun Caelesta , De Bièvre Dirk , and Beyers Jan . 2015 . “ The Domestic and Global Origins of Transnational Advocacy: Explaining Lobbying Presence During WTO Ministerial Conferences .” Comparative Political Studies . 48 : 1591 – 621 .

Hawkins Darren G. , Lake David A. , Nielson Daniel L. , Tierney Michael J. Eds. 2006 . Delegation and Agency in International Organizations . Cambridge : Cambridge University Press .

Heikkilä Melissa . 2022a . “ AI: Decoded. IoT Under Fire—Defining AI?—Meta's New AI Supercomputer .” Accessed June 5, 2022, https://www.politico.eu/newsletter/ai-decoded/iot-under-fire-defining-ai-metas-new-ai-supercomputer-2 /.

Heikkilä Melissa . 2022b . “ AI: Decoded. A Dutch Algorithm Scandal Serves a Warning to Europe—The AI Act won't Save Us .” Accessed June 5, 2022, https://www.politico.eu/newsletter/ai-decoded/a-dutch-algorithm-scandal-serves-a-warning-to-europe-the-ai-act-wont-save-us-2/ .

Henriques-Gomes Luke . 2020 . “ Robodebt: Government Admits It Will Be Forced to Refund $550 m under Botched Scheme .” The Guardian . sec. Australia news . Internet (last accessed August 25, 2023): https://www.theguardian.com/australia-news/2020/mar/27/robodebt-government-admits-it-will-be-forced-to-refund-550m-under-botched-scheme .

Hernandez Joe . 2021 . “ A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says .” National Public Radio . Internet (last accessed August 25, 2023): https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d .

High-Level Expert Group on Artificial Intelligence . 2019 . Ethics Guidelines for Trustworthy AI . Brussels: European Commission . https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai .

Horowitz Michael . 2018 . “ Artificial Intelligence, International Competition, and the Balance of Power .” Texas National Security Review . 1 ( 3 ): 37 – 57 .

Horowitz Michael C. , Allen Gregory C. , Kania Elsa B. , Scharre Paul . 2018 . Strategic Competition in an Era of Artificial Intelligence . Washington D.C. : Center for a New American Security .

Hu Krystal . 2023 . ChatGPT Sets Record for Fastest-Growing User Base—Analyst Note. Reuters , February 2, 2023, sec. Technology , Accessed June 12, 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ .

Jasanoff Sheila . 2016 . The Ethics of Invention: Technology and the Human Future . New York : Norton .

Jensen Benjamin M. , Whyte Christopher , and Cuomo Scott . 2020 . “ Algorithms at War: the Promise, Peril, and Limits of Artificial Intelligence .” International Studies Review . 22 ( 3 ): 526 – 50 .

Jobin Anna , Ienca Marcello , and Vayena Effy . 2019 . “ The Global Landscape of AI Ethics Guidelines .” Nature Machine Intelligence . 1 ( 9 ): 389 – 99 .

Johnson J. 2019 . “ Artificial intelligence & Future Warfare: Implications for International Security .” Defense & Security Analysis . 35 ( 2 ): 147 – 69 .

Jönsson Christer , and Tallberg Jonas . Forthcoming. “Opening up to Civil Society: Access, Participation, and Impact .” In Handbook on Governance in International Organizations , edited by Edgar Alistair . Cheltenham : Edward Elgar Publishing .

Kania E. B . 2017 . Battlefield Singularity: Artificial Intelligence, Military Revolution, and China's Future Military Power . Washington D.C.: CNAS .

Keohane Robert O . 1984 . After Hegemony . Princeton, NJ : Princeton University Press .

Keohane Robert O. , and Victor David G. . 2011 . “ The Regime Complex for Climate Change .” Perspectives on Politics . 9 ( 1 ): 7 – 23 .

König Pascal D. and Georg Wenzelburger 2022 . “ Between Technochauvinism and Human-Centrism: Can Algorithms Improve Decision-Making in Democratic Politics? ” European Political Science , 21 ( 1 ): 132 – 49 .

Koremenos Barbara , Lipson Charles , and Snidal Duncan . 2001 . “ The Rational Design of International Institutions .” International Organization . 55 ( 4 ): 761 – 99 .

Koremenos Barbara . 2016 . The Continent of International Law: Explaining Agreement Design . Cambridge : Cambridge University Press .

Korinek Anton , and Stiglitz Joseph E. . 2019 . “ Artificial Intelligence and Its Implications for Income Distribution and Unemployment .” In The Economics of Artificial Intelligence: An Agenda , edited by Agrawal A. , Gans J. , and Goldfarb A. . Chicago, IL : University of Chicago Press .

Koven Levit Janet . 2007 . “ Bottom-Up International Lawmaking: Reflections on the New Haven School of International Law .” Yale Journal of International Law . 32 : 393 – 420 .

Krasner Stephen D . 1991 . “ Global Communications and National Power: Life on the Pareto Frontier .” World Politics . 43 ( 3 ): 336 – 66 .

Kunz Martina , and hÉigeartaigh Seán Ó . 2020 . “ Artificial Intelligence and Robotization .” In The Oxford Handbook on the International Law of Global Security , edited by Geiss Robin , Melzer Nils . Oxford : Oxford University Press .

Lake David. A . 2013 . “ Theory is Dead, Long Live Theory: The End of the Great Debates and the rise of eclecticism in International Relations .” European Journal of International Relations . 19 ( 3 ): 567 – 87 .

Leung Jade . 2019 . “ Who Will Govern Artificial Intelligence? Learning from the History of Strategic Politics in Emerging Technologies .” Doctoral dissertation . Oxford : University of Oxford .

Livingston Steven , and Risse Mathias . 2019 . “ The Future Impact of Artificial Intelligence on Humans and Human Rights .” Ethics & International Affairs . 33 ( 2 ): 141 – 58 .

Maas Matthijs M . 2019a . “ How Viable is International Arms Control for Military Artificial Intelligence? Three Lessons from Nuclear Weapons .” Contemporary Security Policy . 40 ( 3 ): 285 – 311 .

Maas Matthijs M . 2019b . “ Innovation-proof Global Governance for Military Artificial Intelligence? How I Learned to Stop Worrying, and Love the Bot ,” Journal of International Humanitarian Legal Studies . 10 ( 1 ): 129 – 57 .

Maas Matthijs M . 2021 . Artificial Intelligence Governance under Change: Foundations, Facets, Frameworks . PhD dissertation . Copenhagen: University of Copenhagen .

Martin Lisa L . 1992 . “ Interests, Power, and Multilateralism .” International Organization . 46 ( 4 ): 765 – 92 .

Martin Lisa L. , and Simmons Beth A. . 2012 . “ International Organizations and Institutions .” In Handbook of International Relations , edited by Carlsnaes Walter , Risse Thomas , Simmons Beth A. . London : SAGE .

McCarthy John , Minsky Marvin L. , Rochester Nathaniel , and Shannon Claude E . 1955 . “ A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence .” AI Magazine . 27 ( 4 ): 12 – 14 (reprint) .

Mearsheimer John J. . 1994 . “ The False Promise of International Institutions .” International Security , 19 ( 3 ): 5 – 49 .

Metz Cade . 2022 . “ Lawsuit Takes Aim at the Way A.I. Is Built .” The New York Times , November 23 . Accessed June 21, 2023. https://www.nytimes.com/2022/11/23/technology/copilot-microsoft-ai-lawsuit.html .

Misuraca Gianluca , van Noordt Colin 2022 . “ Artificial Intelligence for the Public Sector: Results of Landscaping the Use of AI in Government Across the European Union .” Government Information Quarterly . 101714 . https://doi.org/10.1016/j.giq.2022.101714 .

Müller Vincent C . 2020 . “ Ethics of Artificial Intelligence and Robotics .” In Stanford Encyclopedia of Philosophy , edited by Zalta Edward N. Internet (last accessed August 25, 2023): https://plato.stanford.edu/archives/fall2020/entries/ethics-ai/ .

Niklas Jedrzen , Dencik Lina . 2021 . “ What Rights Matter? Examining the Place of Social Rights in the EU's Artificial Intelligence Policy Debate .” Internet Policy Review . 10 ( 3 ): 1 – 29 .

OECD . 2021 . “ OECD AI Policy Observatory .” Accessed February 17, 2022. https://oecd.ai .

O'Neil Cathy . 2017 . Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy . UK : Penguin Books .

Orakhelashvili Alexander . 2019 . Akehurst's Modern Introduction to International Law , Eighth Edition . London : Routledge .

Payne K . 2021 . I, Warbot: The Dawn of Artificially Intelligent Conflict . Oxford: Oxford University Press .

Payne Kenneth . 2018 . “ Artificial Intelligence: a Revolution in Strategic Affairs?” . Survival . 60 ( 5 ): 7 – 32 .

Petman Jarna . 2017 . Autonomous Weapons Systems and International Humanitarian Law: ‘Out of the Loop’ . Helsinki : The Eric Castren Institute of International Law and Human Rights .

Powers Thomas M. , and Ganascia Jean-Gabriel . 2020 . “ The Ethics of the Ethics of AI .” In The Oxford Handbook of Ethics of AI , edited by Dubber Markus D. , Pasquale Frank , Das Sunit , 25 – 51 . Oxford : Oxford University Press .

Rademacher Timo . 2019 . “ Artificial Intelligence and Law Enforcement .” In Regulating Artificial Intelligence , edited by Wischmeyer Thomas , Rademacher Timo , 225 – 54 . Cham: Springer .

Radu Roxana . 2021 . “ Steering the Governance of Artificial Intelligence: National Strategies in Perspective .” Policy and Society . 40 ( 2 ): 178 – 93 .

Raustiala Kal and David G. Victor . 2004 .“ The Regime Complex for Plant Genetic Resources .” International Organization , 58 ( 2 ): 277 – 309 .

Rességuier Anaïs , and Rodrigues Rowena . 2020 . “ AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics .” Big Data & Society . 7 ( 2 ). https://doi.org/10.1177/2053951720942541 .

Risse Thomas . 2012 . “ Transnational Actors and World Politics .” In Handbook of International Relations , 2nd ed., edited by Carlsnaes Walter , Risse Thomas , Simmons Beth A. . London : Sage .

Roach Steven C. , and Eckert Amy , eds. 2020 . Moral Responsibility in Twenty-First-Century Warfare: Just War Theory and the Ethical Challenges of Autonomous Weapons Systems . Albany, NY : State University of New York .

Roberts Huw , Cowls Josh , Morley Jessica , Taddeo Mariarosaria , Wang Vincent , and Floridi Luciano . 2021 . “ The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation .” AI & Society . 36 ( 1 ): 59 – 77 .

Rosenau James N . 1999 . “ Toward an Ontology for Global Governance .” In Approaches to Global Governance Theory , edited by Hewson Martin , Sinclair Timothy J. , 287 – 301 . Albany, NY : SUNY Press .

Rosert Elvira , and Sauer Frank . 2018 . Perspectives for Regulating Lethal Autonomous Weapons at the CCW: A Comparative Analysis of Blinding Lasers, Landmines, and LAWS . Paper prepared for the workshop “New Technologies of Warfare: Implications of Autonomous Weapons Systems for International Relations,” 5th EISA European Workshops in International Studies , Groningen , 6-9 June 2018 . Internet (last accessed August 25, 2023): https://www.academia.edu/36768452/Perspectives_for_Regulating_Lethal_Autonomous_Weapons_at_the_CCW_A_Comparative_Analysis_of_Blinding_Lasers_Landmines_and_LAWS

Rosert Elvira , and Sauer Frank . 2019 . “ Prohibiting Autonomous Weapons: Put Human Dignity First .” Global Policy . 10 ( 3 ): 370 – 5 .

Russell Stuart J. , and Norvig Peter . 2020 . Artificial Intelligence: A Modern Approach . Boston, MA : Pearson .

Ruzicka Jan . 2018 . “ Behind the Veil of Good Intentions: Power Analysis of the Nuclear Non-proliferation Regime .” International Politics . 55 ( 3 ): 369 – 85 .

Saif Hassan , Dickinson Thomas , Kastler Leon , Fernandez Miriam , and Alani Harith . 2017 . “ A Semantic Graph-Based Approach for Radicalisation Detection on Social Media .” ESWC 2017: The Semantic Web—Proceedings, Part 1 , 571 – 87 . Cham : Springer .

Schiff Daniel , Biddle Justin , Borenstein Jason , and Laas Kelly . 2020 . “ What’s Next for AI Ethics, Policy, and Governance? A Global Overview .” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society . New York, NY : ACM .

Schmitt Lewin . 2021 . “ Mapping Global AI Governance: A Nascent Regime in a Fragmented Landscape .” AI and Ethics . 2 ( 2 ): 303 – 314 .

Scholte Jan Aart . ed. 2011 . Building Global Democracy? Civil Society and Accountable Global Governance . Cambridge : Cambridge University Press .

Schwitzgebel Eric , and Garza Mara . 2015 . “ A Defense of the Rights of Artificial Intelligences .” Midwest Studies In Philosophy . 39 ( 1 ): 98 – 119 .

Sparrow Robert . 2007 . “ Killer Robots .” Journal of Applied Philosophy . 24 ( 1 ): 62 – 77 .

Steffek Jens , and Nanz Patrizia . 2008 . “ Emergent Patterns of Civil Society Participation in Global and European Governance .” In Civil Society Participation in European and Global Governance , edited by Steffek Jens , Kissling Claudia , and Nanz Patrizia , 1 – 29 . Basingstoke : Palgrave Macmillan .

Stone Randall. W . 2011 . Controlling Institutions: International Organizations and the Global Economy . Cambridge : Cambridge University Press .

Stop Killer Robots . 2023 . “ About Us.” . Accessed June 13, 2023, https://www.stopkillerrobots.org/about-us/ .

Susser Daniel , Roessler Beate , Nissenbaum Helen . 2019 . “ Technology, Autonomy, and Manipulation .” Internet Policy Review . 8 ( 2 ). https://doi.org/10.14763/2019.2.1410 .

Taeihagh Araz . 2021 . “ Governance of Artificial Intelligence .” Policy and Society . 40 ( 2 ): 137 – 57 .

Tallberg Jonas , Sommerer Thomas , Squatrito Theresa , and Jönsson Christer . 2013 . The Opening Up of International Organizations . Cambridge : Cambridge University Press .

Tasioulas John . 2019 . “ First Steps Towards an Ethics of Robots and Artificial Intelligence .” The Journal of Practical Ethics . 7 ( 1 ): 61-95. https://doi.org/10.2139/ssrn.3172840 .

Thompson Nicholas , and Bremmer Ian . 2018. “ The AI Cold War that Threatens us all .” Wired, October 23. Internet (last accessed August 25, 2023): https://www.wired.com/story/ai-cold-war-china-coulddoom-us-all/ .

Trager Robert F. , and Luca Laura M. . 2022 . “ Killer Robots Are Here—And We Need to Regulate Them .” Foreign Policy, May 11 . Internet (last accessed August 25, 2023): https://foreignpolicy.com/2022/05/11/killer-robots-lethal-autonomous-weapons-systems-ukraine-libya-regulation/

Ubena John . 2022 . “ Can Artificial Intelligence be Regulated?” . Lessons from Legislative Techniques . In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm: The Swedish Law and Informatics Institute , Stockholm University.

Uhre Andreas Nordang . 2014 . “ Exploring the Diversity of Transnational Actors in Global Environmental Governance .” Interest Groups & Advocacy . 3 ( 1 ): 59 – 78 .

Ulnicane Inga . 2021 . “ Artificial Intelligence in the European Union: Policy, Ethics and Regulation .” In The Routledge Handbook of European Integrations , edited by Hoerber Thomas , Weber Gabriel , Cabras Ignazio . London : Routledge .

Valentini Laura . 2013 . “ Justice, Disagreement and Democracy .” British Journal of Political Science . 43 ( 1 ): 177 – 99 .

Valentini Laura . 2012 . “ Assessing the Global Order: Justice, Legitimacy, or Political Justice?” . Critical Review of International Social and Political Philosophy . 15 ( 5 ): 593 – 612 .

Vredenburgh Kate . 2022 . “ Fairness .” In The Oxford Handbook of AI Governance , edited by Bullock Justin B. , Chen Yu-Che , Himmelreich Johannes , Hudson Valerie M. , Korinek Anton , Young Matthew M. , Zhang Baobao . Oxford : Oxford University Press .

Verdier Daniel . 2022 . “ Bargaining Strategies for Governance Complex Games .” The Review of International Organizations , 17 ( 2 ): 349 – 371 .

Wagner Ben . 2018 . “ Ethics as an Escape from Regulation. From “Ethics-washing” to Ethics-shopping? .” In Being Profiled: Cogitas Ergo Sum. 10 Years of ‘Profiling the European Citizen , edited by Bayamiloglu Emre , Baraliuc Irina , Janssens Liisa , Hildebrandt Mireille Amsterdam : Amsterdam University Press .

Wahlgren Peter . 2022 . “ How to Regulate AI?” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm: The Swedish Law and Informatics Institute, Stockholm University .

Weale Albert . 1999 . Democracy . New York : St Martin's Press .

Winfield Alan F. , Michael Katina , Pitt Jeremy , and Evers Vanessa . 2019 . “ Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems .” Proceedings of the IEEE . 107 ( 3 ): 509 – 17 .

Zaidi Waqar , Dafoe Allan . 2021 . International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons . Working Paper 2021: 9 . Oxford : Centre for the Governance of AI .

Zhu J. 2022 . “ AI ethics with Chinese Characteristics? Concerns and preferred solutions in Chinese academia .” AI & Society . https://doi.org/10.1007/s00146-022-01578-w .

Zimmermann Annette , and Lee-Stronach Chad . 2022 . “ Proceed with Caution .” Canadian Journal of Philosophy . 52 ( 1 ): 6 – 25 .
