Research Article

First do no harm: An exploration of researchers’ ethics of conduct in Big Data behavioral studies

Maddalena Favaretto*: Conceptualization, Data curation, Formal analysis, Methodology, Writing – original draft. Affiliation: Institute for Biomedical Ethics, University of Basel, Basel, Switzerland.

Eva De Clercq: Formal analysis, Methodology, Supervision, Validation, Writing – original draft.

Jens Gaab: Validation, Writing – review & editing. Affiliation: Division of Clinical Psychology and Psychotherapy, Faculty of Psychology, University of Basel, Basel, Switzerland.

Bernice Simone Elger: Funding acquisition, Supervision, Validation, Writing – review & editing.

* Corresponding author. E-mail: [email protected]

  • Published: November 5, 2020
  • https://doi.org/10.1371/journal.pone.0241865

Abstract

Research ethics has traditionally been guided by well-established documents such as the Belmont Report and the Declaration of Helsinki. At the same time, the introduction of Big Data methods, which are having a great impact on behavioral research, is raising complex ethical issues that make the protection of research participants an increasingly difficult challenge. Through 39 semi-structured interviews with academic scholars in Switzerland and the United States, our research explores the codes of ethics and research practices of academic scholars involved in Big Data studies in the fields of psychology and sociology, in order to understand whether the principles set by the Belmont Report are still considered relevant in Big Data research. Our study shows that scholars generally find the traditional principles a suitable guide for ethical data research but, at the same time, recognize the challenges embedded in their practical application. In addition, given the growing introduction of new actors into scholarly research, such as data holders and owners, participants questioned whether the responsibility to protect research participants should fall solely on investigators. To appropriately address ethical issues in Big Data research projects, education in ethics and exchange and dialogue between research teams and scholars from different disciplines should be enhanced. In addition, models of consultancy and shared responsibility between investigators, data owners and review boards should be implemented to ensure better protection of research participants.

Citation: Favaretto M, De Clercq E, Gaab J, Elger BS (2020) First do no harm: An exploration of researchers’ ethics of conduct in Big Data behavioral studies. PLoS ONE 15(11): e0241865. https://doi.org/10.1371/journal.pone.0241865

Editor: Daniel Jeremiah Hurst, Rowan University School of Osteopathic Medicine, UNITED STATES

Received: July 22, 2020; Accepted: October 21, 2020; Published: November 5, 2020

Copyright: © 2020 Favaretto et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The raw data and the transcripts related to the project cannot be openly released due to ethical constraints (such as easy re-identification of the participants and the sensitive nature of parts of the interviews). The main points of contact for fielding data access requests for this manuscript are: the Head of the Institute for Biomedical Ethics (Bernice Elger: [email protected]), the corresponding author (Maddalena Favaretto: [email protected]), and Anne-Christine Loschnigg ([email protected]). Data sharing is contingent on the data being handled appropriately by the data requester and in accordance with all applicable local requirements. Upon request, a data sharing agreement will be stipulated between the Institute for Biomedical Ethics and the requesting party, stating that: 1) the shared data must be deleted by the end of 2023, as stipulated in the recruitment email sent to the study participants and designed in accordance with the NRP 75 project proposal sent to the Ethics Committee northwest/central Switzerland (EKNZ); 2) those requesting the data agree to ensure its confidentiality, not to attempt to re-identify the participants, and not to share the data with any third party not covered by the data sharing agreement signed between the Institute for Biomedical Ethics and those requesting the data; 3) the data will be shared only after the Institute for Biomedical Ethics has received specific written consent for data sharing from the study participants.

Funding: The funding for this study was provided by the Swiss National Science Foundation in the framework of the National Research Program “Big Data”, NRP 75 (Grant-No: 407540_167211, recipient: Prof. Bernice Simone Elger). We confirm that the Swiss National Science Foundation had no involvement in the study design, collection, analysis, and interpretation of data, the writing of the manuscript and the decision to submit the paper for publication.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Big Data methods have a great impact on the behavioral sciences [ 1 – 3 ], but they challenge the traditional interpretation and validity of research principles in psychology and sociology by raising new and unpredictable ethical concerns. Traditionally, research ethics has been guided by well-established reports and declarations such as the Belmont Report and the Declaration of Helsinki [ 4 – 6 ]. At the core of these documents are three fundamental principles (respect for persons, beneficence, and justice) and their related interpretations and practices, such as the acknowledgment of participants' autonomous participation and the need to obtain informed consent, minimization of harm, risk-benefit assessment, fairness in the distribution and dissemination of research outcomes, and fair participant selection (e.g. to avoid placing additional burdens on vulnerable populations) [ 7 ].

As data stemming from human interactions becomes increasingly available to scholars, thanks to a) the increased distribution of technological devices, b) the growing use of digital services, and c) the implementation of new digital technologies [ 8 , 9 ], researchers and institutional bodies are confronted with novel ethical questions. These encompass the harm that might be caused by linking publicly available datasets on research participants [ 10 ], the level of privacy users expect on digital platforms such as social media [ 11 ], the level of protection investigators should ensure for participants' anonymity in research using sensing devices and tracking technologies [ 12 ], and the role of individuals in consenting to participate in large-scale data studies [ 13 ].

Consent is one of the most challenged practices in data research. In this context, subjects are often unaware that their data are being collected and analyzed, and they lack appropriate control over those data, which deprives them of the possibility to withdraw from a study and thus of autonomous participation [ 14 , 15 ]. When it comes to the principle of beneficence, Big Data raises issues around the appropriate risk-benefit ratio for participants, as it becomes more difficult for researchers to anticipate unintended harmful consequences [ 8 ]. For example, it is increasingly complicated to ensure the anonymity of participants, as risks of re-identification abound in Big Data practices [ 12 ]. Finally, interventions and knowledge developed from Big Data research might benefit only part of the population, creating issues of justice and fairness [ 10 ]; this is mainly due to the deepening of the digital divide between people who have access to digital resources and those who do not, along a significant number of demographic variables such as income, ethnicity, age, skills, geographical location and gender [ 10 , 16 ].

There is evidence that researchers and regulatory bodies are struggling to appropriately address these novel ethical questions raised by Big Data. For instance, a group of researchers based at Queen Mary University of London used a model of geographic profiling on a series of publicly available datasets in order to reveal the identity of the famous British artist Banksy [ 17 ]. The study was criticized by scholars for disrespecting the privacy of a private citizen and their family and for deliberately violating the artist's right to, and preference for, remaining anonymous [ 18 ]. Another example is the now infamous case of the Emotional Contagion study. Using specialized software, a research team manipulated the News Feeds of 689,003 Facebook users in order to investigate how "emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness" [ 19 ]. Ethics scholars and the public criticized this study because it was performed without obtaining appropriate consent from Facebook users and could have caused psychological harm by showing participants only negative feeds on their homepages [ 20 , 21 ].

Given these substantial challenges, it is legitimate to ask whether the principles set by the Belmont Report are still relevant for digital research practices. Scholars advocate for the construction of flexible guidelines and for the need to revise, reshape and update the guiding principles of research ethics in order to overcome the challenges raised in data research and provide adequate assistance to investigators [ 22 – 24 ].

As ethics governance of Big Data research is currently under debate, researchers' own ethical attitudes significantly influence how ethical issues are presently dealt with. Since researchers are experts on the technical details of their own research, it is also useful for research ethicists and members of ethics committees and Institutional Review Boards (IRBs) to be knowledgeable about these attitudes. This paper therefore aims to explore the codes of ethics and research practices of scientists involved in Big Data studies in the behavioral sciences, in order to investigate perceived strategies for promoting ethical and responsible conduct of Big Data research. We conducted interviews with researchers in the fields of sociology and psychology at eminent universities in Switzerland and the United States, asking them to share the strategies they develop to protect research participants in their projects; the ethical principles they apply; their opinions on how Big Data research should ideally be conducted; and the ethical challenges they have faced in their research. The present study contributes to the existing literature on the codes of conduct of researchers involved in digital research in different countries and on the value of traditional ethical principles [ 14 , 22 , 23 ], informing the discussion around the construction of harmonized and applicable principles for Big Data studies. This manuscript investigates the following research questions: 1) which ethical principles can still be considered relevant for Big Data research in the behavioral sciences; 2) what challenges Big Data methods pose to traditional ethical principles; and 3) what responsibilities and roles investigators have in reflecting upon strategies to protect research participants.

Material and methods

This study is part of a larger research project that investigated the ethical and regulatory challenges of Big Data research. We decided to focus on the behavioral sciences, specifically psychology and sociology, for two main reasons. First, the larger research project aimed at investigating the challenges that Big Data methods introduce for regulatory bodies such as Research Ethics Committees (RECs) and Institutional Review Boards (IRBs) [ 25 ]. Both in Switzerland and the United States, Big Data research methods in these two fields are calling the concept of the human research subject into question, owing to the increased distance and detachment between research subjects and investigators brought about by digital means of data collection (e.g. social media profiles, data networks, transaction logs) and analysis [ 18 ]. As a consequence, the legislation currently regulating academic research, such as the Human Research Act (HRA) [ 26 ], the Federal Act on Data Protection [ 27 ] and the Common Rule [ 18 ], is increasingly being challenged. Second, especially in Switzerland, behavioral studies using Big Data methods are at the moment among the most underregulated types of research projects [ 26 , 28 , 29 ]. In fact, the current definition of the human subject leaves many Big Data projects outside the scope of regulatory oversight despite the ethical challenges they may pose. For instance, according to the HRA, research that involves anonymized data from research participants does not need ethics approval [ 26 ].

In addition, we selected Switzerland and the United States as recruitment sites. Switzerland, where Big Data research is a fairly recent phenomenon, was chosen because the study was designed, funded and conducted there. The United States was selected as a comparative sample, since advanced Big Data research has been taking place there in academia for several years, as evidenced by the numerous grants for Big Data research projects awarded by federal institutions such as the National Science Foundation (NSF) [ 30 , 31 ] and the National Institutes of Health (NIH) [ 32 ].

For the purpose of our study we defined Big Data as an overarching umbrella term designating a set of advanced digital techniques (e.g. data mining, neural networks, deep learning, artificial intelligence, natural language processing, profiling, scoring systems) that are increasingly used in research to analyze large datasets with the aim of revealing patterns, trends and associations about individuals, groups and society in general [ 33 ]. Within this definition we selected participants who conducted heterogeneous Big Data research projects: from internet-based research and social media studies, to aggregate analysis of corporate datasets, to behavioral research using sensing devices. Participants were selected on the basis of their involvement in Big Data research, systematically, by browsing the professional pages of all professors affiliated with the departments of psychology and sociology of all twelve Swiss universities and of the top ten American universities according to the Times Higher Education World University Rankings 2018. Other candidates were identified through snowballing. Through our systematic selection we also identified a substantial number of researchers with a background in data science who were involved in behavioral science research projects (in sociology, psychology and similar fields) at the time of their interview. Since their profiles matched the selection criteria, we included them in our sample.

We conducted 39 semi-structured interviews with academic scholars involved in research projects adopting Big Data methodologies. Twenty participants were from Swiss universities and 19 from American institutions. The sample comprised a majority of professors (n = 34) and a few senior researchers or postdocs (n = 5). Ethics approval was sought from the Ethics Committee northwest/central Switzerland (EKNZ), which deemed our study exempt. Oral informed consent was obtained before the start of each interview. Interviews were administered using a semi-structured interview guide developed through consensus and discussion, after the research team had familiarized itself with the literature on Big Data research and data ethics. The questions explored topics such as: ethical issues related to Big Data studies in the behavioral sciences; ethics of conduct with regard to Big Data research projects; institutional regulatory practices; definitions and understandings of the term Big Data; and opinions on data-driven studies ( Table 1 ).

Table 1. https://doi.org/10.1371/journal.pone.0241865.t001

Interviews were tape-recorded and transcribed verbatim. We then transferred the transcripts into the qualitative analysis software MAXQDA (version 2018) to support data management and the analytic process [ 34 ]. The dataset was analyzed using thematic analysis [ 35 ]. The first four interviews were independently read and coded by two members of the research team in order to explore the thematic elements of the interviews. To ensure consistency during the analysis, the two researchers then compared their preliminary open-ended coding and developed an expanded coding scheme that was used for all remaining transcripts. Several themes relevant to this study were agreed upon during the coding sessions, such as: a) responsibility and the role of the researcher in Big Data research; b) research standards for Big Data studies; c) attitudes towards the use of publicly available data; d) emerging ethical issues in Big Data studies. Since part of the data has already been published, we refer to a previous publication [ 33 ] for additional information on methodology, project design, data collection and data analysis.

Researchers' codes of ethics for Big Data studies were chosen as a topic to explore because participants, in identifying several ethical challenges related to Big Data, expressed concerns regarding the protection of human subjects in digital research and shared strategies and opinions on how to conduct Big Data studies ethically. Consequently, all interview passages coded under the aforementioned topics were read again, analyzed and sorted into sub-topics. This phase was performed by the first author, with the second author supervising and checking for consistency and accuracy.

Results

For this study we conducted 39 interviews: 21 with sociologists (9 from CH and 12 from the US), 11 with psychologists (6 from CH and 5 from the US), and 7 with data scientists (5 from CH and 2 from the US). Among them, 27 scholars (9 from CH and 18 from the US) stated that they were working on Big Data research projects or on projects involving Big Data methodologies, four participants (all from CH) noted that they were not involved in Big Data research, and eight (7 from CH and one from the US) were unsure whether their research could be described or considered as Big Data research ( Table 2 ).

Table 2. https://doi.org/10.1371/journal.pone.0241865.t002

While discussing codes of ethics and ethical practices for Big Data research, respondents both a) shared the personal strategies they implemented in their own research projects to protect research subjects, and b) discussed more generally the appropriate research practices to be implemented in Big Data research. Table 3 illustrates the types of Big Data our participants were working with at the time of the interview.

Table 3. https://doi.org/10.1371/journal.pone.0241865.t003

Our analysis identified several themes and subthemes. They were then divided and analyzed within three major thematic clusters: a) ethical principles for Big Data research; b) challenges that Big Data is introducing for research principles; c) ethical reflection and responsibility in research. Table 4 reports the themes and subthemes that emerged from the interviews and their occurrence in the dataset. Representative anonymized quotes were taken from the interviews to further illustrate the reported results.

Table 4. https://doi.org/10.1371/journal.pone.0241865.t004

Ethical principles for digital research

Belmont principles, beneficence and avoiding harm.

First, many of the respondents shared their opinions on which ethical guidelines and principles they consider important for conducting ethical research in the digital era. Table 5 reports the number of researchers who mentioned a specific ethical principle or research practice as relevant for Big Data research.

Table 5. https://doi.org/10.1371/journal.pone.0241865.t005

Three of our participants referred generally to the principles stated in the Belmont Report and those of the Declaration of Helsinki.

I think the Belmont Report principles. The starting point so. . . .you know beneficence, respect for the individuals, justice… and applying those and they would take some work for how to apply those exactly or what it would mean translating to this context but that would be the starting point (P18, US–data science).

Minimization of harm to research participants and the importance of beneficence were common concerns, cited as prominent components of scholarly research.

And…on an ethical point of view… and I guess we should be careful that experiment doesn’t harm people or not offend people for example if it's about religion or something like that it can be tricky (P25, CH–psychology).

Beneficence, in the context of digital Big Data research, was sometimes associated with the possibility of giving back to the community as a sort of tradeoff for the inconvenience that research might cause to research participants. On this, P9, an American sociologist, shared:

I mean it's interesting that the ethical challenges that I faced… (pause) had more to do with whether I feel, for instance in working in the developing world…is it really beneficial to the people that I'm working with, I mean what I'm doing. You know I make heavy demands on these people so one of the ethical challenges that I face is, am I giving back enough to the community.

Another American scholar, a psychologist, was concerned about how to define acceptable risks in digital research and how to find the right balance between benefits and risks in research projects.

P17: Expecting benefit from a study that should outweigh the respective risks. I mean, I think that's a pretty clear one. This is something I definitely I don't know the answer to and I'm curious about how much other people have thought about it. Because like what is an acceptable sort of variation in expected benefits and risks. Like, you could potentially say “on average my study is expected to deliver higher benefits than risks”… there's an open question of like, … some individuals might regardless suffer under your research or be hurt. Even if some others are benefitting in some sense.

Two researchers deemed respect for participants and their personhood particularly important, irrespective of the type of research conducted. P19, an American sociologist, commented:

What I would like to see is integrity and personhood of every single individual who is researched, whether they are dead or alive, that that be respected in a very fundamental way. And that is the case whether it's Big Data, and whether is interviews, archival, ethnographic, textual or what have you. And I think this is a permanent really deep tension in wissenshaftlich ( scientific research ) activities because we are treating the people as data. And that's a fundamental tension. And I think it would be deeply important to explicitly sanitize that tension from the get-go and to hang on to that personhood and the respect for that personhood.

Informed consent and transparency.

Consent was by far the most prominent practice that emerged from the interviews, mentioned by three quarters of our participants, distributed equally between American and Swiss researchers. Numerous scholars emphasized that informed consent is at the foundation of appropriate research practices. P2, a Swiss psychologist, noted:

But of course it's pretty clear to me informed consent is very important and it’s crucial that people know what it is what kind of data is collected and when they would have the possibility of saying no and so on. I think that’s pretty standard for any type of data. (…) I mean it all goes down to informed consent.

For a few of our participants, in the era of Big Data it becomes not so much a matter of consent as a matter of awareness. Since research with Big Data could theoretically be performed without participants' knowledge, research subjects at least have to be made aware that they are part of a research project, as claimed by P38, a Swiss sociologist:

I think that everything comes down to the awareness of the subject about what is collected about them. I mean, we have collected data for ages, right? And I mean, before it was using pen and paper questionnaires, phone interviews or…there’s been data collection about private life of people for, I mean, since social science exists. So, I think the only difference now is the awareness.

Another practice our participants considered fundamental was participants' right to withdraw from a research study, which in turn translated into giving participants more control over their data in the context of Big Data research. For example, while describing their study with social media, a Swiss sociologist (P38) explained that "the condition was that everybody who participated was actually able to look at his own data and decide to drop from the survey any time". Another Swiss sociologist (P37), when describing a study design in which they asked participants to install a browser add-on to collect data on their Facebook interactions, underlined the importance of giving participants control over their data and of teaching them how to manage it, in order to create a trust-based exchange between participants and investigators:

And there you'd have to be sure that people…it's not just anonymizing them, people also need to have a control over their data, that's kind of very important because you need kind of an established trust between the research and its subjects as it were. So they would have the opportunity of uninstall the…if they're willing to take part, that's kind of the first step, and they would need to download that add-on and they'd also be instructed on how to uninstall the add-on at any point in time. They'd be also instructed on how to pause the gathering of their data at any point in time and then again also delete data that well…at first I thought it was a great study now I'm not so sure about, I want to delete everything I've ever collected.

The same researcher suggested creating regulations that grant participants ownership of research data, allowing them actual power over their participation beyond the point of initial consent.

And legal parameters then should be constructed as such that it has to be transparent, that it guards the rights of the individual (…) in terms of having ownership of their data. Particularly if it's private data they agree to give away. And they become part of a research process that only ends where their say. And they can always withdraw the data at any point in time and not just at the beginning with agreeing or not agreeing to taking part in that. But also at different other points in time. So that i think the…you have to include them more throughout your research process. Which is more of a hassle, costs more money and more time, but in the end you kind of. . . .it makes it more transparent and perhaps it makes it more interesting for them as well and that would have kind of beneficial effects for the larger public I suppose.

In addition, transparency of motives and practices was also considered a fundamental principle for digital research. For instance, transparency was seen as a way for research participants to be fully informed about the research procedures and methods used by investigators. According to a few participants, transparency is key to guaranteeing people's trust in the research system and to minimizing their worries and reservations about participating in research studies. On this, P14, an American psychologist, noted:

I think we need to have greater transparency and more. . . . You know our system, we have in the United States is that…well not a crisis, the problem that we face in the United States which you also face I'm sure, is that…you know, people have to believe that this is good stuff to do (participating in a study). And if they don't believe that this is good stuff to do then it's a problem. And so. . . .so I think that that. . . .and I think that the consent process is part of it but I think that the other part of it is that the investigators and the researchers, the investigators and the institutions, you know, need to be more transparent and more accountable and make the case that this is something worth doing and that they're being responsible about it.

A Swiss sociologist, P38, who described implementing transparency in their research project by giving participants control over the data being collected on them, highlighted that the fear individuals might feel towards digital and Big Data research may stem from a lack of information and understanding about what data investigators collect and how they use it. In this sense, transparency of practices not only encourages more individuals to trust the research system, but also assists them in making a truly informed decision about their participation in a study.

And if I remember correctly the conditions were: transparency, so every subject had to have access to the full data that we were collecting. They had also the possibility to erase everything if they wanted to and to drop from the campaign. I guess it's about transparency. (…) So, I think this is key, so you need to be transparent about what kind of data you collect and why and maybe what will happen to the data. Because people are afraid of things they don't understand so the better they understand what's happening the more they would be actually. . . . not only they will be willing to participate but also the more they will put the line in the right place. So, this I agree, this I don't agree. But the less you understand the further away you put the line and you just want to be on the safe side. So, the better they understand the better they can draw the line at the right place, and say ok: this is not your business, this I'm willing to share with you.

In addition, one of our participants considered transparency an important value between scholars from different research teams as well. According to this participant, open and transparent communication and exchange between researchers would help implement appropriate ethical norms for digital research. They shared:

But I think part of it is just having more transparency among researchers themselves. I think you need to have like more discussions like: here's what I'm doing…here's what I'm doing…just more sharing in general, I think, and more discussion. (…) People being more transparent on how they're doing their work would just create more norms around it. Because I think in many cases people don't know what other people have been doing. And that's part of the issues that, you know, it's like how do I apply these abstract standards to this case, I mean that can be though. But if you know what everybody is doing it makes a little bit easier. (P3-US, Sociologist)

On the other hand, a sociologist from Switzerland (P37) noted that the drive towards research transparency might become problematic for ensuring the anonymity of research participants: the more information is shared about research practices and methods, the greater the possibility of backtracking and re-identifying the participants in a study.

It’s problematic also because modern social science, or science anyway, has a strong and very good drive towards transparency. But transparency also means, that the more we become transparent the less we can guarantee anonymity (…) If you say: "well, we did a crawl study", people will ask "well, where are you starting, what are your seeds for the crawler?". And it's important to, you know, to be transparent in that respect.

Privacy and anonymity.

Respect for the privacy of research participants, and protection from possible identification, usually achieved through anonymization of data, were the second most mentioned standards for conducting Big Data research. P33, a Swiss sociologist, underlined how "If ever, then privacy has…like it's never been more important than now", since information about individuals is becoming increasingly available through digital technologies, and how institutions now have a responsibility to ensure that such privacy is respected. A Swiss data scientist, P29, described the privacy aspects of their research with social media and how their team is constantly developing strategies to ensure the anonymity of research subjects. They explained:

Yeah, there is a privacy aspect of course, that's the main concern, that you basically…if you’re able to reconstruct like the name of the person and then the age of the person, the address of the person, of course you can link it then to the partner of the person, right? If she or he has, they're sharing the same address. And then you can easily create the story out of that, right? And then this could be an issue but…again, like we try to reapply some kind of anonymization techniques. We have some people working mostly on that. There is a postdoc in our group who is working on anonymization techniques.

Similarly, P6, an American sociologist, underlined that it should become routine practice for every research project to consider and implement measures to protect human participants from possible re-identification:

In the social science world people have to be at least sensitive to the fact that they could be collecting data that allows for the deductive identification of individuals. And that probably…that should be a key focus of every proposal of how do you protect against that.

Challenges introduced by Big Data to research ethics and ethical principles

A substantial number of our researchers, on the other hand, recognized that Big Data research and methods are introducing numerous challenges to the principles and practices they consider fundamental for ethical research, and they reflected upon the limits of the traditional ethical principles.

When discussing informed consent, participants noted that it might not be the main standard to refer to when creating ethical frameworks for research practices, as it can no longer be ensured in much digital research. For instance, P14, an American psychologist, noted:

I think that that the kind of informed consent that we, you know, when we sign on to Facebook or Reddit or Twitter or whatever, you know, people have no idea of what that means and they don't have any idea of what they're agreeing to. And so, you know the idea that that can bear the entire weight of all this research is, I think…I think notification is really important, you can ask for consent but the idea that that can bear the whole weight for allowing people to do whatever/ researchers to do whatever they want, I think it's misguided.

Similarly, P18, an American scholar with a background in data science, felt that although there is still a place for informed consent in the digital era, the practice should be revisited and reconsidered, as it can no longer be applied in the strict sense, for instance when analyzing aggregated databases from which personal identifiers have been removed and it would be impossible to trace individuals back to ask them for consent. Data aggregation is the process of gathering data from multiple sources and presenting them in a summarized format. In the process, data can be stripped of personal identifiers, helping to anonymize the dataset, and analyzing aggregate data should, in theory, not reveal personal information about any individual user (a minimal sketch of this idea follows the quote below). The participant shared:

Certainly, I think there is [space for informed consent in digital research]. And like I said I think we should require people to have informed consent about their data being used in aggregate analysis. And I think right now we do not have informed consent. (…) So, I think again, under the strictest interpretation even to consent to have one’s data involved in an aggregate analysis should involve that. But I don't know, short of that, what would be an acceptable tradeoff or level of treatment. Whether simply aggregating the analysis is good enough and if so what level of aggregation is necessary.
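To make the notion of aggregate analysis concrete, here is a minimal sketch (Python, with entirely hypothetical records and field names, not drawn from the study) of how individual-level data can be reduced to group-level summaries with the direct identifier dropped:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical individual-level records, each tied to a direct identifier.
records = [
    {"user_id": "u001", "age_group": "18-25", "daily_posts": 4},
    {"user_id": "u002", "age_group": "18-25", "daily_posts": 7},
    {"user_id": "u003", "age_group": "26-35", "daily_posts": 2},
    {"user_id": "u004", "age_group": "26-35", "daily_posts": 3},
]

def aggregate_by_group(rows, group_key, value_key):
    """Return only group-level summaries; user_id never leaves this function."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_key]].append(row[value_key])
    return {g: {"n": len(v), "mean": mean(v)} for g, v in groups.items()}

print(aggregate_by_group(records, "age_group", "daily_posts"))
# {'18-25': {'n': 2, 'mean': 5.5}, '26-35': {'n': 2, 'mean': 2.5}}
```

Note that even such summaries can leak information when groups are small (here n = 2 per group), which is precisely the participant's open question about what level of aggregation is "good enough".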

As with consent, many of our participants, while recognizing the importance of privacy and anonymity, also reflected on some of the challenges that Big Data and the digitalization of research are creating for these research standards. First, a few respondents highlighted how in digital research the risk of identifying participants is quite high, as anonymized datasets can almost always be de-anonymized, especially if data are not adequately secured. On this, P1, an American sociologist, explained:

I understand and recognize that there are limits to anonymization. And that under certain circumstances almost every anonymized dataset can be de-anonymized. That's what the research that shows us. I mean sometimes that requires significant effort and then you ask yourself would someone really invest like, you know, supercomputers to solve this problem to de-anonymize…

A Swiss sociologist (P38) described how anonymization practices aimed at protecting the privacy of research participants can, on the other hand, diminish the research value of the data, since anonymization destroys some of the very information the researcher is interested in (illustrated in the sketch after the quote below).

You know, we cannot do much about it. So… there is a tendency now to anonymize the data but basically ehm…anonymization means destruction of information in the data. And sometimes the information that is destroyed is really the information we need…
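A minimal sketch of this trade-off (hypothetical Python, loosely in the style of k-anonymity generalization; all variables invented): coarsening quasi-identifiers protects participants precisely by destroying detail, and the destroyed detail may be the variable a study needs.

```python
# Hypothetical records: ZIP code and exact age are quasi-identifiers that
# could support re-identification, but exact age may be the study variable.
records = [
    {"zip": "4051", "age": 23, "outcome": 1},
    {"zip": "4052", "age": 24, "outcome": 0},
    {"zip": "4058", "age": 61, "outcome": 1},
]

def generalize(row):
    """Coarsen quasi-identifiers: truncate the ZIP code, bucket age by decade."""
    return {
        "zip": row["zip"][:2] + "**",          # '4051' -> '40**'
        "age": f"{(row['age'] // 10) * 10}s",  # 23 -> '20s'
        "outcome": row["outcome"],
    }

anonymized = [generalize(r) for r in records]
# The coarsened age ('20s') can no longer support an analysis that needs
# exact age: the identifying detail and the research information are
# destroyed together.
```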

Moreover, it was also claimed that digital research practices are blurring the line between private and public spaces, creating additional challenges for protecting participants' privacy and for practices of informed consent. A few of our researchers highlighted that research subjects might have an expectation of privacy even in public digital spaces such as social media and public records. In this context, an American sociologist, P9, noted that participants might object to researchers linking together publicly available datasets, as they would prefer the information stemming from this linkage to remain private:

P9USR: Well because the question is…even if you have no expectation of privacy in your Twitter account, you know Twitter is public. And even if you have no expectation of privacy in terms of whether you voted or not, I don't know, in Italy maybe it's a public record whether if you show up at the pool or not. Right? I can go to the city government and see who voted in the last elections right? (…) So…who voted is listed or what political party they're member of is listed, is public information. But you might have expectation of privacy when it comes to linking those data. So even though you don't expect privacy in Twitter and you don't expect privacy in your voting records, maybe you don't like it when someone links those things together.

In addition, P19, a sociologist from the US, noted that merely linking up publicly available data can make research subjects easily identifiable.

However, when one goes to the trouble of linking up some of the aspects of these publicly available sets it may make some individuals identifiable in a way that they haven't been before. Even though one is purely using publicly available data. So, you might say that it kind of falls into an intermediate zone. And raises practical and ethical questions on protection when working with publicly available data. I don't know how many other people you have interviewed who are working in this particular grey zone.

Two of our participants, while describing personal strategies for handling expectations of privacy and consent, discussed the growing blur between private and public spaces and how handling privacy on social media is becoming increasingly context-dependent.

P2USR: So, for example when I study journalists, I assume that their Tweets are public data just because Twitter is the main platform for journalists to kind of present their public and professional accomplishments and so I feel fine kind of using their tweets, like in the context of my research. I will say the same thing, about Facebook data for example. So, some of the journalists kind of… that I interviewed are… are not on Facebook anymore, but at the time we became friends on Facebook and there were postings and I… I wouldn't feel as comfortable, I wouldn't use their Facebook data. I just think that somehow besides the norms of the Facebook platform is that it's more private data, from…especially when it's not a public page so… But it's like… it's fuzzy.

Responsibility and ethical reflection in research

Due to the challenges introduced by digital methods, some of our participants elaborated on their opinions regarding the role of ethical reflection and their responsibility in addressing such challenges in order to ensure the protection of research participants.

Among them, some researchers emphasized the importance of investigators applying ethical standards to perform their research projects appropriately. However, a couple of them recognized that not all researchers have the background and expertise to recognize the ethical issues stemming from their research projects or are adequately familiar with ethical frameworks. On this, P12, an American sociologist, highlighted the importance of education in ethics for research practitioners:

I also want to re-emphasize that I think that as researchers in this field we need to have training in ethics because a lot of the work that we're doing (pause) you know can be on the border of infringing on people’s privacy.

In addition, self-reflection and ethical interrogation about the appropriateness of certain research practices was a theme that emerged quite often during our interviews. For P4, an American psychologist concerned about issues of consent in digital research, it is paramount that investigators begin to ask themselves what type of analysis would be ethically appropriate without participants' explicit consent:

And it is interesting by the way around Big Data because in many cases those data were generated by people who didn't sign any consent form. And they have their data used for research. Even (for the) secondary analysis of our own data the question is: what can you do without consent?

Similarly, P26, a sociologist from Switzerland, reflected upon the difficulties researchers might encounter in evaluating what type of data can be considered unproblematic to collect and analyze, even in digital public spaces like social media:

Even though again, it's often not as clear cut, but I think if people make information public that is slightly different from when you are posting privately within a network and assume that the only people really seeing that are your friends. I see that this has its own limits as well because certain things…well A: something like a profile image I think is always by default public on Facebook…so… there you don't really have a choice to post it privately. I guess your only choice is not to change it ever. And then the other thing is that…I know it because I study (…) internet skills, I know a lot of people are not very skilled. So, there are a lot of instances where people don't realize they're posting publicly. So even if something is public you can't assume people had meant it to be public.

Moreover, P31, a Swiss data scientist, considered reflection on and evaluation of the intent behind a research study important for ethical Big Data research. The researcher recognized that this is difficult to put into practice, as investigators with ill intent might lie about their motivations, and even the noblest intent can lead to negative consequences:

I find it really difficult to answer that. I would say, the first thing that comes to my mind is the evaluation of intent… rather than other technicality. And I think that's a lacking point. But also the reason why I don't give that answer immediately is like…intent is really difficult to probe… and it's probably for some people quite easy to know what is the accepted intent. And then I can of course give you a story that is quite acceptable to you. And also with good intent you can do evil things. So, it's difficult but I would say that discussion about the intent is very important. So that would be maybe for me a minimal requirement. At least in the discussions.

In this context, some scholars also discussed their perceptions of the responsibility for protecting research participants in digital studies and the role investigators play in overcoming ethical issues.

For a few of them it was clear that the responsibility for protecting data subjects should fall on the investigators themselves. For instance, P22, an American sociologist, while discussing the importance of creating an ethical framework for digital research that uses citizens' publicly available data, shared:

So, I do think (the responsibility) it's on researchers (…) and I get frustrated sometimes when people say "well it's not up to us, if they post it there then it's public". It's like well it is up to us, it's literally our job, we do it all day, try to decide, you know, what people want known about them and what people don't. So, we should apply those same metrics here.

However, other researchers also pointed out that the introduction of digital technologies and methods into behavioral research is shifting scholars' perceived responsibility. P16, an American sociologist, shared some concerns regarding the use of sensor devices for behavioral research and reflected on how much responsibility they, as investigators, have in ensuring the data protection of their research subjects, since the data they work with are owned by the company that provided the device for data collection:

There's still seems to be this question about…whether. . . .what the Fitbit corporation is doing with those data and whether we as researchers should be concerned about that. We're asking people to wear Fitbits for a study. Or whether that's just a separate issue. And I don't know what the answer to that is, I just know that it seems like the type of question that it's going to come up over and over and over again.

On a similar note, P14, an American psychologist, noted that while researchers have a responsibility to prevent harm that might derive from data research, this responsibility should be shared in part with data holders. They claimed:

Do I think that the holders of data have a responsibility to try to you know, try to prevent misuse of data? Yeah, I think they probably do. (…) I think there is a notion of stewardship there. Then I think that investigators also have an independent obligation to make sure to think about the data they're analyzing and trying to get and think about what they're using it for. So not to use data in order to harm other people or those kinds of things.

Finally, a few participants hinted that research ethics boards such as Institutional Review Boards (IRBs) and Ethics Committees (ECs) should take on greater responsibility for ensuring that investigators actually perform their research ethically. For instance, P16, an American sociologist, complained that IRBs do not provide adequate follow-up to ensure that researchers are appropriately following the approved research protocols:

There does seem to be kind of a big gap even in the existing system. Which is that a researcher proposes a project, the IRB hopefully works with the researcher and the project gets approved and there's very little follow-up and very little support for sort of making sure that the things that are laid out at the IRB actually in the proposal and the project protocol actually happen. And not that I don't believe that most researchers have good intensions to follow the rules and all of that but there are so many of kind of different projects and different pressures that things can slip by and there's… there's nobody.

Discussion

As Big Data methodologies become widespread in research, it is important to reach international consensus on whether and how traditional principles of research ethics, such as those described in the Belmont Report, remain relevant to the new ethical questions introduced by Big Data and internet research [ 22 , 23 ]. Our study offers a relevant contribution to this debate, as it investigated the methodological strategies and codes of ethics that researchers from different jurisdictions (Swiss and American investigators) apply in their Big Data research projects. It is interesting to note that, despite regional differences, participants shared very similar ethical priorities. This might be due to the international nature of academic research, in which scholars share similar codes of ethics and apply similar strategies for the protection of research participants.

Our results show that, in their codes of conduct, researchers mainly referred to the traditional ethical principles enshrined in the Belmont Report and the Declaration of Helsinki: respect for persons in the practice of informed consent, beneficence, minimization of harm through protection of privacy and anonymization, and justice. This finding shows that such principles are still considered relevant in the behavioral sciences for addressing the ethical issues of Big Data research, despite the critique of some that rules designed for medical research cannot be applied to sociological research [ 36 ]. Even before the advent of Big Data, the practical implementation of the Belmont Report principles was never an easy endeavor, as they were originally conceived to be flexible in order to accommodate a wide range of research settings and methods. However, it has been argued that precisely this flexibility makes them the ideal framework in which investigators can "clarify trade-offs, suggest improvements to research designs, and enable researchers to explain their reasoning to each other and the public" in digital behavioral research [ 2 ].

Our study shows that scholars still place great importance on the practice of informed consent. They considered it crucial that participants are appropriately notified of their research participation, are adequately informed about at least some of the details and procedures of the study, and are given the possibility to withdraw at any point in time. A recent study, however, has highlighted that there is currently no consensus among investigators on how to collect meaningful informed consent from participants in digital research [ 37 ]. Similarly, a few researchers in our study recognized that consent, although preferable in theory, might not be the most adequate practice to refer to when designing ethical frameworks. In the era of Big Data behavioral research, informed consent becomes an extremely complex practice that is intrinsically dependent on the context of the study and the type of Big Data used. For instance, in certain behavioral studies that analyze tracking data from devices belonging to a limited number of participants, it would be feasible to ask for consent before the study begins. However, recombination and reanalysis of the data, possibly across ecosystems far removed from the original source of the data, makes it very difficult to fully inform participants about the range of uses to which their data will be put, the type of information that could emerge from analysis of the data, and the unforeseeable harms that disclosure of such information could cause [ 38 ]. In online studies and internet-mediated research, consent often amounts to an agreement to unread terms of service or a vague privacy policy provided by digital platforms [ 18 ]. Sometimes valid informed consent is not even required by official guidelines when the analyzed data can be considered 'in the public domain' [ 39 ], leaving participants unaware that research is being performed on their data. It has been argued, however, that researchers should not simply assume that public information is freely available for collection and research just because it is public. Researchers should take into consideration what the subject might have intended or desired regarding the use of their data for research purposes [ 40 ]. By the same token, we can also argue that even when information is harvested with consent, the subject might a) not wish for their data to be analyzed or reused outside the purview of the original research purpose, and b) fail to understand the extent of the information that analysis of the dataset might reveal about them.

Matzner and Ochs argue that practices of informed consent "are widely accepted since they cohere with notions of the individual that we have been trained to adopt for several centuries" [ 41 ]; however, they also emphasize that such notions are being altered and challenged by the openness and transience of data analytics, which prevent us from continuing to consider the subject and the researcher within a self-contained dynamic. Since respect for persons, in the form of informed consent, is just one of the principles that needs to be balanced when considering research ethics [ 42 ], it becomes of utmost importance to find the right balance between the perceived necessity of ensuring consent from participants and the reality that such consent is sometimes impossible to obtain properly. Salganik [ 2 ], for instance, suggests that in the context of digital behavioral research, rather than "informed consent for everything", researchers should follow a more complex rule: "some form of consent for most things". This means that, where informed consent is required, it should be evaluated on a case-by-case basis whether consent is a) practically feasible and b) actually necessary. This practice might, however, leave too much to the discretion of investigators, who might not have the skills to appropriately evaluate the ethical facets of their research projects [ 43 ].

Next to consent, participants in our study also argued in favor of giving participants more control over their own data. In recent years, in fact, it has been argued that individuals often lack the means to manage, protect and delete their data [ 20 , 28 ]. Strategies of dynamic consent could be considered a potential tool to address ethical issues related to consent in Big Data behavioral research. Dynamic consent models, in which online tools let individuals engage in decisions about how their personal information is used and allow them some degree of control over their data, have so far been developed mainly for biomedical Big Data research [ 44 , 45 ]. Additional research could investigate whether such models can be translated and applied to behavioral digital research as well.

Strictly linked to consent is the matter of privacy. Many researchers underlined the importance of respecting the privacy and anonymity of research participants to protect them from possible harm. At the same time, they recognized the many challenges this practice entails. They highlighted the difficulty of ensuring complete anonymity of the data and of preventing the re-identification of participants in Big Data research, especially since a high level of anonymization can destroy information essential to the research project. The appropriate trade-off between ensuring maximum anonymization for participants and maintaining the quality of the dataset is still hotly debated [ 12 ], and growing research in data science strives to develop data models that ensure maximum protection for participants [ 46 ]. On the other hand, our participants also referred to the current debate surrounding the private nature of personal data as opposed to publicly available data, and to how Big Data and digital technologies are blurring the line between the private and public spheres. Some respondents expressed concerns or reservations about the analysis of publicly available data, especially without informed consent, as it could still be considered an infringement of participants' privacy and could also cause them harm. This shows that researchers are well aware of the problems of treating privacy as a binary concept (private vs. public data) and that they are willing to reflect upon strategies to protect the identity of participants even when handling publicly available data. According to Zook et al. [ 47 ], breaches of privacy are the main means by which Big Data can do harm, as they might reveal sensitive information about people. Besides the already mentioned "Tagging Banksy" project [ 17 ], another distressing example occurred in 2013, after the New York City Taxi & Limousine Commission released an anonymized dataset of 173 million individual cab rides, including pickup and drop-off times, locations, fares and tip amounts. Researchers who freely accessed this database showed how easy it was to process the dataset so that it revealed private information about the taxi drivers, such as their religious beliefs, average income and even an estimate of their home addresses [ 48 ]. It therefore becomes increasingly crucial that investigators in the behavioral sciences recognize that privacy is contextual and situational and changes over time, as it depends on multiple factors such as the context in which the data were created and obtained and the expectations of those whose data are used [ 2 , 47 , 49 , 50 ]. For instance, as reported by one of our respondents, users might not have expectations of privacy about certain publicly available information taken singly or separately (e.g. social media posts and voter records), but they might have privacy concerns about the information that the linkage of these data might reveal (e.g. who they voted for). This difficulty, if not impossibility, of defining a single universal norm or rule for protecting privacy again shows the intrinsic context dependency of Big Data studies, and highlights how researchers are increasingly called to evaluate their decisions critically on a case-by-case basis rather than by blindly applying a common rule.
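To illustrate why the linkage, rather than either dataset alone, carries the privacy risk, here is a hedged sketch (Python, with entirely invented records) in which two individually "public" tables, joined on a shared name, reveal a combination that neither table reveals by itself:

```python
# Two hypothetical, individually "public" datasets.
social_media = [
    {"name": "A. Rossi", "handle": "@arossi", "city": "Basel"},
]
voter_rolls = [
    {"name": "A. Rossi", "voted_2020": True, "party_register": "Party X"},
]

# A naive join on the shared name: neither dataset alone ties the online
# persona to a party registration, but the linked record does.
linked = [
    {**s, **v}
    for s in social_media
    for v in voter_rolls
    if s["name"] == v["name"]
]
print(linked[0])
# {'name': 'A. Rossi', 'handle': '@arossi', 'city': 'Basel',
#  'voted_2020': True, 'party_register': 'Party X'}
```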

As new methods of data collection and analysis in the behavioral sciences create controversy, and as appropriately balancing and evaluating ethical principles becomes a source of difficult decisions for researchers [2], our participants underlined the importance of ethical reflection and education for the appropriate development of research projects. They also recognized that investigators are called to reflect critically on the design of their studies and the consequences these might have for research participants [51]. However, as claimed by one of our participants, not all researchers, especially those coming from more technical disciplines like data science, have the expertise and tools to think proactively about ethical issues when designing a research project [22], and they might need additional guidance. We therefore argue that education in ethics, together with exchange and dialogue between research teams and scholars from different disciplines, must be implemented. As suggested by Zook et al. [47], discussion and debate of ethical issues are an essential part of establishing a community of ethical practitioners, and integrating ethical reflection into coursework and training can enable a greater number of scholars to raise appropriate ethical questions when reviewing or developing a project.

Within the current discussion, we have seen how context dependency, although never spelled out explicitly by our participants, becomes a major theme in the debate over ethical practices in Big Data studies. Our results have in fact highlighted that a one-size-fits-all approach to research ethics, or a definite overarching set of norms or rules to protect research participants, is not adequate to handle the multifaceted ethical issues of Big Data. The context-dependent nature of some of the ethical challenges of Big Data studies, such as consent and privacy, might require a higher level of flexibility together with a more situational and dialogic approach to research ethics [23]. For instance, the Association of Internet Researchers (AoIR), in the development of their Ethical Guidelines for Internet Research, agrees that the adequate process approach for ethical internet research is one that is reflective and dialogical, "as it begins with reflection on own research practices and associated risks and is continuously discussed against the accumulated experience and ethical reflections of researchers in the field and existing studies carried out" [52]. As a consequence, we argue that applying context-specific assessments increases the chances of solving ethical issues and appropriately protecting research participants [53]. Many authors in the field are thus promoting methodological approaches that focus on contextually driven decision-making for Big Data research. Zimmer, for example, suggests applying contextual integrity's decision heuristic to different research studies to assess the ethical impact of a study on the privacy of its participants and consequently overcome the conceptual gaps left by the Belmont Report for Big Data research ethics [50]. Similarly, Steinmann et al. [53] provide a heuristic tool in the form of a "privacy matrix" to assist researchers in the contextual assessment of their research projects.

But what should drive investigators' ethical reflection and decision-making? Despite the multifaceted challenges introduced by Big Data and digital research, we argue that the principles stated in the Belmont Report can still be considered valuable guidance for academic investigators. As argued by Rothstein [28], we believe Big Data exceptionalism is not a viable option and that new challenges should not serve as a catalyst for abandoning foundational principles of research ethics. This is in line with the current best practices suggested by institutional bodies like the American Psychological Association (APA), which claims that the core ethical principles set by the Belmont Report should be expanded to address the risks and benefits of today's data [6]. Numerous research groups are striving towards the design of ethical frameworks for Big Data research that stay true to the foundational principles of research ethics but at the same time accommodate the needs and changes introduced by Big Data methods. Steinmann et al. [53], for instance, suggest considering five principles (non-maleficence, beneficence, justice, autonomy, and trust) as a well-defined pluralism of values that, by having clear and direct utility in designating practical strategies for protecting privacy, should guide researchers in the evaluation of their research projects. Xafis et al. [38], in developing an ethical framework for biomedical Big Data research, provide a set of 16 values relevant to many Big Data uses, divided into substantive values (such as justice, public benefit, solidarity, or minimization of harm) and procedural values (accountability, consistency, transparency, and trustworthiness), that investigators should use to identify and solve ethical issues within their research projects. Vitak et al. [22] recommend implementing the principle of transparency, intended as a flexible principle that applies to ethical components related both to the intent of research (what you are doing with the data and why) and to practice (how you are getting the data, i.e., informed consent disclosing purpose and potential use, and how you are processing the data, i.e., data anonymity). According to some of our participants, enhanced transparency in research practices would also be beneficial on different levels. First, it would help participants trust the research system and minimize their worries about taking part in research studies; in addition, enhanced transparency between research teams would help build up the knowledge needed to face the ethical issues that emerge in heterogeneous research projects. Although the principle of transparency is becoming increasingly embedded in research practices as something highly recommended, there is still some uncertainty about how this principle would actually translate into practice in order to overcome the challenges posed to ethical practices like consent. At the moment, much of the debate on transparency focuses on the implementation of algorithmic transparency with Big Data [54]; more research should focus on how to put research transparency into practice.

Finally, a very relevant theme that our participants reflected upon, and one that is rarely addressed by the current literature on Big Data studies, was the topic of responsibility. Some of our respondents in fact asked themselves whether the introduction of digital technologies and methods implies a shift of responsibility in protecting research participants. Although all those who discussed responsibility admitted that at least part of it should definitely fall on investigators themselves, some pointed out that other actors involved in Big Data research, such as data holders and data owners in the case of corporate data, could share some of this responsibility. Digital research has in fact changed the traditional research subject/investigator dynamic [18] by introducing other actors into the process (social media platforms, private firms, etc.), and it therefore raises ethical challenges that researchers do not always have the necessary skills to anticipate or face [25, 43]. To the best of our knowledge, this aspect of responsibility has not yet entered the ethics debate. This might be due to the practical difficulties that such a debate would necessarily imply, such as communication, coordination and compromise between stakeholders with very different goals and interests at stake [55, 56]. However, our results show that there are relevant questions and issues that should be further addressed, such as: who should bear the responsibility of protecting the research subject in Big Data studies? How much should data owners, data holders, ethics committees and even users be involved in sharing such responsibility? We believe that academic investigators should not bear all the responsibility for the ethical design of research projects alone, or singularly confront the ethical implications of digital research [57]. At the moment, models of consultancy between ethics committees and researchers are advocated to help investigators foresee ethical issues [25, 43]. These models, together with the implementation of sustainable and transparent collaborations and partnerships with data holders and owners [58], could support the creation of appropriate paradigms of shared responsibility that could play a significant role in the development of ethically sound research projects.

Limitations

First, since our respondents were mainly from the fields of psychology and sociology, the study might have overlooked the perspectives of other fields relevant to human subject research that make use of Big Data methodologies (e.g., medicine, nursing sciences, geography, urban planning, computer science, linguistics, etc.). In addition, the findings of this study are based on a small sample of researchers from only two countries that share similar ethical norms and values. For these reasons, the findings from this analysis are not globally generalizable. Future research that takes into account additional disciplines and different countries might contribute to a more comprehensive understanding of the opinions and attitudes of researchers. Finally, a limitation must be acknowledged regarding the definition of Big Data used for this study. Using Big Data as an umbrella term prevented us from undertaking a more nuanced analysis of the different types of data used by our participants and their specific characteristics (for instance, the different ethical challenges posed by online social media data as compared to sensor data obtained with the consent of the participants). In our discussion we referred to the contextual dependency of the ethical issues of Big Data and the necessity of a continuous ethical reflection that assesses the specific nuances of the different types of Big Data in heterogeneous research projects. However, we have already recognized the risks of conceptualizing Big Data as a broad overarching concept [33]. As a consequence, we believe that future research on Big Data ethics will benefit from a deconstruction of the term into its different constituents in order to provide a more nuanced analysis of the topic.

Conclusion

This study investigated the code of ethics and the research strategies that researchers apply when performing Big Data research in the behavioral sciences, and it illustrated some of the challenges scholars encounter in practically applying ethical principles and practices. Our results point out that researchers find the traditional principles of the Belmont Report to be a suitable guide for performing ethical data research. At the same time, they also recognized that Big Data methods and practices are increasingly challenging these principles. Consent and protection of privacy were still considered paramount practices in research. However, they were also considered the most challenged practices, since the digitalization of research has blurred the boundary between public and private and made obtaining consent from participants impossible in certain cases.

Based on the results and discussion of our study, we suggest three key items that future research and policymaking should focus on:

  • Development of research ethics frameworks that stay true to the principles of the Belmont Report but also accommodate the context dependent nature of the ethical issues of Big Data research;
  • Implementation of education in ethical reasoning and training in ethics for investigators from diversified curricula: from social science and psychology to more technical fields such as data science and informatics;
  • Design of models of consultancy and shared responsibility between the different stakeholders involved in the research endeavor (e.g. investigators, data owners and review boards) in order to enhance protection of research participants.

Supporting information

S1 File. Interview guide.

Semi-structured interview guide illustrating the main questions and themes that the researchers asked the participants (questions relevant for this study are highlighted in yellow).

https://doi.org/10.1371/journal.pone.0241865.s001

References

  • 2. Salganik MJ. Bit by bit: Social research in the digital age. Princeton: Princeton University Press; 2019.
  • 6. Paxton A. The Belmont Report in the age of Big Data: Ethics at the intersection of psychological science and data science. In: Woo SE, Tay L, Proctor RW, editors. Big data methods for psychological research: New horizons and challenges. American Psychological Association; 2020.
  • 16. Hargittai E. Whose data traces, whose voices? Inequality in online participation and why it matters for recommendation systems research. In: Proceedings of the 13th ACM Conference on Recommender Systems; 2019.
  • 22. Vitak J, Shilton K, Ashktorab Z. Beyond the Belmont principles: Ethical challenges, practices, and beliefs in the online data research community. In: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing; 2016.
  • 24. Markham A, Buchanan E. Ethical decision-making and internet research: Version 2.0. Recommendations from the AoIR ethics working committee; 2012. Available from: aoir.org/reports/ethics2.pdf
  • 26. Research with human subjects: A manual for practitioners. Bern: Swiss Academy of Medical Sciences (SAMS); 2015.
  • 30. National Science Foundation. Core Techniques and Technologies for Advancing Big Data Science & Engineering (BIGDATA) (NSF-12-499); 2012. Available from: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf12499 (Accessed July 2019).
  • 31. National Science Foundation. Critical Techniques and Technologies for Advancing Big Data Science & Engineering (BIGDATA) (NSF-14-543); 2014. Available from: https://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf14543&org=NSF (Accessed July 2019).
  • 32. National Institutes of Health. Big Data to Knowledge; 2019. Available from: https://commonfund.nih.gov/bd2k (Accessed November 19, 2019).
  • 34. Guest G, MacQueen KM, Namey EE. Applied thematic analysis. Sage Publications; 2011.
  • 37. Shilton K, Sayles S. "We aren't all going to be on the same page about ethics": Ethical practices and challenges in research on digital and social media. In: 2016 49th Hawaii International Conference on System Sciences (HICSS). IEEE; 2016.
  • 39. British Psychological Society. Ethics guidelines for internet-mediated research; 2017. Available from: www.bps.org.uk/publications/policy-and-guidelines/research-guidelines-policy-documents/research-guidelines-poli (Accessed September 2020).
  • 41. Matzner T, Ochs C. Sorting things out ethically: Privacy as a research issue beyond the individual. In: Zimmer M, Kinder-Kurlanda K, editors. Internet Research Ethics for the Social Age. Oxford: Peter Lang; 2017.
  • 48. Franceschi-Bicchierai L. Redditor cracks anonymous data trove to pinpoint Muslim cab drivers; 2015. Available from: https://mashable.com/2015/01/28/redditor-muslim-cab-drivers/#0_uMsT8dnPqP (Accessed June 2020).
  • 49. Nissenbaum H. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press; 2009.
  • 51. Goel V. As data overflows online, researchers grapple with ethics; 2014. Available from: https://www.nytimes.com/2014/08/13/technology/the-boon-of-online-data-puts-social-science-in-a-quandary.html (Accessed May 2020).
  • 52. Franzke AS, Bechmann A, Zimmer M, Ess C, and the Association of Internet Researchers. Internet research: Ethical guidelines 3.0; 2019. Available from: https://aoir.org/reports/ethics3.pdf (Accessed July 2020).
  • 53. Steinmann M, Matei SA, Collmann J. A theoretical framework for ethical reflection in big data research. In: Collmann J, Matei SA, editors. Ethical Reasoning in Big Data. Switzerland: Springer; 2016. p. 11–27.
  • 54. Rader E, Cotter K, Cho J. Explanations as mechanisms for supporting algorithmic transparency. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems; 2018.

  • Original article
  • Open access
  • Published: 13 July 2021

Assisting you to advance with ethics in research: an introduction to ethical governance and application procedures

  • Shivadas Sivasubramaniam 1 ,
  • Dita Henek Dlabolová 2 ,
  • Veronika Kralikova 3 &
  • Zeenath Reza Khan 3  

International Journal for Educational Integrity, volume 17, Article number: 14 (2021)


Abstract

Ethics and ethical behaviour are the fundamental pillars of a civilised society. The focus on ethical behaviour is indispensable in certain fields such as medicine, finance, or law. In fact, ethics takes precedence in anything that would include, affect, transform, or influence individuals, communities or any living creatures. Many institutions within Europe have set up their own committees to focus on, or approve, activities that have an ethical impact. In contrast, less developed countries (worldwide) are still trying to set up such committees to govern their academia and research. As the first European consortium established to promote academic integrity, the European Network for Academic Integrity (ENAI), we felt the importance of guiding those institutions and communities that are trying to conduct research according to ethical principles. We have established an ethical advisory working group within ENAI with the aim of promoting ethics within curricula, research and institutional policies. We are constantly researching available data on this subject and are committed to helping academia convey and uphold ethical behaviour. Upon preliminary review and discussion, the group found a disparity among peers in the understanding, practice and teaching approaches to the ethical aspects of research projects. Therefore, this short paper aims to critically review the available information on ethics, the history behind the establishment of ethical principles and the international guidelines that govern research.

The paper is based on a workshop conducted at the 5th International Conference Plagiarism across Europe and Beyond, held at Mykolas Romeris University, Lithuania, in 2019. During the workshop, we detailed a) the basic needs of an ethical committee within an institution; b) a typical ethical approval process (with examples from three different universities); and c) ways to obtain informed consent, with some examples. These are summarised in this paper together with some comparisons of ethical approval processes from different universities. We believe this paper will provide guidelines for preparing and training both researchers and research students to appropriately uphold ethical practices through ethical approval processes.

Introduction

Ethics and ethical behaviour (often linked to "responsible practice") are the fundamental pillars of a civilised society. Ethical behaviour with integrity is important for maintaining academic and research activities. It affects everything we do and takes precedence in anything that would include, affect, transform, or impact individuals, communities or any living creatures. In other words, ethics helps us improve our living standards (LaFollette, 2007). The focus on ethical behaviour is indispensable in certain fields such as medicine, finance, or law, but it is also gaining recognition in all disciplines engaged in research. Institutions are therefore expected to develop ethical guidelines for research in order to maintain quality, foster integrity and, above all, be transparent, thereby limiting any allegation of misconduct (Flite and Harman, 2013). This is especially true for higher education organisations that promote research and scholarly activities. Many European institutions have developed their own regulations for ethics by incorporating international codes (Getz, 1990). Less developed countries are still trying to set up such committees to govern their academia and research. The World Health Organization has stated that adhering to "ethical principles … [is central and important]... in order to protect the dignity, rights and welfare of research participants" (WHO, 2021). Ethical guidelines taught to students can help develop ethical researchers and members of society who uphold the values of ethical principles in practice.

As the first Europe-wide consortium established to promote academic integrity (the European Network for Academic Integrity – ENAI), we felt the importance of guiding those institutions and communities that are trying to teach, research and embed ethical principles, by providing an overarching understanding of the ethical guidelines that may influence policy. Therefore, in 2018 we set up an advisory working group within ENAI to support matters related to ethics, ethical committees and ethics-related teaching activities.

Upon preliminary review and discussion, the group found a disparity among peers in the understanding, practice and teaching approaches to the ethical aspects of research. This became the premise for this paper. We first carried out a literature survey to review and summarise existing ethical governance (with historical perspectives) and the procedures already in place to guide researchers in different discipline areas. In doing so, we attempted to consolidate, document and set out the important steps in a typical ethical application process, with example procedures from different universities. Finally, we provide insights and findings from the practical workshop carried out at the 5th International Conference Plagiarism across Europe and Beyond, at Mykolas Romeris University, Lithuania, in 2019, focussing on:

• highlighting the basic needs of an ethical committee within an institution,

• discussing and sharing examples of a typical ethical approval process,

• providing guidelines on the ways to teach research ethics with some examples.

We believe this paper provides guidelines on preparing and training both researchers and research students in appropriately upholding ethical practices through ethical approval processes.

Background literature survey

Responsible research practice (RRP) is framed by ethical principles and professional standards (WHO's Code of Conduct for Responsible Research, 2017). The Singapore Statement on Research Integrity (2010) has provided internationally accepted guidance for RRP. The statement is based on maintaining honesty, accountability and professional courtesy in all aspects of research, and on maintaining fairness during collaborations. In other words, it does not simply focus on the procedural part of research but covers wider aspects of "integrity" beyond the operational aspects (Israel and Drenth, 2016).

Institutions should focus on providing ethical guidance based on principles and values that reflect on all aspects and stages of research (from the funding application/project development stage up to and beyond the project closing stage). Figure 1 summarizes the different aspects/stages of a typical research project and highlights the need for RRP in compliance with ethical governance at each stage, with examples (the figure is based on Resnik, 2020; Žukauskas et al., 2018; Anderson, 2011; Fouka and Mantzorou, 2011).

Figure 1. Summary of the enabling ethical governance at different stages of research. Note that it is imperative for researchers to proactively consider the ethical implications before, during and after the actual research process. The summary shows that RRP should be in line with ethical considerations even long before the ethical approval stage.

Individual responsibilities to enhance RRP

As explained in Fig. 1, successfully governed research should consider ethics at the planning stages, prior to the research itself. Many international guidelines are consistent in enforcing or recommending the 14 "responsibilities" first highlighted in the Singapore Statement (2010) for researchers to follow in order to achieve competency in RRP. To understand the purpose and the expectations of these ethical guidelines, we carried out an initial literature survey on the expected individual responsibilities. These are summarised in Table 1.

By following these directives, researchers can carry out accountable research, maximising ethical self-governance whilst minimising misconduct. In our own experience of working with many researchers, their focus usually revolves around ethical "clearance" rather than behaviour. In other words, they perceive ethics as a paper exercise rather than trying to "own" ethical behaviour in everything they do. Although ethical principles and responsibilities are explicitly highlighted in the majority of international guidelines [such as the UK's Research Governance Policy (NICE, 2018), the Australian Government's National Statement on Ethical Conduct in Human Research (Difn website a - NSECHR, 2018), the Singapore Statement (2010), etc.], and although the importance of a holistic approach to ethical decision-making has been argued, many researchers and institutions focus only on the ethics linked to the procedural aspects.

Past studies have also highlighted inconsistencies in institutional guidelines, pointing out that these inconsistencies may hinder expected research progress (Desmond & Dierickx 2021; Alba et al., 2020; Dellaportas et al., 2014; Speight 2016). It is also possible that these inconsistencies were, and still are, linked to institutional perceptions and expectations, or to the contextual conditions imposed by individual countries. In fact, it is interesting to note that many research organisations and HE institutions establish their own policies based on these directives.

Research governance - origins, expectations and practices

Ethical governance in clinical medicine provides a structure for analysis and decision-making. By providing workable definitions of benefits and risks, as well as guidance for evaluating and balancing benefits against risks, it supports researchers in protecting the participants and the general population.

According to the definition given by the UK's National Institute for Health and Care Excellence (NICE, 2018), "research governance can be defined as the broad range of regulations, principles and standards of good practice that ensure high quality research". As stated above, our literature survey showed that most ethical definitions essentially evolved from the medical field, and other disciplines have utilised these principles to develop their own ethical guidance. Interestingly, historical data show that medical research used to be "self-governed", in other words guided only by the moral behaviour of individual researchers (Fox 2017; Shaw et al., 2005; Getz, 1990). For example, early human vaccination trials conducted in the 1700s used the researchers' immediate family members as test subjects (Fox, 2017). Here the moral justification might have been the fact that those at risk were either the scientists themselves or their immediate families, whereas those who would reap the benefits of the vaccination were the general public and wider communities. According to current ethical principles, however, this assumption is entirely unacceptable.

Historically, ambiguous decision-making and the resultant incidences of research misconduct led to the need for ethical research governance as early as the 1940s. For instance, the importance of international governance was realised only after World War II, when people were astonished by the unethical research practices carried out by Nazi scientists. As a result, the Nuremberg Code was published in 1947. The code mainly focussed on the following:

• informed consent, with the insistence that research involving humans should be based on prior animal work;

• the anticipated benefits should outweigh the risks;

• research should be carried out only by qualified scientists;

• physical and mental suffering should be avoided; and

• human research that would result in death or disability should be avoided.

(Weindling, 2001).

Unfortunately, it was reported that many researchers in the USA and elsewhere considered the Nuremberg Code a document condemning the Nazi atrocities rather than a code for ethical governance, and therefore ignored its directives (Ghooi, 2011). It was only in 1964 that the World Medical Association published the Helsinki Declaration, which set the stage for ethical governance and the implementation of the Institutional Review Board (IRB) process (Shamoo and Irving, 1993). This declaration was based on the Nuremberg Code and, in addition, paved the way for enforcing that research be conducted in accordance with these guidelines.

The focus on research and ethical governance gained momentum in 1974. As a result, a report on the ethical principles and guidelines for the protection of human subjects of research was published in 1979 (The Belmont Report, 1979). This report paved the way for the current forms of ethical governance in biomedical and behavioural research by providing guidance.

Since 1994, the WHO itself has provided extensive guidance to health care policy-makers, researchers and other stakeholders detailing the key concepts of medical ethics, specifically the application of ethical principles in global public health.

Likewise, the World Organization for Animal Health (WOAH) and the International Convention for the Protection of Animals (ICPA) provide guidance on animal welfare in research. Thanks to this continuous guidance, together with accepted practices, there are internationally established ethical guidelines for carrying out medical research. Our literature survey further identified freely available guidance from independent organisations such as COPE (Committee on Publication Ethics) and ALLEA (All European Academies), which provide support for maintaining research ethics in other fields such as education, sociology and psychology. In reality, ethical governance is practiced differently in different countries. In the UK, clinical research governance oversees all NHS-related medical research (Mulholland and Bell, 2005). Although governance in other disciplines is not entirely centralised, many research funding councils and organisations [such as UKRI (UK Research and Innovation), BBSRC (Biotechnology and Biological Sciences Research Council), MRC (Medical Research Council) and ESRC (Economic and Social Research Council)] provide ethical governance and expect institutional adherence and monitoring. They expect local institutional (i.e., university) research governance for day-to-day monitoring of the research conducted within the organisation, with monthly or annual reports back to these funding bodies (Department of Health, 2005). Likewise, there are nationally coordinated and regulated ethics governing bodies such as the US Office for Human Research Protections (US-OHRP) and the National Institutes of Health (NIH) in the USA, and the Canadian Institutes of Health Research (CIHR) in Canada (Mulholland and Bell, 2005). The OHRP in the USA formally reviews all research activities involving human subjects. In Canada, on the other hand, CIHR works with the Natural Sciences and Engineering Research Council (NSERC) and the Social Sciences and Humanities Research Council (SSHRC); together they have produced the Tri-Council Policy Statement (TCPS) (Stephenson et al., 2020) as ethical governance. All Canadian institutions are expected to adhere to this policy when conducting research. As for Australia, research is governed by the Australian Code for the Responsible Conduct of Research (2008), which identifies the responsibilities of institutions and researchers in all areas of research. The code was jointly developed by the National Health and Medical Research Council (NHMRC), the Australian Research Council (ARC) and Universities Australia (UA). This information is summarized in Table 2.

Basic structure of an institutional ethical advisory committee (EAC)

The WHO published an article defining the basic concepts of an ethical advisory committee in 2009 (WHO, 2009 - see above). According to this, many countries have established research governance and monitor ethical practice in research via national and/or regional review committees. The main aims of research ethics committees include reviewing study proposals, understanding the justifications for human/animal use, weighing the merits and demerits of such use (linking risks to potential benefits) and ensuring that local ethical guidelines are followed (Difn website b - Enago Academy, Importance of Ethics Committees in Scholarly Research, 2020; Guide for Research Ethics - Council of Europe, 2014). Once the research has started, the committee needs to carry out periodic surveillance to ensure that institutional ethical norms are followed during and beyond the study. It may also be involved in setting up and/or reviewing institutional policies.

For these reasons, an IRB (or institutional ethical advisory committee - IEAC) is essential for local governance to enhance best practices. The advantage of an IRB/IEAC is that it understands the institutional conditions and can closely monitor the ongoing research, including any changes in research direction. On the other hand, an IRB may be overly inclined to accept applications, influenced by the local agenda of achieving research excellence at the expense of ethical issues (Kotecha et al., 2011; Kayser-Jones, 2003), or it may be influenced by the financial interest of attracting external funding. In this respect, regional and national ethics committees are advantageous for ensuring ethical practice. Due to their impartiality, they provide greater consistency and legitimacy to the research (WHO, 2009). However, the approval process of regional and national ethics committees can be time-consuming, as they do not have the local knowledge.

As for membership of IRBs, most of the guidelines [WHO, NICE, Council of Europe (2012), European Commission - Facilitating Research Excellence in FP7 (2013) and OHRP] insist on a variety of representation, including experts in different fields of research and non-experts with an understanding of local, national and international conflicts of interest. The former are able to understand and clarify the procedural elements of the research in different fields, whilst the latter help to make neutral and impartial decisions. These non-experts are usually not affiliated to the institution and consist of individuals representing the broader community (particularly in relation to social, legal or cultural considerations). IRBs with this variety of representation are not only in a position to understand the study procedures and their potential direct or indirect consequences for participants, but are also able to identify any community, cultural or religious implications of the study.

Understanding the subtle differences between ethics and morals

Interestingly, many ethical guidelines are based on society's moral "beliefs", to the extent that the words "ethics" and "morals" are used reciprocally to define each other. However, there are several subtle differences between them, which we attempt to compare and contrast here. In the past, many authors have used the words "morals" and "ethics" interchangeably (Warwick, 2003; Kant, 2018; Hazard, 1994; Larry, 1982). However, ethics is linked to rules governed by an external source, such as codes of conduct in workplaces (Kuyare et al., 2014). In contrast, morals refer to an individual's own principles regarding right and wrong. Quinn (2011) defines morality as "rules of conduct describing what people ought and ought not to do in various situations …" while ethics is "... the philosophical study of morality, a rational examination into people's moral beliefs and behaviours". For instance, in a case where parents demanded that schools overturn a ban on the use of corporal punishment of children by schools and teachers (Children's Rights Alliance for England, 2005), the parents believed that teachers should assume the role of the parent in school and use corporal or physical punishment on children who misbehaved. This position stemmed from the beliefs of those individuals or groups, that is, from their morals. Similarly, recent media reports have highlighted some parents' opposition to LGBT (Lesbian, Gay, Bisexual, and Transgender) education for their children (BBC News, 2019). One parent argued that teaching young children about LGBT at a very early stage is "morally" wrong and that children should be left to "learn by themselves as they grow". This behaviour is linked to, and governed by, the morals of an ethnic community. Thus, morals are linked to the beliefs of individuals or groups, whereas LGBT rights, like the right of children to be protected from "inhuman and degrading" treatment, are based on the ethical principles of society and governed by the law of the land. Individuals, especially those working in the medical or judicial professions, have to follow the ethical code laid down by their profession, regardless of their own feelings or preferences. For instance, a lawyer is expected to follow professional ethics and represent a defendant, even when their own morals indicate that the defendant is guilty.

In fact, we as a group could not find many scholarly articles clearly comparing or contrasting ethics with morals. However, a table presented by Surbhi ( 2015 ) (Difn website c ) tries to differentiate these two terms (see Table  3 ).

Although Table 3 gives some insight into the differences between these two terms, in practice many people use them loosely, mainly because of their ambiguity. As a group focussed on the application of these principles, we recommend using the term "ethics" and avoiding "morals" in research and academia.

Based on the literature survey carried out, we were able to identify the following gaps:

• there is some disparity in the existing literature on the importance of ethical guidelines in research;

• there is a lack of consensus on what code of conduct should be followed, where it should be derived from and how it should be implemented.

The mission of ENAI’s ethical advisory working group

The Ethical Advisory Working Group of ENAI was established in 2018 to promote an ethical code of conduct and practice amongst higher education organisations within Europe and beyond (European Network for Academic Integrity, 2018). We aim to provide unbiased advice and consultancy on embedding ethical principles within all types of academic, research and public engagement activities. Our main objective is to promote ethical principles and share good practice in this field. The advisory group aims to standardise ethical norms and to offer strategic support to activities including (but not limited to):

● rendering advice and assistance to develop institutional ethical committees and their regulations in member institutions,

● sharing good practice in research and academic ethics,

● acting as a critical guide to institutional review processes, assisting them to maintain/achieve ethical standards,

● collaborating with similar bodies in establishing collegiate partnerships to enhance awareness and practice in this field,

● providing support within and outside ENAI to develop materials to enhance teaching activities in this field,

● organising training for students and early-career researchers on ethical behaviour, in the form of lectures, seminars, debates and webinars,

● enhancing research and dissemination of the findings in matters and topics related to ethics.

The following sections present our suggestions, based on our collective experience, the review of literature provided in earlier sections and the workshop feedback collected:

a) basic needs of an ethical committee within an institution;

b) a typical ethical approval process (with examples from three different universities); and

c) the ways to obtain informed consent with some examples. This would give advice on preparing and training both researchers and research students in appropriately upholding ethical practices through ethical approval processes.

Setting up institutional ethical committees (ECs)

Institutional ethical committees (ECs) are essential for governing every aspect of the activities undertaken by an institution. With regard to higher education organisations, they are vital for establishing ethical behaviour among students and staff in the research, education and scholarly activities (or everything) they do. These committees should be knowledgeable about international laws relating to different fields of study (such as science, medicine, business, finance, law, and the social sciences). The advantages and disadvantages of institutional, subject-specific and common (statutory) ECs are summarised in Fig. 2. Some institutions have developed individual ECs linked to specific fields (or subject areas), whilst others have one institutional committee that oversees the entire ethical behaviour and approval process. There is no clear preference between the two, as both have their own advantages and disadvantages (see Fig. 2). Subject-specific ECs are attractive for medical, law and business provisions, as it is perceived that the members of the respective committees understand the subject and can therefore comprehend the needs of the proposed research or activity (Kadam, 2012; Schnyder et al., 2018). However, others argue that, due to this "specificity", such a committee may fail to forecast the wider implications of an application. On the other hand, university-wide ECs do look into the wider implications, yet they may find it difficult to understand the purpose and specific applications of the research. Not everyone understands the dynamics of all types of research methodologies, data collection, etc., and there is therefore a chance of a proposal being rejected merely because the EC could not understand the research application (Getz, 1990).

Figure 2. Summary of advantages and disadvantages of three different forms of ethical committees. [N/B: Examples of different types of ethical application procedures and forms used were discussed with the workshop attendees to enhance their understanding of the differences. GDPR = General Data Protection Regulation.]

Although we recommend a designated EC with relevant professional, academic and ethical expertise to deal with particular types of applications, the membership of any EC should include some non-experts who represent the wider community (see above). Having some non-experts in an EC not only encourages researchers to explain their research in layperson's terms (by thinking outside the box) but also ensures efficiency without compromising participant or animal safety. Non-experts may even help to address common ethical issues outside the research culture. Some UK universities offer this membership to a member of the clergy, a councillor or a parliamentarian with no links to the institution. Most importantly, it is vital for all EC members to undertake further training, in addition to having previous experience in the relevant field of research ethics.

Another issue that raises concerns is multi-centre research involving several institutions, where institutional ethical approvals are needed from each partner. In some cases, such as clinical research within the UK, a common statutory EC called the National Health Service (NHS) Research Ethics Committee (NREC) is in place to cover research ethics involving all partner institutions (NHS, 2018). The process of obtaining approval from this type of EC takes time, so advance planning is needed.

Ethics approval forms and process

During the workshop, we discussed, as examples, some anonymised application forms for qualitative and quantitative research obtained from open-access sources. For the purpose of understanding research ethics, we arbitrarily divided research into two categories: research based on (a) quantitative and (b) qualitative methodologies. As the names suggest, their research approaches are very different from each other. The discussion elicited how ECs devise different types of ethical application forms and questions. Qualitative research is often conducted through face-to-face interviews, which have implications for volunteer anonymity.

Furthermore, the discussions posited that when interviews are replaced by online surveys, the surveys have to be administered by registered university staff to maintain confidentiality. This becomes difficult when the research is a multi-centre study. These types of issues are also common in medical research with regard to participants' anonymity, confidentiality and, above all, their right to withdraw consent to be involved in the research.

Storing and protecting data collected in the process of the study is also a point of consideration when applying for approval.
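
As a concrete illustration of that point, the sketch below encrypts collected responses at rest before they are written to disk. It assumes the third-party `cryptography` package and is only one of many reasonable protection measures; access control and secure key management matter just as much, and the file names and data here are hypothetical.

```python
# Minimal sketch of protecting collected study data at rest, using the
# third-party "cryptography" package (pip install cryptography). This is one
# illustrative measure, not a complete data-protection strategy.
from pathlib import Path

from cryptography.fernet import Fernet

# In practice the key must live in a secret store, NOT next to the data file.
key = Fernet.generate_key()
fernet = Fernet(key)

responses = "participant_code,answer\nP-001,agree\nP-002,disagree\n"

# Encrypt before anything touches disk, so a leaked file alone reveals nothing.
Path("responses.enc").write_bytes(fernet.encrypt(responses.encode()))

# Decryption is possible only for whoever holds the key.
plaintext = fernet.decrypt(Path("responses.enc").read_bytes()).decode()
assert plaintext == responses
```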

Finally, the ethical processes for invasive research (involving humans or animals) and non-invasive research (questionnaire-based) may differ slightly from one another. The following research areas are considered investigations that need ethical approval:

• research that involves human participants (see below)

• use of the 'products' of human participants (see below)

• work that potentially impacts on humans (see below)

• research that involves animals

In addition, it is important to provide a disclaimer even when ethical approval is deemed unnecessary. The following word cloud (Fig. 3) shows the important variables that need to be considered at the brainstorming stage, before an ethical application. It is worth noting the importance of proactive planning that predicts the "unexpected" during the different phases of a research project (such as planning, execution, publication, and future directions). Some applications (such as those involving work with vulnerable individuals or children) will require safety protection clearance (such as DBS - Disclosure and Barring Service clearance, commonly obtained from the local police). Please see the section on research involving humans - informed consents for further discussion.

Figure 3. Examples of important variables that need to be considered for an ethical approval.

It is also imperative to report, or re-apply for ethical approval for, any minor or major post-approval changes made to the original proposal. In the case of methodological changes, evidence of risk assessments for the changes and/or COSHH (Control of Substances Hazardous to Health Regulations) assessments should also be given. Likewise, the IEAC should be notified of any new collaborative partners or the removal of researchers.

Other findings include:

• in the case of complete changes to the project, the research must be stopped and new approval should be sought;

• any adverse effects on project participants (human or non-human) should be notified to the committee, which must give appropriate clearance for the work to continue; and

• the completion of the project must also be notified, with an indication of whether the researchers may restart the project at a later stage.

Research involving humans - informed consents

Our discussion of research involving humans, together with the literature review, highlighted that human subjects/volunteers must willingly participate in research after being adequately informed about the project. Research involving humans and animals therefore takes precedence in ethical clearance and demands strict adherence to it; one requirement is the provision of a participant information sheet/leaflet. This sheet should contain a full explanation of the research being carried out and should be written in layperson's terms (Manti and Licari 2018; Hardicre 2014). Measures should also be in place to explain and clarify any doubts the participants might have. In addition, there should be a clear statement on how the participants' anonymity is protected. We provide some example questions below to help researchers write this participant information sheet:

What is the purpose of the study?

Why have they been chosen?

What will happen if they take part?

What do they have to do?

What happens when the research stops?

What if something goes wrong?

What will happen to the results of the research study?

Will taking part be kept confidential?

How to handle “vulnerable” participants?

How to mitigate risks to participants?

Many institutional ethics committees expect researchers to produce an FAQ (frequently asked questions) document in addition to the information about the research. Most importantly, researchers also need to provide an informed consent form, which should be signed by each human participant. The five elements identified as necessary for an informed consent statement are summarized in Fig. 4 below (slightly modified from the Federal Policy for the Protection of Human Subjects (2018) - Difn website c).

Figure 4. Five basic elements to consider for an informed consent [figure adapted from Difn website c].

The informed consent form should always contain a clause allowing the participant to withdraw their consent at any time. Should this happen, all data from that participant should be eliminated from the study without affecting their anonymity.
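
In practice, this withdrawal clause implies that a study's data pipeline must be able to purge one participant's records on request. The sketch below shows the idea on a simple in-memory dataset; the record layout and identifiers are hypothetical.

```python
# Minimal sketch of honouring consent withdrawal: purge every record belonging
# to the withdrawing participant. The record layout here is hypothetical.
study_data = [
    {"participant": "P-001", "response": 4},
    {"participant": "P-002", "response": 2},
    {"participant": "P-001", "response": 5},
]


def withdraw(dataset: list, participant_id: str) -> list:
    """Return the dataset with all of one participant's records removed."""
    return [row for row in dataset if row["participant"] != participant_id]


study_data = withdraw(study_data, "P-001")
print(study_data)  # [{'participant': 'P-002', 'response': 2}]
```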

Typical research ethics approval process

In this section, we provide an example flow chart explaining how researchers may choose the appropriate application and process, as highlighted in Fig.  5 . However, it is imperative to note here that these are examples only and some institutions may have one unified application with separate sections to demarcate qualitative and quantitative research criteria.

Figure 5. Typical ethical approval processes for quantitative and qualitative research. [N/B: This simplified flow chart shows that the fundamental process for invasive and non-invasive EC applications is the same; the routes and the requirements for additional information differ slightly.]

Once the ethical application is submitted, the EC should ensure a clear approval procedure with a distinctly defined timeline. An example flow chart showing the procedure for ethical approval, obtained from the University of Leicester as open access, is presented in Fig. 6. Further examples of ethical approval processes and governance were discussed in the workshop.

Figure 6. An example of the ethical approval procedure at the University of Leicester (figure obtained from the University of Leicester research pages, Difn website d, open access).

Strategies for ethics education for students

Student education on the importance of ethics and ethical behaviour in research and scholarly activities is essential. The medical research literature suggests that many universities incorporate ethics into postgraduate degrees, but that there is less appetite to deliver modules, or even lectures, focusing on research ethics in undergraduate degrees (Seymour et al., 2004; Willison and O’Regan, 2007). This may be because undergraduate degree structures rarely focus on research (DePasse et al., 2016). However, as Orr (2018) suggested, institutions should focus more on educating all students about ethics/ethical behaviour and their importance in research than on enforcing punitive measures for unethical behaviour. Therefore, as an advisory committee, and based on our preliminary literature survey and workshop results, we strongly recommend incorporating ethics education within the undergraduate curriculum. Institutions that focus on ethics education for both undergraduate and postgraduate courses take one of three approaches: (a) lecture-based delivery, (b) a case-study-based approach, or (c) a combined delivery that starts with a lecture on the basic principles of ethics and then generates debate-based discussion around interesting case studies. Our findings, explained next, suggest that the combined method is much more effective than the other two.

As many academics who have been involved in teaching ethics and/or research ethics will agree, the underlying principles of ethics are often perceived as a boring subject. Lecture-based delivery alone may therefore not be suitable. On the other hand, a debate-based approach, though attractive and quick to generate student interest, cannot be effective without students first understanding the underlying basic principles. In addition, when selecting case studies, it is advisable to choose cases that address all the different types of ethical dilemmas. As an advisory group within ENAI, we are in the process of collating supporting materials to help develop institutional policies, creating advisory documents to assist in obtaining ethical approval, and preparing teaching materials to enhance debate-based lesson plans that can be used by member and other institutions.

Concluding remarks

In summary, our literature survey and workshop findings highlight that researchers should accept that ethics underpins everything we do, especially in research. Although obtaining ethical approval can be tedious, it is an imperative process in which proactive thinking is essential to identify the ethical issues that might affect the project. Our findings further show that the ethical approval process differs from institution to institution, and we strongly recommend that researchers follow their institutional guidelines and the underlying ethical principles. The ENAI workshop in Vilnius highlighted the importance of ethical governance through the establishment of ECs, discussed different types of ECs and procedures with some examples, and underlined the importance of student education in embedding an ethical culture within research communities, an area that warrants further study.

Declarations

The manuscript was written entirely by the corresponding author, with contributions from co-authors who also took part in delivering the workshop. The authors confirm that the data supporting the findings of this study are available within the article, and that there are no potential competing interests with other organisations.

Availability of data and materials

Authors confirm that the data supporting the findings of this study are available within the article.

Abbreviations

ALLEA: All European Academies

ARC: Australian Research Council

BBSRC: Biotechnology and Biological Sciences Research Council

CIHR: Canadian Institutes for Health Research

COPE: Committee on Publication Ethics

EC: Ethical Committee

ENAI: European Network for Academic Integrity

ESRC: Economic and Social Research Council

ICPA: International Convention for the Protection of Animals

IEAC: Institutional Ethical Advisory Committee

IRB: Institutional Review Board

IUP: Immaculata University of Pennsylvania

LGBT: Lesbian, Gay, Bisexual, and Transgender

MRC: Medical Research Council

NHS: National Health Service

NIH: National Institute of Health

NICE: National Institute of Clinical Care Excellence

NHMRC: National Health and Medical Research Council

NSERC: Natural Sciences and Engineering Research Council

NREC: National Research Ethics Committee

NSECHR: National Statement on Ethical Conduct in Human Research

RRP: Responsible Research Practice

SSHRC: Social Sciences and Humanities Research Council

TCPS: Tri-Council Policy Statement

OIE: World Organisation for Animal Health

UA: Universities Australia

UKRI: UK Research and Innovation

OHRP: US Office for Human Research Protections

Alba S, Lenglet A, Verdonck K, Roth J, Patil R, Mendoza W, Juvekar S, Rumisha SF (2020) Bridging research integrity and global health epidemiology (BRIDGE) guidelines: explanation and elaboration. BMJ Glob Health 5(10):e003237. https://doi.org/10.1136/bmjgh-2020-003237


Anderson MS (2011) Research misconduct and misbehaviour. In: Bertram Gallant T (ed) Creating the ethical academy: a systems approach to understanding misconduct and empowering change in higher education. Routledge, pp 83–96

BBC News (2019) Birmingham school LGBT lessons protest investigated. March 8, 2019. Retrieved February 14, 2021. Available online. URL: https://www.bbc.com/news/uk-england-birmingham-47498446

Children’s Rights Alliance for England. (2005). R (Williamson and others) v Secretary of State for Education and Employment. Session 2004–05. [2005] UKHL 15. Available Online. URL: http://www.crae.org.uk/media/33624/R-Williamson-and-others-v-Secretary-of-State-for-Education-and-Employment.pdf

Council of Europe. (2014). Texts of the Council of Europe on bioethical matters. Available Online. https://www.coe.int/t/dg3/healthbioethic/Texts_and_documents/INF_2014_5_vol_II_textes_%20CoE_%20bio%C3%A9thique_E%20(2).pdf

Dellaportas S, Kanapathippillai S, Khan A, Leung P (2014) Ethics education in the Australian accounting curriculum: a longitudinal study examining barriers and enablers. Account Educ 23(4):362–382. https://doi.org/10.1080/09639284.2014.930694

DePasse JM, Palumbo MA, Eberson CP, Daniels AH (2016) Academic characteristics of orthopaedic surgery residency applicants from 2007 to 2014. JBJS 98(9):788–795. https://doi.org/10.2106/JBJS.15.00222

Desmond H, Dierickx K (2021) Research integrity codes of conduct in Europe: understanding the divergences. Bioethics. https://doi.org/10.1111/bioe.12851

Difn website a - National Statement on Ethical Conduct in Human Research (NSECHR). (2018). Available Online. URL: https://www.nhmrc.gov.au/about-us/publications/australian-code-responsible-conduct-research-2018

Difn website b - Enago Academy (2020, October 26) Importance of ethics committees in scholarly research. Available online. URL: https://www.enago.com/academy/importance-of-ethics-committees-in-scholarly-research/

Difn website c - Ethics vs Morals - Difference and Comparison. Retrieved July 14, 2020. Available online. URL: https://www.diffen.com/difference/Ethics_vs_Morals

Difn website d - University of Leicester. (2015). Staff ethics approval flowchart. May 1, 2015. Retrieved July 14, 2020. Available Online. URL: https://www2.le.ac.uk/institution/ethics/images/ethics-approval-flowchart/view

European Commission - Facilitating Research Excellence in FP7 (2013) https://ec.europa.eu/research/participants/data/ref/fp7/89888/ethics-for-researchers_en.pdf

European Network for Academic Integrity. (2018). Ethical advisory group. Retrieved February 14, 2021. Available online. URL: http://www.academicintegrity.eu/wp/wg-ethical/

Federal Policy for the Protection of Human Subjects. (2018). Retrieved February 14, 2021. Available Online. URL: https://www.federalregister.gov/documents/2017/01/19/2017-01058/federal-policy-for-the-protection-of-human-subjects#p-855

Flite CA, Harman LB (2013) Code of ethics: principles for ethical leadership. Perspect Health Inf Manag 10(Winter):1d. PMID: 23346028

Fouka G, Mantzorou M (2011) What are the major ethical issues in conducting research? Is there a conflict between the research ethics and the nature of nursing. Health Sci J 5(1) Available Online. URL: https://www.hsj.gr/medicine/what-are-the-major-ethical-issues-in-conducting-research-is-there-a-conflict-between-the-research-ethics-and-the-nature-of-nursing.php?aid=3485

Fox G (2017) History and ethical principles. The University of Miami and the Collaborative Institutional Training Initiative (CITI) Program. Available Online. URL: https://silo.tips/download/chapter-1-history-and-ethical-principles

Getz KA (1990) International codes of conduct: An analysis of ethical reasoning. J Bus Ethics 9(7):567–577

Ghooi RB (2011) The Nuremberg Code – a critique. Perspect Clin Res 2(2):72–76. https://doi.org/10.4103/2229-3485.80371

Hardicre J (2014) Valid informed consent in research: an introduction. Br J Nurs 23(11):564–567. https://doi.org/10.12968/bjon.2014.23.11.564

Hazard, GC (Jr). (1994). Law, morals, and ethics. Yale law school legal scholarship repository. Faculty Scholarship Series. Yale University. Available Online. URL: https://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=3322&context=fss_papers

Israel, M., & Drenth, P. (2016). Research integrity: perspectives from Australia and Netherlands. In T. Bretag (Ed.), Handbook of academic integrity (pp. 789–808). Springer, Singapore. https://doi.org/10.1007/978-981-287-098-8_64

Kadam R (2012) Proactive role for ethics committees. Indian J Med Ethics 9(3):216. https://doi.org/10.20529/IJME.2012.072

Kant I (2018) The metaphysics of morals. Cambridge University Press, UK https://doi.org/10.1017/9781316091388

Kayser-Jones J (2003) Continuing to conduct research in nursing homes despite controversial findings: reflections by a research scientist. Qual Health Res 13(1):114–128. https://doi.org/10.1177/1049732302239414

Kotecha JA, Manca D, Lambert-Lanning A, Keshavjee K, Drummond N, Godwin M, Greiver M, Putnam W, Lussier M-T, Birtwhistle R (2011) Ethics and privacy issues of a practice-based surveillance system: need for a national-level institutional research ethics board and consent standards. Can Fam physician 57(10):1165–1173.  https://europepmc.org/article/pmc/pmc3192088

Kuyare, MS., Taur, SR., Thatte, U. (2014). Establishing institutional ethics committees: challenges and solutions–a review of the literature. Indian J Med Ethics. https://doi.org/10.20529/IJME.2014.047

LaFollette, H. (2007). Ethics in practice (3rd edition). Blackwell

Larry RC (1982) The teaching of ethics and moral values in teaching. J High Educ 53(3):296–306. https://doi.org/10.1080/00221546.1982.11780455

Manti S, Licari A (2018) How to obtain informed consent for research. Breathe (Sheff) 14(2):145–152. https://doi.org/10.1183/20734735.001918

Mulholland MW, Bell J (2005) Research Governance and Research Funding in the USA: What the academic surgeon needs to know. J R Soc Med 98(11):496–502. https://doi.org/10.1258/jrsm.98.11.496

National Institute of Health (NIH) Ethics in Clinical Research. n.d. Available Online. URL: https://clinicalcenter.nih.gov/recruit/ethics.html

NHS (2018) Flagged Research Ethics Committees. Retrieved February 14, 2021. Available online. URL: https://www.hra.nhs.uk/about-us/committees-and-services/res-and-recs/flagged-research-ethics-committees/

NICE (2018) Research governance policy. Retrieved February 14, 2021. Available online. URL: https://www.nice.org.uk/Media/Default/About/what-we-do/science-policy-and-research/research-governance-policy.pdf

Orr, J. (2018). Developing a campus academic integrity education seminar. J Acad Ethics 16(3), 195–209. https://doi.org/10.1007/s10805-018-9304-7

Quinn M (2011) Introduction to ethics. In: Ethics for an information age, 4th edn, Ch 2, pp 53–108. Pearson, UK

Resnik DB (2020) What is ethics in research & why is it important? Available Online. URL: https://www.niehs.nih.gov/research/resources/bioethics/whatis/index.cfm

Schnyder S, Starring H, Fury M, Mora A, Leonardi C, Dasa V (2018) The formation of a medical student research committee and its impact on involvement in departmental research. Med Educ Online 23(1):1. https://doi.org/10.1080/10872981.2018.1424449

Seymour E, Hunter AB, Laursen SL, DeAntoni T (2004) Establishing the benefits of research experiences for undergraduates in the sciences: first findings from a three-year study. Sci Educ 88(4):493–534. https://doi.org/10.1002/sce.10131

Shamoo AE, Irving DN (1993) Accountability in research using persons with mental illness. Account Res 3(1):1–17. https://doi.org/10.1080/08989629308573826

Shaw S, Boynton PM, Greenhalgh T (2005) Research governance: where did it come from, what does it mean? J R Soc Med 98(11):496–502. https://doi.org/10.1258/jrsm.98.11.496


Speight JG (2016) Ethics in the university. Scrivener Publishing LLC. https://doi.org/10.1002/9781119346449

Stephenson GK, Jones GA, Fick E, Begin-Caouette O, Taiyeb A, Metcalfe A (2020) What’s the protocol? Canadian university research ethics boards and variations in implementing tri-Council policy. Can J Higher Educ 50(1):68–81

Surbhi, S. (2015). Difference between morals and ethics [weblog]. March 25, 2015. Retrieved February 14, 2021. Available Online. URL: http://keydifferences.com/difference-between-morals-and-ethics.html

The Belmont Report (1979). Ethical Principles and Guidelines for the Protection of Human Subjects of Research. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Retrieved February 14, 2021. Available online. URL: https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf

The Singapore Statement on Research Integrity (2010) Nicholas Steneck and Tony Mayer, Co-chairs, 2nd World Conference on Research Integrity; Melissa Anderson, Chair, Organizing Committee, 3rd World Conference on Research Integrity. Retrieved February 14, 2021. Available online. URL: https://wcrif.org/documents/327-singapore-statement-a4size/file

Warwick K (2003) Cyborg morals, cyborg values, cyborg ethics. Ethics Inf Technol 5(3):131–137. https://doi.org/10.1023/B:ETIN.0000006870.65865.cf

Weindling P (2001) The origins of informed consent: the international scientific commission on medical war crimes, and the Nuremberg code. Bull Hist Med 75(1):37–71. https://doi.org/10.1353/bhm.2001.0049

WHO. (2009). Research ethics committees Basic concepts for capacity-building. Retrieved February 14, 2021. Available online. URL: https://www.who.int/ethics/Ethics_basic_concepts_ENG.pdf

WHO. (2021). Chronological list of publications. Retrieved February 14, 2021. Available online. URL: https://www.who.int/ethics/publications/year/en/

Willison J, O’Regan K (2007) Commonly known, commonly not known, totally unknown: a framework for students becoming researchers. High Educ Res Dev 26(4):393–409. https://doi.org/10.1080/07294360701658609

Žukauskas P, Vveinhardt J, Andriukaitienė R (2018) Research ethics. In: Vveinhardt J (ed) Management culture and corporate social responsibility. IntechOpen. https://doi.org/10.5772/intechopen.70629


Acknowledgements

The authors wish to thank the organising committee of the 5th international conference “Plagiarism across Europe and Beyond” in Vilnius, Lithuania, for accepting this paper for presentation at the conference.

Funding: Not applicable, as this is an independent study that is not funded by any internal or external bodies.

Author information

Authors and Affiliations

School of Human Sciences, University of Derby, DE22 1, Derby, UK

Shivadas Sivasubramaniam

Department of Informatics, Mendel University in Brno, Zemědělská, 1665, Brno, Czechia

Dita Henek Dlabolová

Centre for Academic Integrity in the UAE, Faculty of Engineering & Information Sciences, University of Wollongong in Dubai, Dubai, UAE

Veronika Kralikova & Zeenath Reza Khan


Contributions

The manuscript was written entirely by the corresponding author, with contributions from co-authors who contributed equally to the presentation of this paper at the 5th international conference “Plagiarism across Europe and Beyond” in Vilnius, Lithuania. All authors contributed equally to the information collection, which was then summarised as narrative explanations by the corresponding author and Dr. Zeenath Reza Khan, and subsequently checked and verified by Dr. Dlabolová and Ms. Králíková. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Shivadas Sivasubramaniam .

Ethics declarations

Competing interests.

The authors confirm that there are no potential competing interests with other organisations.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Sivasubramaniam, S., Dlabolová, D.H., Kralikova, V. et al. Assisting you to advance with ethics in research: an introduction to ethical governance and application procedures. Int J Educ Integr 17 , 14 (2021). https://doi.org/10.1007/s40979-021-00078-6


Received : 17 July 2020

Accepted : 25 April 2021

Published : 13 July 2021

DOI : https://doi.org/10.1007/s40979-021-00078-6


Keywords

  • Higher education
  • Ethical codes
  • Ethics committee
  • Post-secondary education
  • Institutional policies
  • Research ethics



SYSTEMATIC REVIEW article

The view of synthetic biology in the field of ethics: a thematic systematic review (provisionally accepted)

  • 1 Ankara University, Ankara, Türkiye
  • 2 Department of Medical History and Ethics, School of Medicine, Ankara University, Ankara, Türkiye

The final, formatted version of the article will be published soon.

Synthetic biology is the design and creation of biological tools and systems for useful purposes. It draws on knowledge from biology, including biotechnology, molecular biology, biophysics, biochemistry, and bioinformatics, as well as from other disciplines such as engineering, mathematics, computer science, and electrical engineering, and it is recognized as both a branch of science and a technology. The scope of synthetic biology ranges from modifying existing organisms to gain new properties to creating a living organism from non-living components. Synthetic biology has many applications in important fields such as energy, chemistry, medicine, the environment, agriculture, national security, and nanotechnology. Its development also raises ethical and social debates. This article aims to identify the place of ethics in synthetic biology. To this end, the theoretical ethical debates on synthetic biology from the 2000s to 2020, a period in which the field developed relatively quickly, were analyzed using the systematic review method. Based on the results of the analysis, the main ethical problems related to the field, problems that are likely to arise, and suggestions for solutions to these problems are presented. The data collection phase of the study comprised a literature review conducted according to protocols covering planning, screening, selection, and evaluation. Analysis and synthesis were carried out in the next stage, and the main themes related to synthetic biology and ethics were identified. Searches were conducted in the Web of Science, Scopus, PhilPapers, and MEDLINE databases. Theoretical research articles and reviews published in English in peer-reviewed journals until the end of 2020 were included in the study. According to preliminary data, 1,453 publications were retrieved from the four databases; after the inclusion and exclusion criteria were applied, 58 publications were analyzed. Ethical debates on synthetic biology have been conducted on various issues; in this article, they were examined under five themes: the moral status of synthetic biology products; synthetic biology and the meaning of life; synthetic biology and metaphors; synthetic biology and knowledge; and expectations, concerns, and problem solving: risk versus caution.

Keywords: Synthetic Biology, Ethics, Bioethics, Systematic review, Technology ethics, Responsible research and innovation

Received: 08 Mar 2024; Accepted: 10 May 2024.

Copyright: © 2024 Kurtoglu, Yıldız and Arda. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: PhD. Ayse Kurtoglu, Ankara University, Ankara, Türkiye



The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research


Jonas Tallberg, Eva Erman, Markus Furendal, Johannes Geith, Mark Klamberg, Magnus Lundgren, The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research, International Studies Review , Volume 25, Issue 3, September 2023, viad040, https://doi.org/10.1093/isr/viad040


Artificial intelligence (AI) represents a technological upheaval with the potential to change human society. Because of its transformative potential, AI is increasingly becoming subject to regulatory initiatives at the global level. Yet, so far, scholarship in political science and international relations has focused more on AI applications than on the emerging architecture of global AI regulation. The purpose of this article is to outline an agenda for research into the global governance of AI. The article distinguishes between two broad perspectives: an empirical approach, aimed at mapping and explaining global AI governance; and a normative approach, aimed at developing and applying standards for appropriate global AI governance. The two approaches offer questions, concepts, and theories that are helpful in gaining an understanding of the emerging global governance of AI. Conversely, exploring AI as a regulatory issue offers a critical opportunity to refine existing general approaches to the study of global governance.


Artificial intelligence (AI) represents a technological upheaval with the potential to transform human society. It is increasingly viewed by states, non-state actors, and international organizations (IOs) as an area of strategic importance, economic competition, and risk management. While AI development is concentrated in a handful of corporations in the United States, China, and Europe, the long-term consequences of AI implementation will be global. Autonomous weapons will have consequences for armed conflicts and power balances; automation will drive changes in job markets and global supply chains; generative AI will affect content production and challenge copyright systems; and competition around the scarce hardware needed to train AI systems will shape relations among both states and businesses. While the technology is still only lightly regulated, state and non-state actors are beginning to negotiate global rules and norms to harness and spread AI’s benefits while limiting its negative consequences. For example, in the past few years, the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted recommendations on the ethics of AI, the European Union (EU) negotiated comprehensive AI legislation, and the Group of Seven (G7) called for developing global technical standards on AI.

Our purpose in this article is to outline an agenda for research into the global governance of AI. 1 Advancing research on the global regulation of AI is imperative. The rules and arrangements that are currently being developed to regulate AI will have a considerable impact on power differentials, the distribution of economic value, and the political legitimacy of AI governance for years to come. Yet there is currently little systematic knowledge on the nature of global AI regulation, the interests influential in this process, and the extent to which emerging arrangements can manage AI’s consequences in a just and democratic manner. While poised for rapid expansion, research on the global governance of AI remains in its early stages (but see Maas 2021 ; Schmitt 2021 ).

This article complements earlier calls for research on AI governance in general ( Dafoe 2018 ; Butcher and Beridze 2019 ; Taeihagh 2021 ; Büthe et al. 2022 ) by focusing specifically on the need for systematic research into the global governance of AI. It submits that global efforts to regulate AI have reached a stage where it is necessary to start asking fundamental questions about the characteristics, sources, and consequences of these governance arrangements.

We distinguish between two broad approaches for studying the global governance of AI: an empirical perspective, informed by a positive ambition to map and explain AI governance arrangements; and a normative perspective, informed by philosophical standards for evaluating the appropriateness of AI governance arrangements. Both perspectives build on established traditions of research in political science, international relations (IR), and political philosophy, and offer questions, concepts, and theories that are helpful as we try to better understand new types of governance in world politics.

We argue that empirical and normative perspectives together offer a comprehensive agenda of research on the global governance of AI. Pursuing this agenda will help us to better understand characteristics, sources, and consequences of the global regulation of AI, with potential implications for policymaking. Conversely, exploring AI as a regulatory issue offers a critical opportunity to further develop concepts and theories of global governance as they confront the particularities of regulatory dynamics in this important area.

We advance this argument in three steps. First, we argue that AI, because of its economic, political, and social consequences, presents a range of governance challenges. While these challenges initially were taken up mainly by national authorities, recent years have seen a dramatic increase in governance initiatives by IOs. These efforts to regulate AI at global and regional levels are likely driven by several considerations, among them AI applications creating cross-border externalities that demand international cooperation and AI development taking place through transnational processes requiring transboundary regulation. Yet, so far, existing scholarship on the global governance of AI has been mainly descriptive or policy-oriented, rather than focused on theory-driven positive and normative questions.

Second, we argue that an empirical perspective can help to shed light on key questions about characteristics and sources of the global governance of AI. Based on existing concepts, the emerging governance architecture for AI can be described as a regime complex—a structure of partially overlapping and diverse governance arrangements without a clearly defined central institution or hierarchy. IR theories are useful in directing our attention to the role of power, interests, ideas, and non-state actors in the construction of this regime complex. At the same time, the specific conditions of AI governance suggest ways in which global governance theories may be usefully developed.

Third, we argue that a normative perspective raises crucial questions regarding the nature and implications of global AI governance. These questions pertain both to procedure (the process for developing rules) and to outcome (the implications of those rules). A normative perspective suggests that procedures and outcomes in global AI governance need to be evaluated in terms of how they meet relevant normative ideals, such as democracy and justice. How could the global governance of AI be organized to live up to these ideals? To what extent are emerging arrangements minimally democratic and fair in their procedures and outcomes? Conversely, the global governance of AI raises novel questions for normative theorizing, for instance, by invoking aims for AI to be “trustworthy,” “value aligned,” and “human centered.”

Advancing this agenda of research is important for several reasons. First, making more systematic use of social science concepts and theories will help us to gain a better understanding of various dimensions of the global governance of AI. Second, as a novel case of governance involving unique features, AI raises questions that will require us to further refine existing concepts and theories of global governance. Third, findings from this research agenda will be of importance for policymakers, by providing them with evidence on international regulatory gaps, the interests that have influenced current arrangements, and the normative issues at stake when developing this regime complex going forward.

The remainder of this article is structured in three substantive sections. The first section explains why AI has become a concern of global governance. The second section suggests that an empirical perspective can help to shed light on the characteristics and drivers of the global governance of AI. The third section discusses the normative challenges posed by global AI governance, focusing specifically on concerns related to democracy and justice. The article ends with a conclusion that summarizes our proposed agenda for future research on the global governance of AI.

Why does AI pose a global governance challenge? In this section, we answer this question in three steps. We begin by briefly describing the spread of AI technology in society, then illustrate the attempts to regulate AI at various levels of governance, and finally explain why global regulatory initiatives are becoming increasingly common. We argue that the growth of global governance initiatives in this area stems from AI applications creating cross-border externalities that demand international cooperation and from AI development taking place through transnational processes requiring transboundary regulation.

Due to its amorphous nature, AI escapes easy definition. Instead, the definition of AI tends to depend on the purposes and audiences of the research ( Russell and Norvig 2020 ). In the most basic sense, machines are considered intelligent when they can perform tasks that would require intelligence if done by humans ( McCarthy et al. 1955 ). This could happen through the guiding hand of humans, in “expert systems” that follow complex decision trees. It could also happen through “machine learning,” where AI systems are trained to categorize texts, images, sounds, and other data, using such categorizations to make autonomous decisions when confronted with new data. More specific definitions require that machines display a level of autonomy and capacity for learning that enables rational action. For instance, the EU’s High-Level Expert Group on AI has defined AI as “systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals” (2019, 1). Yet, illustrating the potential for conceptual controversy, this definition has been criticized for denoting both too many and too few technologies as AI ( Heikkilä 2022a ).
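To ground the “machine learning” sense of AI described above, the following minimal sketch trains a classifier on a handful of toy policy texts and then lets it categorize a new one. The texts, labels, and model choice are purely illustrative assumptions, not drawn from the article.

```python
# Minimal illustration of machine learning: a model is trained to
# categorize short texts and then makes a decision on new data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "ban autonomous weapons",
    "subsidise chip factories",
    "regulate battlefield robots",
    "fund semiconductor research",
]
labels = ["security", "economy", "security", "economy"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)  # learn word-category associations from examples

print(model.predict(["fund new chip factories"]))  # categorizes unseen text
```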

AI technology is already implemented in a wide variety of areas in everyday life and the economy at large. For instance, the conversational chatbot ChatGPT is estimated to have reached 100 million users just  two months after its launch at the end of 2022 ( Hu 2023 ). AI applications enable new automation technologies, with subsequent positive or negative effects on the demand for labor, employment, and economic equality ( Acemoglu and Restrepo 2020 ). Military AI is integral to lethal autonomous weapons systems (LAWS), whereby machines take autonomous decisions in warfare and battlefield targeting ( Rosert and Sauer 2018 ). Many governments and public agencies have already implemented AI in their daily operations in order to more efficiently evaluate welfare eligibility, flag potential fraud, profile suspects, make risk assessments, and engage in mass surveillance ( Saif et al. 2017 ; Powers and Ganascia 2020 ; Berk 2021 ; Misuraca and van Noordt 2022 , 38).

Societies face significant governance challenges in relation to the implementation of AI. One type of challenge arises when AI systems function poorly, such as when applications involving some degree of autonomous decision-making produce technical failures with real-world implications. The “Robodebt” scheme in Australia, for instance, was designed to detect mistaken social security payments, but the Australian government ultimately had to rescind 400,000 wrongfully issued welfare debts ( Henriques-Gomes 2020 ). Similarly, Dutch authorities recently implemented an algorithm that pushed tens of thousands of families into poverty after mistakenly requiring them to repay child benefits, ultimately forcing the government to resign ( Heikkilä 2022b ).

Another type of governance challenge arises when AI systems function as intended but produce impacts whose consequences may be regarded as problematic. For instance, the inherent opacity of AI decision-making challenges expectations on transparency and accountability in public decision-making in liberal democracies ( Burrell 2016 ; Erman and Furendal 2022a ). Autonomous weapons raise critical ethical and legal issues ( Rosert and Sauer 2019 ). AI applications for surveillance in law enforcement give rise to concerns of individual privacy and human rights ( Rademacher 2019 ). AI-driven automation involves changes in labor markets that are painful for parts of the population ( Acemoglu and Restrepo 2020 ). Generative AI upends conventional ways of producing creative content and raises new copyright and data security issues ( Metz 2022 ).

More broadly, AI presents a governance challenge due to its effects on economic competitiveness, military security, and personal integrity, with consequences for states and societies. In this respect, AI may not be radically different from earlier general-purpose technologies, such as the steam engine, electricity, nuclear power, and the internet ( Frey 2019 ). From this perspective, it is not the novelty of AI technology that makes it a pressing issue to regulate but rather the anticipation that AI will lead to large-scale changes and become a source of power for state and societal actors.

Challenges such as these have led to a rapid expansion in recent years of efforts to regulate AI at different levels of governance. The OECD AI Policy Observatory records more than 700 national AI policy initiatives from 60 countries and territories ( OECD 2021 ). Earlier research into the governance of AI has therefore naturally focused mostly on the national level ( Radu 2021 ; Roberts et al. 2021 ; Taeihagh 2021 ). However, a large number of governance initiatives have also been undertaken at the global level, and many more are underway. According to an ongoing inventory of AI regulatory initiatives by the Council of Europe, IOs overtook national authorities as the main source of such initiatives in 2020 ( Council of Europe 2023 ).  Figure 1 visualizes this trend.

Figure 1. Origins of AI governance initiatives, 2015–2022. Source: Council of Europe (2023).

According to this source, national authorities launched 170 initiatives from 2015 to 2022, while IOs put in place 210 initiatives during the same period. Over time, the share of regulatory initiatives emanating from IOs has thus grown to surpass the share resulting from national authorities. Examples of the former include the OECD Principles on Artificial Intelligence agreed in 2019, the UNESCO Recommendation on Ethics of AI adopted in 2021, and the EU’s ongoing negotiations on the EU AI Act. In addition, several governance initiatives emanate from the private sector, civil society, and multistakeholder partnerships. In the next section, we will provide a more developed characterization of these global regulatory initiatives.
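As a quick back-of-envelope check on this trend, the shares implied by the quoted counts can be computed directly (a sketch using only the two figures cited above):

```python
# Shares of AI governance initiatives by source, 2015-2022, using the
# counts quoted in the text (Council of Europe 2023).
national, ios = 170, 210
total = national + ios

print(f"IO share: {ios / total:.0%}")             # ~55%
print(f"National share: {national / total:.0%}")  # ~45%
```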

Two concerns likely explain why AI increasingly is becoming subject to governance at the global level. First, AI creates externalities that do not follow national borders and whose regulation requires international cooperation. China’s Artificial Intelligence Development Plan, for instance, clearly states that the country is using AI as a leapfrog technology in order to enhance national competitiveness ( Roberts et al. 2021 ). Since states with less regulation might gain a competitive edge when developing certain AI applications, there is a risk that such strategies create a regulatory race to the bottom. International cooperation that creates a level playing field could thus be said to be in the interest of all parties.

Second, the development of AI technology is a cross-border process carried out by transnational actors—multinational firms in particular. Big tech corporations, such as Google, Meta, or the Chinese drone maker DJI, are investing vast sums into AI development. The innovations of hardware manufacturers like Nvidia enable breakthroughs but depend on complex global supply chains, and international research labs such as DeepMind regularly present cutting-edge AI applications. Since the private actors that develop AI can operate across multiple national jurisdictions, the efforts to regulate AI development and deployment also need to be transboundary. Only by introducing common rules can states ensure that AI businesses encounter similar regulatory environments, which both facilitates transboundary AI development and reduces incentives for companies to shift to countries with laxer regulation.

Successful global governance of AI could help realize many of the potential benefits of the technology while mitigating its negative consequences. For AI to contribute to increased economic productivity, for instance, there needs to be predictable and clear regulation as well as global coordination around standards that prevent competition between parallel technical systems. Conversely, a failure to provide suitable global governance could lead to substantial risks. The intentional misuse of AI technology may undermine trust in institutions, and if left unchecked, the positive and negative externalities created by automation technologies might fall unevenly across different groups. Race dynamics similar to those that arose around nuclear technology in the twentieth century—where technological leadership created large benefits—might lead international actors and private firms to overlook safety issues and create potentially dangerous AI applications ( Dafoe 2018 ; Future of Life Institute 2023 ). Hence, policymakers face the task of disentangling beneficial from malicious consequences and then foster the former while regulating the latter. Given the speed at which AI is developed and implemented, governance also risks constantly being one step behind the technological frontier.

A prime example of how AI presents a global governance challenge is the efforts to regulate military AI, in particular autonomous weapons capable of identifying and eliminating a target without the involvement of a remote human operator (Hernandez 2021). Both the development and the deployment of military applications with autonomous capabilities transcend national borders. Multinational defense companies are at the forefront of developing autonomous weapons systems. Reports suggest that such autonomous weapons are now beginning to be used in armed conflicts (Trager and Luca 2022). The development and deployment of autonomous weapons involve the types of competitive dynamics and transboundary consequences identified above. In addition, they raise specific concerns with respect to accountability and dehumanization (Sparrow 2007; Stop Killer Robots 2023). For these reasons, states have begun to explore the potential for joint global regulation of autonomous weapons systems. The principal forum is the Group of Governmental Experts (GGE) within the Convention on Certain Conventional Weapons (CCW). Yet progress in these negotiations is slow as the major powers approach this issue with competing interests in mind, illustrating the challenges involved in developing joint global rules.

The example of autonomous weapons further illustrates how the global governance of AI raises urgent empirical and normative questions for research. On the empirical side, these developments invite researchers to map emerging regulatory initiatives, such as those within the CCW, and to explain why these particular frameworks become dominant. What are the principal characteristics of global regulatory initiatives in the area of autonomous weapons, and how do power differentials, interest constellations, and principled ideas influence those rules? On the normative side, these developments invite researchers to address the key normative issues at stake in the regulation of autonomous weapons, both with respect to the process through which such rules are developed and with respect to the consequences of these frameworks. To what extent are existing normative ideals and frameworks, such as just war theory, applicable to the governing of military AI ( Roach and Eckert 2020 )? Despite the scale of this global governance challenge, research on the topic is still in its infancy (but see Maas 2021 ; Schmitt 2021 ). In the remainder of this article, we therefore present an agenda for research into the global governance of AI. We begin by outlining an agenda for positive empirical research on the global governance of AI and then suggest an agenda for normative philosophical research.

An empirical perspective on the global governance of AI suggests two main questions: How may we describe the emerging global governance of AI? And how may we explain the emerging global governance of AI? In this section, we argue that concepts and theories drawn from the general study of global governance will be helpful as we address these questions, but also that AI, conversely, raises novel issues that point to the need for new or refined theories. Specifically, we show how global AI governance may be mapped along several conceptual dimensions and submit that theories invoking power dynamics, interests, ideas, and non-state actors have explanatory promise.

Mapping AI Governance

A key priority for empirical research on the global governance of AI is descriptive: Where and how are new regulatory arrangements emerging at the global level? What features characterize the emergent regulatory landscape? In answering such questions, researchers can draw on scholarship in international law and IR, which has conceptualized mechanisms of regulatory change and developed analytical dimensions for mapping and categorizing the resulting regulatory arrangements.

Any mapping exercise must consider the many different ways in which global AI regulation may emerge and evolve. Previous research suggests that legal development may take place in at least three distinct ways. To begin with, existing rules could be reinterpreted to also cover AI ( Maas 2021 ). For example, the principles of distinction, proportionality, and precaution in international humanitarian law could be extended, via reinterpretation, to apply to lethal autonomous weapons systems (LAWS), without changing the legal source. Another manner in which new AI regulation may appear is via “add-ons” to existing rules. For example, in the area of global regulation of autonomous vehicles, AI-related provisions were added to the 1968 Vienna Road Traffic Convention through an amendment in 2015 ( Kunz and Ó hÉigeartaigh 2020 ). Finally, AI regulation may appear as a completely new framework, either through new state behavior that results in customary international law or through a new legal act or treaty ( Maas 2021 , 96). One example of regulating AI through a new framework is the aforementioned EU AI Act, which would take the form of a new EU regulation.

Once researchers have mapped emerging regulatory arrangements, a central task will be to categorize them. Prior scholarship suggests that regulatory arrangements may be fruitfully analyzed in terms of five key dimensions (cf. Koremenos et al. 2001 ; Wahlgren 2022 , 346–347). A first dimension is whether regulation is horizontal or vertical. A horizontal regulation covers several policy areas, whereas a vertical regulation is a delimited legal framework covering one specific policy area or application. In the field of AI, emergent governance appears to populate both ends of this spectrum. For example, the proposed EU AI Act (2021), the UNESCO Recommendation on the Ethics of AI (2021), and the OECD Principles on AI (2019), which are not specific to any particular AI application or field, qualify as attempts at horizontal regulation. When it comes to vertical regulation, there are fewer existing examples, but discussions on a new protocol on LAWS within the CCW signal that this type of regulation is likely to become more important in the future ( Maas 2019a ).

A second dimension runs from centralization to decentralization. Governance is centralized if there is a single, authoritative institution at the heart of a regime, such as in trade, where the World Trade Organization (WTO) fulfills this role. In contrast, decentralized arrangements are marked by parallel and partly overlapping institutions, such as in the governance of the environment, the internet, or genetic resources (cf. Raustiala and Victor 2004 ). While some IOs with universal membership, such as UNESCO, have taken initiatives relating to AI governance, no institution has assumed the role of core regulatory body at the global level. Rather, the proliferation of parallel initiatives, across levels and regions, lends weight to the conclusion that contemporary arrangements for the global governance of AI are strongly decentralized ( Cihon et al. 2020a ).

A third dimension is the continuum from hard law to soft law. While domestic statutes and treaties may be described as hard law, soft law is associated with recommendations, resolutions, standards, opinions, ethical principles, declarations, guidelines, board decisions, codes of conduct, negotiated agreements, and a large number of additional normative mechanisms ( Abbott and Snidal 2000 ; Wahlgren 2022 ). Even though such soft documents may initially have been drafted as non-legal texts, they may in actual practice acquire considerable strength in structuring international relations ( Orakhelashvili 2019 ). While some initiatives to regulate AI qualify as hard law, including the EU's AI Act, Burri (2017 ) suggests that AI governance is likely to be dominated by “supersoft law,” noting that numerous processes are currently underway creating global standards outside traditional international law-making fora. In a phenomenon that might be described as “bottom-up law-making” ( Koven Levit 2017 ), states and IOs are bypassed, creating norms that defy traditional categories of international law ( Burri 2017 ).

A fourth dimension concerns private versus public regulation. The concept of private regulation partly overlaps with that of soft law, to the extent that private actors develop non-binding guidelines ( Wahlgren 2022 ). Significant harmonization of standards may be achieved by private standardization bodies, such as the IEEE ( Ebers 2022 ). On the public side, authorities may regulate the responsibility of manufacturers through tort law and product liability law ( Greenstein 2022 ), and even though contracts are originally matters between private parties, some contractual matters may still be regulated and enforced by law ( Ubena 2022 ).

A fifth dimension relates to the division between military and non-military regulation. Several policymakers and scholars warn that competition over military AI risks escalating into a strategic arms race between major powers such as the United States and China, similar to the nuclear arms race during the Cold War (cf. Petman 2017 ; Thompson and Bremmer 2018 ; Maas 2019a ). The process in the CCW Group of Governmental Experts on the regulation of LAWS is probably the largest single negotiation on AI ( Maas 2019b ), next to the negotiations on the EU AI Act. The zero-sum logic that appears to prevail between states in the area of national security, prompting a military AI arms race, may not apply to the same extent to non-military applications of AI, potentially enabling a clearer focus on realizing positive-sum gains through regulation.

These five dimensions can provide guidance as researchers take up the task of mapping and categorizing global AI regulation. While the evidence is preliminary, the global governance of AI in its present form combines horizontal and vertical elements, leans predominantly toward soft law, is heavily decentralized, is primarily public in nature, and mixes military and non-military regulation. This multi-faceted and non-hierarchical nature of global AI governance suggests that it is best characterized as a regime complex, or a “larger web of international rules and regimes” ( Alter and Meunier 2009 , 13; Keohane and Victor 2011 ), rather than as a single, discrete regime.

If global AI governance can be understood as a regime complex, as some researchers already claim ( Cihon et al. 2020a ), future scholarship should look for theoretical and methodological inspiration in research on regime complexity in other policy fields. This research has found that regime complexes are characterized by path dependence, as existing rules shape the formulation of new rules; venue shopping, as actors seek to steer regulatory efforts to the fora most advantageous to their interests; and legal inconsistencies, as rules emerge from fractious and overlapping negotiations in parallel processes ( Raustiala and Victor 2004 ). Scholars have also considered the design of regime complexes ( Eilstrup-Sangiovanni and Westerwinter 2021 ), institutional overlap among bodies in regime complexes ( Haftel and Lenz 2021 ), and actors' forum-shopping within regime complexes ( Verdier 2022 ). Establishing whether these patterns and dynamics are also key features of the AI regime complex stands out as an important priority for future research.

Explaining AI Governance

As our understanding of the empirical patterns of global AI governance grows, a natural next step is to turn to explanatory questions. How may we explain the emerging global governance of AI? What accounts for variation in governance arrangements, and how do they compare with those in other policy fields, such as environment, security, or trade? Political science and IR offer a plethora of useful theoretical tools that can provide insights into the global governance of AI. At the same time, the novelty of AI as a governance challenge raises new questions that may require novel or refined theories. Thus far, existing research on the global governance of AI has been primarily concerned with descriptive tasks and has largely fallen short of engaging with explanatory questions.

We illustrate the potential of general theories to help explain global AI governance by pointing to three broad explanatory perspectives in IR ( Martin and Simmons 2012 )—power, interests, and ideas—which have served as primary sources of theorizing on global governance arrangements in other policy fields. These perspectives have conventionally been associated with the paradigmatic theories of realism, liberalism, and constructivism, respectively, but like much of the contemporary IR discipline, we prefer to formulate them as non-paradigmatic sources for mid-level theorizing of more specific phenomena (cf. Lake 2013 ). We focus our discussion on how accounts privileging power, interests, and ideas have explained the origins and designs of IOs and how they may help us explain wider patterns of global AI governance. We then discuss how theories of non-state actors and regime complexity, in particular, offer promising avenues for future research into the global governance of AI. Research fields like science and technology studies (e.g., Jasanoff 2016 ) or the political economy of international cooperation (e.g., Gilpin 1987 ) can provide additional theoretical insights, but these literatures are not discussed in detail here.

A first broad explanatory perspective is provided by power-centric theories, privileging the role of major states, capability differentials, and distributive concerns. While conventional realism emphasizes how states' concern for relative gains impedes substantive international cooperation, viewing IOs as epiphenomenal reflections of underlying power relations ( Mearsheimer 1994 ), more recent power-oriented theories have highlighted how powerful states seek to design regulatory contexts that favor their preferred outcomes ( Gruber 2000 ) or shape the direction of IOs using informal influence ( Stone 2011 ; Dreher et al. 2022 ).

In research on global AI governance, power-oriented perspectives are likely to prove particularly fruitful in investigating how great-power contestation shapes where and how the technology will be regulated. Focusing on the major AI powerhouses, scholars have started to analyze the contrasting regulatory strategies and policies of the United States, China, and the EU, often emphasizing issues of strategic competition, military balance, and rivalry ( Kania 2017 ; Horowitz et al. 2018 ; Payne 2018 , 2021 ; Johnson 2019 ; Jensen et al. 2020 ). Here, power-centric theories could help explain the apparent emphasis on military AI in both the United States and China, as witnessed by the recent establishment of a US National Security Commission on AI and China's ambitious plans to integrate AI into its military forces ( Ding 2018 ). The EU, for its part, is negotiating the comprehensive AI Act, seeking to use its market power to set a European standard for AI that can subsequently become the global standard, as it previously did with the GDPR in the area of data protection and privacy ( Schmitt 2021 ). Given the primacy of these three actors in AI development, their preferences and outlooks regarding regulatory solutions will remain a key research priority.

Power-based accounts are also likely to provide theoretical inspiration for research on AI governance in the domain of security and military competition. Some scholars are seeking to assess the implications of AI for strategic rivalries, and their possible regulation, by drawing on historical analogies ( Leung 2019 ; see also Drezner 2019 ). Observing that, from a strategic standpoint, military AI exhibits some similarities to the problems posed by nuclear weapons, researchers have examined whether lessons from nuclear arms control have applicability in the domain of AI governance. For example, Maas (2019a ) argues that historical experience suggests that the proliferation of military AI can potentially be slowed down via institutionalization, while Zaidi and Dafoe (2021 ), in a study of the Baruch Plan for Nuclear Weapons, contend that fundamental strategic obstacles—including mistrust and fear of exploitation by other states—need to be overcome to make regulation viable. This line of investigation can be extended by assessing other historical analogies, such as the negotiations that led to the Strategic Arms Limitation Talks (SALT) in 1972 or more recent efforts to contain the spread of nuclear weapons, where power-oriented factors have shown continued analytical relevance (e.g., Ruzicka 2018 ).

A second major explanatory approach is provided by the family of theoretical accounts that highlight how international cooperation is shaped by shared interests and functional needs ( Keohane 1984 ; Martin 1992 ). A key argument in rational functionalist scholarship is that states are likely to establish IOs to overcome barriers to cooperation—such as information asymmetries, commitment problems, and transaction costs—and that the design of these institutions will reflect the underlying problem structure, including the degree of uncertainty and the number of involved actors (e.g., Koremenos et al. 2001 ; Hawkins et al. 2006 ; Koremenos 2016 ).

Applied to the domain of AI, these approaches would bring attention to how the functional characteristics of AI as a governance problem shape the regulatory response. They would also emphasize the investigation of the distribution of interests and the possibility of efficiency gains from cooperation around AI governance. The contemporary proliferation of partnerships and initiatives on AI governance points to the suitability of this theoretical approach, and research has taken some preliminary steps, surveying state interests and their alignment (e.g., Campbell 2019 ; Radu 2021 ). However, a systematic assessment of how the distribution of interests would explain the nature of emerging governance arrangements, both in the aggregate and at the constituent level, has yet to be undertaken.

A third broad explanatory perspective is provided by theories emphasizing the role of history, norms, and ideas in shaping global governance arrangements. In contrast to accounts based on power and interests, this line of scholarship, often drawing on sociological assumptions and theory, focuses on how institutional arrangements are embedded in a wider ideational context, which itself is subject to change. This perspective has generated powerful analyses of how societal norms influence states’ international behavior (e.g., Acharya and Johnston 2007 ), how norm entrepreneurs play an active role in shaping the origins and diffusion of specific norms (e.g., Finnemore and Sikkink 1998 ), and how IOs socialize states and other actors into specific norms and behaviors (e.g., Checkel 2005 ).

Examining the extent to which domestic and societal norms shape discussions on global governance arrangements stands out as a particularly promising area of inquiry. Comparative research on national ethical standards for AI has already demonstrated significant cross-country convergence, pointing to a cluster of normative principles that are likely to inspire governance frameworks in many parts of the world (e.g., Jobin et al. 2019 ). A closely related research agenda concerns norm entrepreneurship in AI governance. Here, preliminary findings suggest that civil society organizations have played a role in advocating norms relating to fundamental rights in the formulation of EU AI policy and other processes ( Ulnicane 2021 ). Finally, once AI governance structures have solidified further, scholars can begin to draw on norms-oriented scholarship to design strategies for analyzing how those governance arrangements may play a role in socialization.

In light of the particularities of AI and its political landscape, we expect that global governance scholars will be motivated to refine and adapt these broad theoretical perspectives to address new questions and conditions. For example, considering China’s AI sector-specific resources and expertise, power-oriented theories will need to grapple with questions of institutional creation and modification occurring under a distribution of power that differs significantly from the Western-centric processes that underpin most existing studies. Similarly, rational functionalist scholars will need to adapt their tools to address questions of how the highly asymmetric distribution of AI capabilities—in particular between producers, which are few, concentrated, and highly resourced, and users and subjects, which are many, dispersed, and less resourced—affects the formation of state interests and bargaining around institutional solutions. For their part, norm-oriented theories may need to be refined to capture the role of previously understudied sources of normative and ideational content, such as formal and informal networks of computer programmers, which, on account of their expertise, have been influential in setting the direction of norms surrounding several AI technologies.

We expect that these broad theoretical perspectives will continue to inspire research on the global governance of AI, in particular for tailored, mid-level theorizing in response to new questions. However, a fully developed research agenda will gain from complementing these theories, which emphasize particular independent variables (power, interests, and norms), with theories and approaches that focus on particular issues, actors, and phenomena. There is an abundance of theoretical perspectives that can be helpful in this regard, including research on the relationship between science and politics ( Haas 1992 ; Jasanoff 2016 ), the political economy of international cooperation ( Gilpin 1987 ; Frieden et al. 2017 ), the complexity of global governance ( Raustiala and Victor 2004 ; Eilstrup-Sangiovanni and Westerwinter 2021 ), and the role of non-state actors ( Risse 2012 ; Tallberg et al. 2013 ). We focus here on the latter two: theories of regime complexity, which have grown to become a mainstream approach in global governance scholarship, and theories of non-state actors, which provide powerful tools for understanding how private organizations influence regulatory processes. Both literatures hold considerable promise for advancing scholarship on the global governance of AI beyond its current state.

As concluded above, the current structure of global AI governance fits the description of a regime complex. Approaching AI governance through this theoretical lens, understanding it as a larger web of rules and regulations, can thus open new avenues of research (see Maas 2021 for a pioneering effort). One priority is to analyze the AI regime complex in terms of core dimensions, such as scale, diversity, and density ( Eilstrup-Sangiovanni and Westerwinter 2021 ). Pointing to the density of this regime complex, existing studies have suggested that global AI governance is characterized by a high degree of fragmentation ( Schmitt 2021 ), which has motivated assessments of the possibility of greater centralization ( Cihon et al. 2020b ). Another area of research is to examine the emergence of legal inconsistencies and tensions, which are likely to arise because of the diverging preferences of major AI players and the tendency of self-interested actors to forum-shop when engaging within a regime complex. Finally, given that the AI regime complex is at a very early stage of development, it provides researchers with an excellent opportunity to trace the origins and evolution of this form of governance structure from the outset, offering a good case for both theory development and novel empirical applications.

If theories of regime complexity can shine a light on macro-level properties of AI governance, other theoretical approaches can guide research into micro-level dynamics and influences. Recognizing that non-state actors are central in both AI development and its emergent regulation, researchers should find inspiration in theories and tools developed to study the role and influence of non-state actors in global governance (for overviews, see Risse 2012 ; Jönsson and Tallberg forthcoming ). Drawing on such work will enable researchers to assess to what extent non-state actor involvement in the AI regime complex differs from previous experiences in other international regimes. It is clear that large tech companies, like Google, Meta, and Microsoft, have formed regulatory preferences and that their monetary resources and technological expertise enable them to promote these interests in legislative and bureaucratic processes. For example, the Partnership on AI (PAI), a multistakeholder organization with more than 50 members, includes American tech companies at the forefront of AI development and fosters research on issues of AI ethics and governance ( Schmitt 2021 ). Other non-state actors, including civil society watchdog organizations, like the Civil Liberties Union for Europe, have been vocal in the negotiations of the EU AI Act, further underlining the relevance of this strand of research.

When investigating the role of non-state actors in the AI regime complex, research may be guided by four primary questions. A first question concerns the interests of non-state actors regarding alternative architectures of global AI governance. Here, a survey by Chavannes et al. (2020 ) on possible regulatory approaches to LAWS suggests that private companies developing AI applications have interests that differ from those of civil society organizations. Others have pointed to the role of actors rooted in research and academia who have sought to influence the development of AI ethics guidelines ( Zhu 2022 ). A second question is to what extent regulatory institutions and processes are accessible to these non-state actors in the first place. Are non-state actors given formal or informal opportunities to be substantively involved in the development of new global AI rules? Research points to a broad and comprehensive opening up of IOs over the past two decades ( Tallberg et al. 2013 ) and, in the domain of AI governance, early indications are that non-state actors have been granted access to several multilateral processes, including in the OECD and the EU (cf. Niklas and Dencik 2021 ). A third question concerns actual participation: Are non-state actors really making use of these opportunities to participate, and what determines the patterns of participation? Previous research has suggested that the participation of non-state actors depends largely on their financial resources ( Uhre 2014 ) or the political regime of their home country ( Hanegraaff et al. 2015 ). In the context of AI governance, this raises questions about whether and how the vast resource disparities and divergent interests between private tech corporations and civil society organizations may bias patterns of participation. There is, for instance, research suggesting that private companies contribute to a practice of ethics washing by committing to nonbinding ethical guidelines while circumventing regulation ( Wagner 2018 ; Jobin et al. 2019 ; Rességuier and Rodrigues 2020 ). Finally, a fourth question is to what extent, and how, non-state actors exert influence on adopted AI rules. Existing scholarship suggests that non-state actors typically seek to shape the direction of international cooperation via lobbying ( Dellmuth and Tallberg 2017 ), while others have argued that non-state actors use participation in international processes largely to expand or sustain their own resources ( Hanegraaff et al. 2016 ).

The previous section suggested that emerging global initiatives to regulate AI amount to a regime complex and that an empirical approach could help to map and explain these regulatory developments. In this section, we move beyond positive empirical questions to consider the normative concerns at stake in the global governance of AI. We argue that normative theorizing is needed both for assessing how well existing arrangements live up to ideals such as democracy and justice and for evaluating how best to specify what these ideals entail for the global governance of AI.

Ethical values frequently highlighted in the context of AI governance include transparency, inclusion, accountability, participation, deliberation, fairness, and beneficence ( Floridi et al. 2018 ; Jobin et al. 2019 ). A normative perspective suggests several ways in which to theorize and analyze such values in relation to the global governance of AI. One type of normative analysis focuses on application, that is, on applying an existing normative theory to instances of AI governance, assessing how well such regulatory arrangements realize their principles (similar to how political theorists have evaluated whether global governance lives up to standards of deliberation; see Dryzek 2011 ; Steffek and Nanz 2008 ). Such an analysis could also be pursued more narrowly by using a certain normative theory to assess the implications of AI technologies, for instance, by approaching the problem of algorithmic bias based on notions of fairness or justice ( Vredenburgh 2022 ). Another type of normative analysis moves from application to justification, analyzing the structure of global AI governance with the aim of theory construction. In this type of analysis, the goal is to construe and evaluate candidate principles for these regulatory arrangements in order to arrive at the best possible (most justified) normative theory. In this case, the theorist starts out from a normative ideal broadly construed (concept) and arrives at specific principles (conception).

In the remainder of this section, we will point to the promises of analyzing global AI governance based on the second approach. We will focus specifically on the normative ideals of justice and democracy. While many normative ideals could serve as focal points for an analysis of the AI domain, democracy and justice appear particularly central for understanding the normative implications of the governance of AI. Previous efforts to deploy political philosophy to shed light on normative aspects of global governance point to the promise of this focus (e.g., Caney 2005 , 2014 ; Buchanan 2013 ). It is also natural to focus on justice and democracy given that many of the values emphasized in AI ethics and existing ethics guidelines are analytically close to justice and democracy. Our core argument will be that normative research needs to be attentive to how these ideals would be best specified in relation to both the procedures and outcomes of the global governance of AI.

AI Ethics and the Normative Analysis of Global AI Governance

Although there is a rich literature on moral or ethical aspects related to specific AI applications, investigations into normative aspects of global AI governance are surprisingly sparse (for exceptions, see Müller 2020 ; Erman and Furendal 2022a , 2022b ). Researchers have so far focused mostly on normative and ethical questions raised by AI considered as a tool, enabling, for example, autonomous weapons systems ( Sparrow 2007 ) and new forms of political manipulation ( Susser et al. 2019 ; Christiano 2021 ). Some have also considered AI as a moral agent of its own, focusing on how we could govern, or be governed by, a hypothetical future artificial general intelligence ( Schwitzgebel and Garza 2015 ; Livingston and Risse 2019 ; cf. Tasioulas 2019 ; Bostrom et al. 2020 ; Erman and Furendal 2022a ). Examples such as these illustrate that there is, by now, a vibrant field of “AI ethics” that aims to consider normative aspects of specific AI applications.

As we have shown above, however, initiatives to regulate AI beyond the nation-state have become increasingly common, and they are often led by IOs, multinational companies, private standardization bodies, and civil society organizations. These developments raise normative issues that require a shift from AI ethics in general to systematic analyses of the implications of global AI governance. It is crucial to explore these normative dimensions, since how AI is governed invokes key questions about the ideals that such governance ought to meet.

Apart from attempts to map or describe the central norms in the existing global governance of AI (cf. Jobin et al. 2019 ), most normative analyses of the global governance of AI can be said to have proceeded in two different ways. The dominant approach is to employ an outcome-based focus ( Dafoe 2018 ; Winfield et al. 2019 ; Taeihagh 2021 ), which starts by identifying a potential problem or promise created by AI technology and then seeks to identify governance mechanisms or principles that can minimize risks or make a desired outcome more likely. This approach can be contrasted with a procedure-based focus, which attaches comparatively more weight to how governance processes unfold in existing or hypothetical regulatory arrangements. It recognizes that certain procedural aspects are important in their own right and might be overlooked by an analysis that primarily assesses outcomes.

The benefits of this distinction become apparent if we focus on the ideals of justice and democracy. Broadly construed, we understand justice as an ideal for how to distribute benefits and burdens—specifying principles that determine “who owes what to whom”—and democracy as an ideal for collective decision-making and the exercise of political power—specifying principles that determine “who has political power over whom” ( Barry 1991 ; Weale 1999 ; Buchanan and Keohane 2006 ; Christiano 2008 ; Valentini 2012 , 2013 ). These two ideals can be analyzed with a focus on procedure or outcome, producing four fruitful avenues of normative research into global AI governance. First, justice could be understood as a procedural value or as a distributive outcome. Second, and likewise, democracy could be a feature of governance processes or an outcome of those processes. Below, we discuss existing research from the standpoint of each of these four avenues. We conclude that there is great potential for novel insights if normative theorists consider the relatively overlooked issues of outcome aspects of justice and procedural aspects of democracy in the global governance of AI.

Procedural and Outcome Aspects of Justice

Discussions around the implications of AI applications for justice, or fairness, are predominantly concerned with procedural aspects of how AI systems operate. For instance, ever since the problem of algorithmic bias—i.e., the tendency of AI-based decision-making to reflect and exacerbate existing biases toward certain groups—was brought to public attention, AI ethicists have offered accounts of why this is wrong, and AI developers have sought to construct AI systems that treat people “fairly” and thus produce “justice.” In this context, fairness and justice are understood as procedural ideals, which AI decision-making frustrates when it fails to treat like cases alike and instead systematically treats individuals from different groups differently ( Fazelpour and Danks 2021 ; Zimmermann and Lee-Stronach 2022 ). Paradigmatic examples include automated predictions about recidivism among prisoners that have influenced decisions about parole, and recruitment algorithms that have systematically favored men over women ( Angwin et al. 2016 ; O'Neil 2017 ).
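
To make this notion of disparate treatment concrete, consider a minimal sketch in Python, using entirely hypothetical decision data and illustrative names of our own choosing: one common way to operationalize group-level disparity, often called the statistical parity difference, is to compare rates of favorable automated decisions across groups.

```python
# Minimal, illustrative sketch with hypothetical data: measuring whether an
# automated decision system "systematically treats individuals from
# different groups differently" via the statistical parity difference.

def selection_rates(decisions, groups):
    """Return the share of favorable decisions (1 = favorable) per group."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

# Hypothetical parole-style decisions (1 = granted) for two groups, A and B.
decisions = [1, 1, 1, 1, 0, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
gap = rates["A"] - rates["B"]  # 0.0 would indicate parity on this metric
print(f"A: {rates['A']:.1f}, B: {rates['B']:.1f}, gap: {gap:.1f}")
# Prints: A: 0.8, B: 0.4, gap: 0.4
```

A nonzero gap captures one sense in which a system fails to treat like cases alike, although such group-rate comparisons are only one of several possible formalizations of fairness ( Fazelpour and Danks 2021 ).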

However, the emerging global governance of AI also has implications for how the benefits and burdens of AI technology are distributed among groups and states—i.e., outcomes ( Gilpin 1987 ; Dreher and Lang 2019 ). Like the regulation of earlier technological innovations ( Krasner 1991 ; Drezner 2019 ), AI governance may not only produce collective benefits, but also favor certain actors at the expense of others ( Dafoe 2018 ; Horowitz 2018 ). For instance, the concern about AI-driven automation and its impact on employment is that those who lose their jobs because of AI might carry a disproportionately large share of the negative externalities of the technology without being compensated through access to its benefits (cf. Korinek and Stiglitz 2019 ; Erman and Furendal 2022a ). Merely focusing on justice as a procedural value would overlook such distributive effects created by the diffusion of AI technology.

Moreover, this example illustrates that since AI adoption may produce effects throughout the global economy, regulatory efforts will have to go beyond issues relating to the technology itself. Recognizing the role of outcomes of AI governance entails that a broad range of policies need to be pursued by existing and emerging governance regimes. The global trade regime, for instance, may need to be reconsidered in order for the distribution of positive and negative externalities of AI technology to be just. Suggestions include pursuing policies that can incentivize certain kinds of AI technology or enable the profits gained by AI developers to be shared more widely (cf. Floridi et al. 2018 ; Erman and Furendal 2022a ).

In sum, with regard to outcome aspects of justice, theories are needed to settle which benefits and burdens created by global AI adoption ought to be fairly distributed and why (i.e., what the “site” and “scope” of AI justice are) (cf. Gabriel 2022 ). Similarly, theories of procedural aspects should look beyond individual applications of AI technology and ask whether a fairer distribution of influence over AI governance may help produce more fair outcomes, and if so how. Extending existing theories of distributive justice to the realm of global AI governance may put many of their central assumptions in a new light.

Procedural and Outcome Aspects of Democracy

Normative research could also fruitfully shed light on how emerging AI governance should be analyzed in relation to the ideal of democracy, such as what principles or criteria of democratic legitimacy are most defensible. It could be argued, for instance, that the decision process must be open to democratic influence for global AI governance to be democratically legitimate ( Erman and Furendal 2022b ). Here, normative theory can explain why it matters from the standpoint of democracy whether the affected public has had a say—either directly through open consultation or indirectly through representation—in formulating the principles that guide AI governance. The nature of the emerging AI regime complex—where prominent roles are held by multinational companies and private standard-setting bodies—suggests that it is far from certain that the public will have this kind of influence.

Importantly, it is likely that democratic procedures will take on different shapes in global governance compared to domestic politics ( Dahl 1999 ; Scholte 2011 ). A viable democratic theory must therefore make sense of how the unique properties of global governance raise issues or require solutions that are distinct from those in the domestic context. For example, the prominent influence of non-state actors, including the large tech corporations developing cutting-edge AI technology, suggests that it is imperative to ask whether different kinds of decision-making may require different normative standards and whether different kinds of actors may have different normative status in such decision-making arrangements.

Initiatives from non-state actors, such as the tech company-led PAI discussed above, often develop their own non-coercive ethics guidelines. Such documents may seek effects similar to coercively upheld regulation, such as the GDPR or the EU AI Act. For example, both Google and the EU specify that AI should not reinforce biases ( High-Level Expert Group on Artificial Intelligence 2019 ; Google 2022 ). However, from the perspective of democratic legitimacy, it may matter extensively which type of entity adopts AI regulations and on what grounds those decision-making entities have the authority to issue AI regulations ( Erman and Furendal 2022b ).

Apart from procedural aspects, a satisfying democratic theory of global AI governance will also have to include a systematic analysis of outcome aspects. Important outcome aspects of democracy include accountability and responsiveness. Accountability may be improved, for example, by instituting mechanisms to prevent corruption among decision-makers and to secure public access to governing documents, while responsiveness may be improved by strengthening the discursive quality of global decision processes, for instance, by involving international NGOs and civil movements that give voice to marginalized groups in society. With regard to tracing citizens' preferences, some have argued that democratic decision-making can be enhanced by AI technology that tracks what people want and consistently reaches “better” decisions than human decision-makers (cf. König and Wenzelburger 2022 ). Apart from accountability and responsiveness, other relevant outcome aspects of democracy include, for example, the tendency to promote conflict resolution, to improve the epistemic quality of decisions, and to foster dignity and equality among citizens.

In addition, it is important to analyze how procedural and outcome concerns are related. This issue is often neglected, which again can be illustrated by the ethics guidelines from IOs, such as the OECD Principles on Artificial Intelligence and the UNESCO Recommendation on Ethics of AI. Such documents often stress the importance of democratic values and principles, such as transparency, accountability, participation, and deliberation. Yet they typically treat these values as discrete and rarely explain how they are interconnected ( Jobin et al. 2019 ; Schiff et al. 2020 ; Hagendorff 2020 , 103). Democratic theory can fruitfully step in to explain how the ideal of “the rule by the people” includes two sides that are intimately connected. First, there is an access side of political power, where those affected should have a say in the decision-making, which might require participation, deliberation, and political equality. Second, there is an exercise side of political power, where those very decisions should apply in appropriate ways, which in turn might require effectiveness, transparency, and accountability. In addition to efforts to map and explain norms and values in the global governance of AI, theories of democratic AI governance can hence help explain how these two aspects are connected (cf. Erman 2020 ).

In sum, the global governance of AI raises a number of issues for normative research. We have identified four promising avenues, focused on procedural and outcome aspects of justice and democracy in the context of global AI governance. Research along these four avenues can help to shed light on the normative challenges facing the global governance of AI and the key values at stake, as well as provide the impetus for novel theories on democratic and just global AI governance.

This article has charted a new agenda for research into the global governance of AI. While existing scholarship has been primarily descriptive or policy-oriented, we propose an agenda organized around theory-driven positive and normative questions. To this end, we have outlined two broad analytical perspectives on the global governance of AI: an empirical approach, aimed at conceptualizing and explaining global AI governance; and a normative approach, aimed at developing and applying ideals for appropriate global AI governance. Pursuing these empirical and normative approaches can help to guide future scholarship on the global governance of AI toward critical questions, core concepts, and promising theories. At the same time, exploring AI as a regulatory issue provides an opportunity to further develop these general analytical approaches as they confront the particularities of this important area of governance.

We conclude this article by highlighting the key takeaways from this research agenda for future scholarship on empirical and normative dimensions of the global governance of AI. First, research is required to identify where and how AI is becoming globally governed. Mapping and conceptualizing the emerging global governance of AI is a first necessary step. We argue that research may benefit from considering the variety of ways in which new regulation may come about, from the reinterpretation of existing rules and the extension of prevailing sectoral governance to the negotiation of entirely new frameworks. In addition, we suggest that scholarship may benefit from considering how global AI governance may be conceptualized in terms of key analytical dimensions, such as horizontal–vertical, centralized–decentralized, and hard–soft.

Second, research is necessary to explain why AI is becoming globally governed in particular ways. Having mapped global AI governance, we need to account for the factors that drive and shape these regulatory processes and arrangements. We argue that political science and IR offer a variety of theoretical tools that can help to explain the global governance of AI. In particular, we highlight the promise of theories privileging the role of power, interests, ideas, regime complexes, and non-state actors, but also recognize that research fields such as science and technology studies and political economy can yield additional theoretical insights.

Third, research is needed to identify what normative ideals global AI governance ought to meet. Moving from positive to normative issues, a first critical question pertains to the ideals that should guide the design of appropriate global AI governance. We argue that normative theory provides the tools necessary to engage with this question. While normative theory can suggest several potential principles, we believe that it may be especially fruitful to start from the ideals of democracy and justice, which are foundational and recurrent concerns in discussions about political governing arrangements. In addition, we suggest that these two ideals are relevant both for the procedures by which AI regulation is adopted and for the outcomes of such regulation.

Fourth, research is required to evaluate how well global AI governance lives up to these normative ideals. Once appropriate normative ideals have been selected, we can assess to what extent and how existing arrangements conform to these principles. We argue that previous research on democracy and justice in global governance offers a model in this respect. A critical component of such research is the integration of normative and empirical research: normative research for elucidating how normative ideals would be expressed in practice, and empirical research for analyzing data on whether actual arrangements live up to those ideals.

In all, the research agenda that we outline should be of interest to multiple audiences. For students of political science and IR, it offers an opportunity to apply and refine concepts and theories in a novel area of global governance of extensive future importance. For scholars of AI, it provides an opportunity to understand how political actors and considerations shape the conditions under which AI applications may be developed and used. For policymakers, it presents an opportunity to learn about evolving regulatory practices and gaps, interests shaping emerging arrangements, and trade-offs to be confronted in future efforts to govern AI at the global level.

A previous version of this article was presented at the Global and Regional Governance workshop at Stockholm University. We are grateful to Tim Bartley, Niklas Bremberg, Lisa Dellmuth, Felicitas Fritzsche, Faradj Koliev, Rickard Söder, Carl Vikberg, Johanna von Bahr, and three anonymous reviewers for ISR for insightful comments and suggestions. The research for this article was funded by the WASP-HS program of the Marianne and Marcus Wallenberg Foundation (Grant no. MMW 2020.0044).

We use “global governance” to refer to regulatory processes beyond the nation-state, whether on a global or regional level. While states and IOs often are central to these regulatory processes, global governance also involves various types of non-state actors ( Rosenau 1999 ).

Abbott Kenneth W. , and Snidal Duncan . 2000 . “ Hard and Soft Law in International Governance .” International Organization . 54 ( 3 ): 421 – 56 .

Acemoglu Daron , and Restrepo Pascual . 2020 . “ The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand .” Cambridge Journal of Regions, Economy and Society . 13 ( 1 ): 25 – 35 .

Acharya Amitav , and Johnston Alistair Iain . 2007 . “ Conclusion: Institutional Features, Cooperation Effects, and the Agenda for Further Research on Comparative Regionalism .” In Crafting Cooperation: Regional International Institutions in Comparative Perspective , edited by Acharya Amitav , Johnston Alistair Iain , 244 – 78 . Cambridge : Cambridge University Press .

Alter Karen J. , and Meunier Sophie . 2009 . “ The Politics of International Regime Complexity .” Perspectives on Politics . 7 ( 1 ): 13 – 24 .

Angwin Julia , Larson Jeff , Mattu Surya , and Kirchner Lauren . 2016 . “ Machine Bias .” ProPublica , May 23 . Internet (last accessed August 25, 2023): https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing .

Barry Brian . 1991 . “ Humanity and Justice in Global Perspective .” In Liberty and Justice , edited by Barry Brian . Oxford : Clarendon .

Berk Richard A . 2021 . “ Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement .” Annual Review of Criminology . 4 ( 1 ): 209 – 37 .

Bostrom Nick , Dafoe Allan , and Flynn Carrick . 2020 . “ Public Policy and Superintelligent AI: A Vector Field Approach .” In Ethics of Artificial Intelligence , edited by Liao S. Matthew , 293 – 326 . Oxford : Oxford University Press .

Buchanan Allen , and Keohane Robert O. . 2006 . “ The Legitimacy of Global Governance Institutions .” Ethics & International Affairs . 20 (4) : 405 – 37 .

Buchanan Allen . 2013 . The Heart of Human Rights . Oxford : Oxford University Press .

Burrell Jenna . 2016 . “ How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms .” Big Data & Society . 3 ( 1 ): 1 – 12 . https://doi.org/10.1177/2053951715622512 .

Burri Thomas . 2017 . “ International Law and Artificial Intelligence .” In German Yearbook of International Law , vol. 60 , 91 – 108 .. Berlin : Duncker and Humblot .

Butcher James , and Beridze Irakli . 2019 . “ What is the State of Artificial Intelligence Governance Globally? ” The RUSI Journal . 164 ( 5-6 ): 88 – 96 .

Büthe Tim , Djeffal Christian , Lütge Christoph , Maasen Sabine , and von Ingersleben-Seip Nora . 2022 . “ Governing AI—Attempting to Herd Cats? Introduction to the Special Issue on the Governance of Artificial Intelligence .” Journal of European Public Policy . 29 ( 11 ): 1721 – 52 .

Campbell Thomas A . 2019 . Artificial Intelligence: An Overview of State Initiatives . Evergreen, CO : FutureGrasp .

Caney Simon . 2005 . “ Cosmopolitan Justice, Responsibility, and Global Climate Change .” Leiden Journal of International Law . 18 ( 4 ): 747 – 75 .

Caney Simon . 2014 . “ Two Kinds of Climate Justice: Avoiding Harm and Sharing Burdens .” Journal of Political Philosophy . 22 ( 2 ): 125 – 49 .

Chavannes Esther , Klonowska Klaudia , and Sweijs Tim . 2020 . Governing Autonomous Weapon Systems: Expanding the Solution Space, From Scoping to Applying . The Hague : The Hague Center for Strategic Studies .

Checkel Jeffrey T . 2005 . “ International Institutions and Socialization in Europe: Introduction and Framework .” International Organization . 59 ( 4 ): 801 – 26 .

Christiano Thomas . 2008 . The Constitution of Equality . Oxford : Oxford University Press .

Christiano Thomas . 2021 . “ Algorithms, Manipulation, and Democracy .” Canadian Journal of Philosophy . 52 ( 1 ): 109 – 124 . https://doi.org/10.1017/can.2021.29 .

Cihon Peter , Maas Matthijs M. , and Kemp Luke . 2020a . “ Fragmentation and the Future: Investigating Architectures for International AI Governance .” Global Policy . 11 ( 5 ): 545 – 56 .

Cihon Peter , Maas Matthijs M. , and Kemp Luke . 2020b . “ Should Artificial Intelligence Governance Be Centralised? Design Lessons from History .” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society , 228 – 34 . New York, NY: ACM .

Council of Europe . 2023 . “ AI Initiatives .” Accessed June 16, 2023 (coe.int).

Dafoe Allan . 2018 . AI Governance: A Research Agenda . Oxford: Governance of AI Program , Future of Humanity Institute, University of Oxford . www.fhi.ox.ac.uk/govaiagenda .

Dahl Robert . 1999 . “ Can International Organizations Be Democratic: A Skeptic's View .” In Democracy's Edges , edited by Shapiro Ian , Hacker-Córdon Casiano , 19 – 36 . Cambridge : Cambridge University Press .

Dellmuth Lisa M. , and Tallberg Jonas . 2017 . “ Advocacy Strategies in Global Governance: Inside versus Outside Lobbying .” Political Studies . 65 ( 3 ): 705 – 23 .

Ding Jeffrey . 2018 . Deciphering China's AI Dream: The Context, Components, Capabilities and Consequences of China's Strategy to Lead the World in AI . Oxford: Centre for the Governance of AI , Future of Humanity Institute, University of Oxford .

Dreher Axel , and Lang Valentin . 2019 . “ The Political Economy of International Organizations .” In The Oxford Handbook of Public Choice , Volume 2, edited by Congleton Roger O. , Grofman Bernhard , Voigt Stefan . Oxford : Oxford University Press .

Dreher Axel , Lang Valentin , Rosendorff B. Peter , and Vreeland James R. . 2022 . “ Bilateral or Multilateral? International Financial Flows and the Dirty Work-Hypothesis .” The Journal of Politics . 84 ( 4 ): 1932 – 1946 .

Drezner Daniel W . 2019 . “ Technological Change and International Relations .” International Relations . 33 ( 2 ): 286 – 303 .

Dryzek John . 2011 . “ Global Democratization: Soup, Society, or System? ” Ethics & International Affairs . 25 ( 2 ): 211 – 234 .

Ebers Martin . 2022 . “ Explainable AI in the European Union: An Overview of the Current Legal Framework(s) .” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm : The Swedish Law and Informatics Institute, Stockholm University .

Eilstrup-Sangiovanni Mette , and Westerwinter Oliver . 2021 . “ The Global Governance Complexity Cube: Varieties of Institutional Complexity in Global Governance .” Review of International Organizations . 17 (2): 233 – 262 .

Erman Eva , and Furendal Markus . 2022a . “ The Global Governance of Artificial Intelligence: Some Normative Concerns .” Moral Philosophy & Politics . 9 ( 2 ): 267 – 91 . https://www.degruyter.com/document/doi/10.1515/mopp-2020-0046/html .

Erman Eva , and Furendal Markus . 2022b . “ Artificial Intelligence and the Political Legitimacy of Global Governance .” Political Studies . https://journals.sagepub.com/doi/full/10.1177/00323217221126665 .

Erman Eva . 2020 . “ A Function-Sensitive Approach to the Political Legitimacy of Global Governance .” British Journal of Political Science . 50 ( 3 ): 1001 – 24 .

Fazelpour Sina , and Danks David . 2021 . “ Algorithmic Bias: Senses, Sources, Solutions .” Philosophy Compass . 16 ( 8 ): e12760.

Finnemore Martha , and Sikkink Kathryn . 1998 . “ International Norm Dynamics and Political Change .” International Organization . 52 ( 4 ): 887 – 917 .

Floridi Luciano , Cowls Josh , Beltrametti Monica , Chatila Raja , Chazerand Patrice , Dignum Virginia , Luetge Christoph et al.  2018 . “ AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations .” Minds and Machines . 28 ( 4 ): 689 – 707 .

Frey Carl Benedikt . 2019 . The Technology Trap: Capital, Labor, and Power in the Age of Automation . Princeton, NJ : Princeton University Press .

Frieden Jeffry , Lake David A. , and Broz J. Lawrence . 2017 . International Political Economy: Perspectives on Global Power and Wealth . Sixth Edition. New York, NY : W.W. Norton .

Future of Life Institute . 2023 . “ Pause Giant AI Experiments: An Open Letter .” Accessed June 13, 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ .

Gabriel Iason . 2022 . “ Toward a Theory of Justice for Artificial Intelligence .” Daedalus . 151 ( 2 ): 218 – 31 .

Gilpin Robert . 1987 . The Political Economy of International Relations . Princeton, NJ : Princeton University Press .

Google . 2022 . “ Artificial Intelligence at Google: Our Principles .” Internet (last accessed August 25, 2023): https://ai.google/principles/ .

Greenstein Stanley . 2022 . “ Liability in the Era of Artificial Intelligence .” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm: The Swedish Law and Informatics Institute, Stockholm University .

Gruber Lloyd . 2000 . Ruling the World . Princeton, NJ : Princeton University Press .

Haas Peter . 1992 . “ Introduction: Epistemic Communities and International Policy Coordination .” International Organization . 46 ( 1 ): 1 – 36 .

Haftel Yoram Z. , and Lenz Tobias . 2021 . “ Measuring Institutional Overlap in Global Governance .” Review of International Organizations . 17(2) : 323 – 347 .

Hagendorff Thilo . 2020 . “ The Ethics of AI Ethics: an Evaluation of Guidelines .” Minds and Machines . 30 ( 1 ): 99 – 120 .

Hanegraaff Marcel , Beyers Jan , and De Bruycker Iskander . 2016 . “ Balancing Inside and Outside Lobbying: The Political Strategy of Lobbyists at Global Diplomatic Conferences .” European Journal of Political Research . 55 ( 3 ): 568 – 88 .

Hanegraaff Marcel , Braun Caelesta , De Bièvre Dirk , and Beyers Jan . 2015 . “ The Domestic and Global Origins of Transnational Advocacy: Explaining Lobbying Presence During WTO Ministerial Conferences .” Comparative Political Studies . 48 : 1591 – 621 .

Hawkins Darren G. , Lake David A. , Nielson Daniel L. , Tierney Michael J. Eds. 2006 . Delegation and Agency in International Organizations . Cambridge : Cambridge University Press .

Heikkilä Melissa . 2022a . “ AI: Decoded. IoT Under Fire—Defining AI?—Meta's New AI Supercomputer .” Accessed June 5, 2022, https://www.politico.eu/newsletter/ai-decoded/iot-under-fire-defining-ai-metas-new-ai-supercomputer-2 /.

Heikkilä Melissa 2022b . “ AI: Decoded. A Dutch Algorithm Scandal Serves a Warning to Europe—The AI Act won't Save Us .” Accessed June 5, 2022, https://www.politico.eu/newsletter/ai-decoded/a-dutch-algorithm-scandal-serves-a-warning-to-europe-the-ai-act-wont-save-us-2/ .

Henriques-Gomes Luke . 2020 . “ Robodebt: Government Admits It Will Be Forced to Refund $550 m under Botched Scheme .” The Guardian . sec. Australia news . Internet (last accessed August 25, 2023): https://www.theguardian.com/australia-news/2020/mar/27/robodebt-government-admits-it-will-be-forced-to-refund-550m-under-botched-scheme .

Hernandez Joe . 2021 . “ A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says .” National Public Radio . Internet (las accessed August 25, 2023): https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d .

High-Level Expert Group on Artificial Intelligence . 2019 . Ethics Guidelines for Trustworthy AI . Brussels: European Commission . https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai .

Horowitz Michael . 2018 . “ Artificial Intelligence, International Competition, and the Balance of Power .” Texas National Security Review . 1 ( 3 ): 37 – 57 .

Horowitz Michael C. , Allen Gregory C. , Kania Elsa B. , Scharre Paul . 2018 . Strategic Competition in an Era of Artificial Intelligence . Washington D.C. : Center for a New American Security .

Hu Krystal . 2023 . ChatGPT Sets Record for Fastest-Growing User Base—Analyst Note. Reuters , February 2, 2023, sec. Technology , Accessed June 12, 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ .

Jasanoff Sheila . 2016 . The Ethics of Invention: Technology and the Human Future . New York : Norton .

Jensen Benjamin M. , Whyte Christopher , and Cuomo Scott . 2020 . “ Algorithms at War: the Promise, Peril, and Limits of Artificial Intelligence .” International Studies Review . 22 ( 3 ): 526 – 50 .

Jobin Anna , Ienca Marcello , and Vayena Effy . 2019 . “ The Global Landscape of AI Ethics Guidelines .” Nature Machine Intelligence . 1 ( 9 ): 389 – 99 .

Johnson J. 2019 . “ Artificial intelligence & Future Warfare: Implications for International Security .” Defense & Security Analysis . 35 ( 2 ): 147 – 69 .

Jönsson Christer , and Tallberg Jonas . Forthcoming. “Opening up to Civil Society: Access, Participation, and Impact .” In Handbook on Governance in International Organizations , edited by Edgar Alistair . Cheltenham : Edward Elgar Publishing .

Kania E. B . 2017 . Battlefield singularity. Artificial Intelligence, Military Revolution, and China's Future Military Power . Washington D.C.: CNAS .

Keohane Robert O . 1984 . After Hegemony . Princeton, NJ : Princeton University Press .

Keohane Robert O. , and Victor David G. . 2011 . “ The Regime Complex for Climate Change .” Perspectives on Politics . 9 ( 1 ): 7 – 23 .

König Pascal D. and Georg Wenzelburger 2022 . “ Between Technochauvinism and Human-Centrism: Can Algorithms Improve Decision-Making in Democratic Politics? ” European Political Science , 21 ( 1 ): 132 – 49 .

Koremenos Barbara , Lipson Charles , and Snidal Duncan . 2001 . “ The Rational Design of International Institutions .” International Organization . 55 ( 4 ): 761 – 99 .

Koremenos Barbara . 2016 . The Continent of International Law: Explaining Agreement Design . Cambridge : Cambridge University Press .

Korinek Anton and Stiglitz Joseph E. 2019 . “ Artificial Intelligence and Its Implications for Income Distribution and Unemployment ” In The Economics of Artificial Intelligence: An Agenda . edited by Agrawal A. , Gans J. and Goldfarb A. . University of Chicago Press . :.

Koven Levit Janet . 2007 . “ Bottom-Up International Lawmaking: Reflections on the New Haven School of International Law .” Yale Journal of International Law . 32 : 393 – 420 .

Krasner Stephen D . 1991 . “ Global Communications and National Power: Life on the Pareto Frontier .” World Politics . 43 ( 3 ): 336 – 66 .

Kunz Martina , and hÉigeartaigh Seán Ó . 2020 . “ Artificial Intelligence and Robotization .” In The Oxford Handbook on the International Law of Global Security , edited by Geiss Robin , Melzer Nils . Oxford : Oxford University Press .

Lake David. A . 2013 . “ Theory is Dead, Long Live Theory: The End of the Great Debates and the rise of eclecticism in International Relations .” European Journal of International Relations . 19 ( 3 ): 567 – 87 .

Leung Jade . 2019 . “ Who Will Govern Artificial Intelligence?” . Learning from the History of Strategic Politics in Emerging Technologies . Doctoral dissertation . Oxford: University of Oxford .

Livingston Steven , and Mathias Risse . 2019 , “ The Future Impact of Artificial Intelligence on Humans and Human Rights .” Ethics & International Affairs . 33 ( 2 ): 141 – 58 .

Maas Matthijs M . 2019a . “ How Viable is International Arms Control for Military Artificial Intelligence? Three Lessons from Nuclear Weapons .” Contemporary Security Policy . 40 ( 3 ): 285 – 311 .

Maas Matthijs M . 2019b . “ Innovation-proof Global Governance for Military Artificial Intelligence? How I Learned to Stop Worrying, and Love the Bot ,” Journal of International Humanitarian Legal Studies . 10 ( 1 ): 129 – 57 .

Maas Matthijs M . 2021 . Artificial Intelligence Governance under Change: Foundations, Facets, Frameworks . PhD dissertation . Copenhagen: University of Copenhagen .

Martin Lisa L . 1992 . “ Interests, Power, and Multilateralism .” International Organization . 46 ( 4 ): 765 – 92 .

Martin Lisa L. , and Simmons Beth A. . 2012 . “ International Organizations and Institutions .” In Handbook of International Relations , edited by Carlsnaes Walter , Risse Thomas , Simmons Beth A. . London : SAGE .

McCarthy John , Minsky Marvin L. , Rochester Nathaniel , and Shannon Claude E . 1955 . “ A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence .” AI Magazine . 27 ( 4 ): 12 – 14 (reprint) .

Mearsheimer John J. . 1994 . “ The False Promise of International Institutions .” International Security , 19 ( 3 ): 5 – 49 .

Metz Cade . 2022 . “ Lawsuit Takes Aim at the Way a.I. Is Built .” The New York Times , November 23, Accessed June 21, 2023. https://www.nytimes.com/2022/11/23/technology/copilot-microsoft-ai-lawsuit.html . June 21, 2023 .

Misuraca Gianluca , van Noordt Colin 2022 . “ Artificial Intelligence for the Public Sector: Results of Landscaping the Use of AI in Government Across the European Union .” Government Information Quarterly . 101714 . https://doi.org/10.1016/j.giq.2022.101714 .

Müller Vincent C . 2020 . “ Ethics of Artificial Intelligence and Robotics .” In Stanford Encyclopedia of Philosophy , edited by Zalta Edward N. Internet (last accessed August 25, 2023): https://plato.stanford.edu/archives/fall2020/entries/ethics-ai/ .

Niklas Jedrzen , Dencik Lina . 2021 . “ What Rights Matter? Examining the Place of Social Rights in the EU's Artificial Intelligence Policy Debate .” Internet Policy Review . 10 ( 3 ): 1 – 29 .

OECD . 2021 . “ OECD AI Policy Observatory .” Accessed February 17, 2022. https://oecd.ai .

O'Neil Cathy . 2017 . Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy . UK : Penguin Books .

Orakhelashvili Alexander . 2019 . Akehurst's Modern Introduction to International Law , Eighth Edition . London : Routledge .

Payne K . 2021 . I, Warbot: The Dawn of Artificially Intelligent Conflict . Oxford: Oxford University Press .

Payne Kenneth . 2018 . “ Artificial Intelligence: a Revolution in Strategic Affairs?” . Survival . 60 ( 5 ): 7 – 32 .

Petman Jarna . 2017 . Autonomous Weapons Systems and International Humanitarian Law: ‘Out of the Loop’ . Helsinki : The Eric Castren Institute of International Law and Human Rights .

Powers Thomas M. , and Ganascia Jean-Gabriel . 2020 . “ The Ethics of the Ethics of AI .” In The Oxford Handbook of Ethics of AI , edited by Dubber Markus D. , Pasquale Frank , Das Sunit , 25 – 51 .. Oxford : Oxford University Press .

Rademacher Timo . 2019 . “ Artificial Intelligence and Law Enforcement .” In Regulating Artificial Intelligence , edited by Wischmeyer Thomas , Rademacher Timo , 225 – 54 .. Cham: Springer .

Radu Roxana . 2021 . “ Steering the Governance of Artificial Intelligence: National Strategies in Perspective .” Policy and Society . 40 ( 2 ): 178 – 93 .

Raustiala Kal and David G. Victor . 2004 .“ The Regime Complex for Plant Genetic Resources .” International Organization , 58 ( 2 ): 277 – 309 .

Rességuier Anaïs , and Rodrigues Rowena . 2020 . “ AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics .” Big Data & Society . 7 ( 2 ). https://doi.org/10.1177/2053951720942541 .

Risse Thomas . 2012 . “ Transnational Actors and World Politics .” In Handbook of International Relations , 2nd ed., edited by Carlsnaes Walter , Risse Thomas , Simmons Beth A. . London : Sage .

Roach Steven C. , and Eckert Amy , eds. 2020 . Moral Responsibility in Twenty-First-Century Warfare: Just War Theory and the Ethical Challenges of Autonomous Weapons Systems . Albany, NY : State University of New York .

Roberts Huw , Cowls Josh , Morley Jessica , Taddeo Mariarosaria , Wang Vincent , and Floridi Luciano . 2021 . “ The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation .” AI & Society . 36 ( 1 ): 59 – 77 .

Rosenau James N . 1999 . “ Toward an Ontology for Global Governance .” In Approaches to Global Governance Theory , edited by Hewson Martin , Sinclair Timothy J. , 287 – 301 .. Albany, NY : SUNY Press .

Rosert Elvira , and Sauer Frank . 2018 . Perspectives for Regulating Lethal Autonomous Weapons at the CCW: A Comparative Analysis of Blinding Lasers, Landmines, and LAWS . Paper prepared for the workshop “New Technologies of Warfare: Implications of Autonomous Weapons Systems for International Relations,” 5th EISA European Workshops in International Studies , Groningen , 6-9 June 2018 . Internet (last accessed August 25, 2023): https://www.academia.edu/36768452/Perspectives_for_Regulating_Lethal_Autonomous_Weapons_at_the_CCW_A_Comparative_Analysis_of_Blinding_Lasers_Landmines_and_LAWS

Rosert Elvira , and Sauer Frank . 2019 . “ Prohibiting Autonomous Weapons: Put Human Dignity First .” Global Policy . 10 ( 3 ): 370 – 5 .

Russell Stuart J. , and Norvig Peter . 2020 . Artificial Intelligence: A Modern Approach . Boston, MA : Pearson .

Ruzicka Jan . 2018 . “ Behind the Veil of Good Intentions: Power Analysis of the Nuclear Non-proliferation Regime .” International Politics . 55 ( 3 ): 369 – 85 .

Saif Hassan , Dickinson Thomas , Kastler Leon , Fernandez Miriam , and Alani Harith . 2017 . “ A Semantic Graph-Based Approach for Radicalisation Detection on Social Media .” ESWC 2017: The Semantic Web—Proceedings, Part 1 , 571 – 87 .. Cham : Springer .

Schiff Daniel , Justin Biddle , Jason Borenstein , and Kelly Laas . 2020 . “ What’s Next for AI Ethics, Policy, and Governance? A Global Overview .” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society . ACM , , .

Schmitt Lewin . 2021 . “ Mapping Global AI Governance: A Nascent Regime in a Fragmented Landscape .” AI and Ethics . 2 ( 2 ): 303 – 314 .

Scholte Jan Aart . ed. 2011 . Building Global Democracy? Civil Society and Accountable Global Governance . Cambridge : Cambridge University Press .

Schwitzgebel Eric , and Garza Mara . 2015 . “ A Defense of the Rights of Artificial Intelligences .” Midwest Studies In Philosophy . 39 ( 1 ): 98 – 119 .

Sparrow Robert . 2007 . “ Killer Robots .” Journal of Applied Philosophy . 24 ( 1 ): 62 – 77 .

Steffek Jens and Patrizia Nanz . 2008 . “ Emergent Patterns of Civil Society Participation in Global and European Governance ” In Civil Society Participation in European and Global Governance . edited by Jens Steffek , Claudia Kissling , and Patrizia Nanz Basingstoke: Palgrave Macmillan . 1–29.

Stone Randall. W . 2011 . Controlling Institutions: International Organizations and the Global Economy . Cambridge : Cambridge University Press .

Stop Killer Robots . 2023 . “ About Us.” . Accessed June 13, 2023, https://www.stopkillerrobots.org/about-us/ .

Susser Daniel , Roessler Beate , Nissenbaum Helen . 2019 . “ Technology, Autonomy, and Manipulation .” Internet Policy Review . 8 ( 2 ):. https://doi.org/10.14763/2019.2.1410 .

Taeihagh Araz . 2021 . “ Governance of Artificial Intelligence .” Policy and Society . 40 ( 2 ): 137 – 57 .

Tallberg Jonas , Sommerer Thomas , Squatrito Theresa , and Jönsson Christer . 2013 . The Opening Up of International Organizations . Cambridge : Cambridge University Press .

Tasioulas John . 2019 . “ First Steps Towards an Ethics of Robots and Artificial Intelligence .” The Journal of Practical Ethics . 7 ( 1 ): 61-95. https://doi.org/10.2139/ssrn.3172840 .

Thompson Nicholas , and Bremmer Ian . 2018. “ The AI Cold War that Threatens us all .” Wired, October 23. Internet (last accessed August 25, 2023): https://www.wired.com/story/ai-cold-war-china-coulddoom-us-all/ .

Trager Robert F. , and Luca Laura M. . 2022 . “ Killer Robots Are Here—And We Need to Regulate Them .” Foreign Policy, May 11 . Internet (last accessed August 25, 2023): https://foreignpolicy.com/2022/05/11/killer-robots-lethal-autonomous-weapons-systems-ukraine-libya-regulation/

Ubena John . 2022 . “ Can Artificial Intelligence be Regulated?” . Lessons from Legislative Techniques . In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm: The Swedish Law and Informatics Institute , Stockholm University.

Uhre Andreas Nordang . 2014 . “ Exploring the Diversity of Transnational Actors in Global Environmental Governance .” Interest Groups & Advocacy . 3 ( 1 ): 59 – 78 .

Ulnicane Inga . 2021 . “ Artificial Intelligence in the European Union: Policy, Ethics and Regulation .” In The Routledge Handbook of European Integrations , edited by Hoerber Thomas , Weber Gabriel , Cabras Ignazio . London : Routledge .

Valentini Laura . 2013 . “ Justice, Disagreement and Democracy .” British Journal of Political Science . 43 ( 1 ): 177 – 99 .

Valentini Laura . 2012 . “ Assessing the Global Order: Justice, Legitimacy, or Political Justice?” . Critical Review of International Social and Political Philosophy . 15 ( 5 ): 593 – 612 .

Vredenburgh Kate . 2022 . “ Fairness .” In The Oxford Handbook of AI Governance , edited by Bullock Justin B. , Chen Yu-Che , Himmelreich Johannes , Hudson Valerie M. , Korinek Anton , Young Matthew M. , Zhang Baobao . Oxford : Oxford University Press .

Verdier Daniel . 2022 . “ Bargaining Strategies for Governance Complex Games .” The Review of International Organizations , 17 ( 2 ): 349 – 371 .

Wagner Ben . 2018 . “ Ethics as an Escape from Regulation. From “Ethics-washing” to Ethics-shopping? .” In Being Profiled: Cogitas Ergo Sum. 10 Years of ‘Profiling the European Citizen , edited by Bayamiloglu Emre , Baraliuc Irina , Janssens Liisa , Hildebrandt Mireille Amsterdam : Amsterdam University Press .

Wahlgren Peter . 2022 . “ How to Regulate AI?” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm: The Swedish Law and Informatics Institute, Stockholm University .

Weale Albert . 1999 . Democracy . New York : St Martin's Press .

Winfield Alan F. , Michael Katina , Pitt Jeremy , and Evers Vanessa . 2019 . “ Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems .” Proceedings of the IEEE . 107 ( 3 ): 509 – 17 .

Zaidi Waqar , Dafoe Allan . 2021 . International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons . Working Paper 2021: 9 . Oxford : Centre for the Governance of AI .

Zhu J. 2022 . “ AI ethics with Chinese Characteristics? Concerns and preferred solutions in Chinese academia .” AI & Society . https://doi.org/10.1007/s00146-022-01578-w .

Zimmermann Annette , and Lee-Stronach Chad . 2022 . “ Proceed with Caution .” Canadian Journal of Philosophy . 52 ( 1 ): 6 – 25 .
