Grad Coach

Research Bias 101: What You Need To Know

By: Derek Jansen (MBA) | Expert Reviewed By: Dr Eunice Rautenbach | September 2022

If you’re new to academic research, research bias (also sometimes called researcher bias) is one of the many things you need to understand because, if you’re not careful, it can ruin the credibility of your study.

In this post, we’ll unpack the thorny topic of research bias. We’ll explain what it is, look at some common types of research bias and share some tips to help you minimise the potential sources of bias in your research.

Overview: Research Bias 101

  • What is research bias (or researcher bias)?
  • Bias #1 – Selection bias
  • Bias #2 – Analysis bias
  • Bias #3 – Procedural (admin) bias

So, what is research bias?

Well, simply put, research bias is when the researcher – that’s you – intentionally or unintentionally skews the process of a systematic inquiry, which then of course skews the outcomes of the study. In other words, research bias is what happens when you affect the results of your research by influencing how you arrive at them.

For example, if you planned to research the effects of remote working arrangements across all levels of an organisation, but your sample consisted mostly of management-level respondents, you’d run into a form of research bias. In this case, excluding input from lower-level staff means that the results of the study would be ‘biased’ in favour of a certain perspective – that of management.

Of course, if your research aims and research questions were only interested in the perspectives of managers, this sampling approach wouldn’t be a problem – but that’s not the case here, as there’s a misalignment between the research aims and the sample.

Now, it’s important to remember that research bias isn’t always deliberate or intended. Quite often, it’s just the result of a poorly designed study, or practical challenges in terms of getting a well-rounded, suitable sample. While perfect objectivity is the ideal, some level of bias is generally unavoidable when you’re undertaking a study. That said, as a savvy researcher, it’s your job to reduce potential sources of research bias as much as possible.

To minimise potential bias, you first need to know what to look for. So, next up, we’ll unpack three common types of research bias we see at Grad Coach when reviewing students’ projects: selection bias, analysis bias and procedural bias. Keep in mind that there are many different forms of bias that can creep into your research, so don’t take this as a comprehensive list – it’s just a useful starting point.


Bias #1 – Selection Bias

First up, we have selection bias. The example we looked at earlier (about only surveying management as opposed to all levels of employees) is a prime example of this type of research bias. In other words, selection bias occurs when your study’s design automatically excludes a relevant group from the research process and, therefore, negatively impacts the quality of the results.

With selection bias, the results of your study will be biased towards the group that it includes or favours, meaning that you’re likely to arrive at prejudiced results. For example, research into government policies that only includes participants who voted for a specific party is going to produce skewed results, as the views of those who voted for other parties will be excluded.

Selection bias commonly occurs in quantitative research, as the sampling strategy adopted can have a major impact on the statistical results. That said, selection bias does of course also come up in qualitative research, as there’s still plenty of room for skewed samples. So, it’s important to pay close attention to the makeup of your sample and make sure that you adopt a sampling strategy that aligns with your research aims. Of course, you’ll seldom achieve a perfect sample, and that’s okay. But you need to be aware of how your sample may be skewed and factor this into your thinking when you analyse the resultant data.


Bias #2 – Analysis Bias

Next up, we have analysis bias. Analysis bias occurs when the analysis itself emphasises or discounts certain data points so as to favour a particular result (often the researcher’s own expected result or hypothesis). In other words, analysis bias happens when you prioritise the presentation of data that supports a certain idea or hypothesis, rather than presenting all the data indiscriminately.

For example, if your study was looking into consumer perceptions of a specific product, you might present more analysis of data that reflects positive sentiment toward the product, and give less real estate to the analysis that reflects negative sentiment. In other words, you’d cherry-pick the data that suits your desired outcomes and as a result, you’d create a bias in terms of the information conveyed by the study.

Although this kind of bias is common in quantitative research, it can just as easily occur in qualitative studies, given the amount of interpretive power the researcher has. It may not be intentional or even noticed by the researcher, given the inherent subjectivity in qualitative research. As humans, we naturally search for and interpret information in a way that confirms or supports our prior beliefs or values (in psychology, this is called “confirmation bias”). So, don’t make the mistake of thinking that analysis bias is always intentional, or that you don’t need to worry about it because you’re an honest researcher – it can creep up on anyone.

To reduce the risk of analysis bias, a good starting point is to determine your data analysis strategy in as much detail as possible, before you collect your data. In other words, decide in advance how you’ll prepare the data and which analysis method you’ll use, and be aware of how different analysis methods can favour different types of data. Also, take the time to reflect on your own preconceived notions and expectations regarding the analysis outcomes (in other words, what you expect to find in the data), so that you’re fully aware of the potential influence you may have on the analysis – and can therefore, hopefully, minimise it.


Bias #3 – Procedural Bias

Last but definitely not least, we have procedural bias, which is also sometimes referred to as administration bias. Procedural bias is easy to overlook, so it’s important to understand what it is and how to avoid it. This type of bias occurs when the administration of the study, especially the data collection aspect, has an impact on either who responds or how they respond.

A practical example of procedural bias would be when participants in a study are required to provide information under some form of constraint. For example, participants might be given insufficient time to complete a survey, resulting in incomplete or hastily filled-out forms that don’t necessarily reflect how they really feel. This can happen all too easily if, for example, you innocently ask your participants to fill out a survey during their lunch break.

Another form of procedural bias can happen when you improperly incentivise participation in a study. For example, offering a reward for completing a survey or interview might incline participants to provide false or inaccurate information just to get through the process as fast as possible and collect their reward. It could also potentially attract a particular type of respondent (a freebie seeker), resulting in a skewed sample that doesn’t really reflect your demographic of interest.

The format of your data collection method can also potentially contribute to procedural bias. If, for example, you decide to host your survey or interviews online, this could unintentionally exclude people who are not particularly tech-savvy, don’t have a suitable device or just don’t have a reliable internet connection. On the flip side, some people might find in-person interviews a bit intimidating (compared to online ones, at least), or they might find the physical environment in which they’re interviewed to be uncomfortable or awkward (maybe the boss is peering into the meeting room, for example). Either way, these factors all result in less useful data.

Although procedural bias is more common in qualitative research, it can come up in any form of fieldwork where you’re actively collecting data from study participants. So, it’s important to consider how your data is being collected and how this might impact respondents. Simply put, you need to take the respondent’s viewpoint and think about the challenges they might face, no matter how small or trivial these might seem. So, it’s always a good idea to have an informal discussion with a handful of potential respondents before you start collecting data and ask for their input regarding your proposed plan upfront.


Let’s Recap

Ok, so let’s do a quick recap. Research bias refers to any instance where the researcher, or the research design, negatively influences the quality of a study’s results, whether intentionally or not.

The three common types of research bias we looked at are:

  • Selection bias – where a skewed sample leads to skewed results
  • Analysis bias – where the analysis method and/or approach leads to biased results
  • Procedural bias – where the administration of the study, especially the data collection aspect, has an impact on who responds and how they respond.

As I mentioned, there are many other forms of research bias, but we can only cover a handful here. So, be sure to familiarise yourself with as many potential sources of bias as possible to minimise the risk of research bias in your study.



Open access | Published: 11 December 2020

Quantifying and addressing the prevalence and bias of study designs in the environmental and social sciences

  • Alec P. Christie   ORCID: orcid.org/0000-0002-8465-8410 1 ,
  • David Abecasis   ORCID: orcid.org/0000-0002-9802-8153 2 ,
  • Mehdi Adjeroud 3 ,
  • Juan C. Alonso   ORCID: orcid.org/0000-0003-0450-7434 4 ,
  • Tatsuya Amano   ORCID: orcid.org/0000-0001-6576-3410 5 ,
  • Alvaro Anton   ORCID: orcid.org/0000-0003-4108-6122 6 ,
  • Barry P. Baldigo   ORCID: orcid.org/0000-0002-9862-9119 7 ,
  • Rafael Barrientos   ORCID: orcid.org/0000-0002-1677-3214 8 ,
  • Jake E. Bicknell   ORCID: orcid.org/0000-0001-6831-627X 9 ,
  • Deborah A. Buhl 10 ,
  • Just Cebrian   ORCID: orcid.org/0000-0002-9916-8430 11 ,
  • Ricardo S. Ceia   ORCID: orcid.org/0000-0001-7078-0178 12 , 13 ,
  • Luciana Cibils-Martina   ORCID: orcid.org/0000-0002-2101-4095 14 , 15 ,
  • Sarah Clarke 16 ,
  • Joachim Claudet   ORCID: orcid.org/0000-0001-6295-1061 17 ,
  • Michael D. Craig 18 , 19 ,
  • Dominique Davoult 20 ,
  • Annelies De Backer   ORCID: orcid.org/0000-0001-9129-9009 21 ,
  • Mary K. Donovan   ORCID: orcid.org/0000-0001-6855-0197 22 , 23 ,
  • Tyler D. Eddy 24 , 25 , 26 ,
  • Filipe M. França   ORCID: orcid.org/0000-0003-3827-1917 27 ,
  • Jonathan P. A. Gardner   ORCID: orcid.org/0000-0002-6943-2413 26 ,
  • Bradley P. Harris 28 ,
  • Ari Huusko 29 ,
  • Ian L. Jones 30 ,
  • Brendan P. Kelaher 31 ,
  • Janne S. Kotiaho   ORCID: orcid.org/0000-0002-4732-784X 32 , 33 ,
  • Adrià López-Baucells   ORCID: orcid.org/0000-0001-8446-0108 34 , 35 , 36 ,
  • Heather L. Major   ORCID: orcid.org/0000-0002-7265-1289 37 ,
  • Aki Mäki-Petäys 38 , 39 ,
  • Beatriz Martín 40 , 41 ,
  • Carlos A. Martín 8 ,
  • Philip A. Martin 1 , 42 ,
  • Daniel Mateos-Molina   ORCID: orcid.org/0000-0002-9383-0593 43 ,
  • Robert A. McConnaughey   ORCID: orcid.org/0000-0002-8537-3695 44 ,
  • Michele Meroni 45 ,
  • Christoph F. J. Meyer   ORCID: orcid.org/0000-0001-9958-8913 34 , 35 , 46 ,
  • Kade Mills 47 ,
  • Monica Montefalcone 48 ,
  • Norbertas Noreika   ORCID: orcid.org/0000-0002-3853-7677 49 , 50 ,
  • Carlos Palacín 4 ,
  • Anjali Pande 26 , 51 , 52 ,
  • C. Roland Pitcher   ORCID: orcid.org/0000-0003-2075-4347 53 ,
  • Carlos Ponce 54 ,
  • Matt Rinella 55 ,
  • Ricardo Rocha   ORCID: orcid.org/0000-0003-2757-7347 34 , 35 , 56 ,
  • María C. Ruiz-Delgado 57 ,
  • Juan J. Schmitter-Soto   ORCID: orcid.org/0000-0003-4736-8382 58 ,
  • Jill A. Shaffer   ORCID: orcid.org/0000-0003-3172-0708 10 ,
  • Shailesh Sharma   ORCID: orcid.org/0000-0002-7918-4070 59 ,
  • Anna A. Sher   ORCID: orcid.org/0000-0002-6433-9746 60 ,
  • Doriane Stagnol 20 ,
  • Thomas R. Stanley 61 ,
  • Kevin D. E. Stokesbury 62 ,
  • Aurora Torres 63 , 64 ,
  • Oliver Tully 16 ,
  • Teppo Vehanen   ORCID: orcid.org/0000-0003-3441-6787 65 ,
  • Corinne Watts 66 ,
  • Qingyuan Zhao 67 &
  • William J. Sutherland 1 , 42  

Nature Communications, volume 11, Article number: 6377 (2020)


Subjects: Environmental impact · Scientific community · Social sciences

Building trust in science and evidence-based decision-making depends heavily on the credibility of studies and their findings. Researchers employ many different study designs that vary in their risk of bias to evaluate the true effect of interventions or impacts. Here, we empirically quantify, on a large scale, the prevalence of different study designs and the magnitude of bias in their estimates. Randomised designs and controlled observational designs with pre-intervention sampling were used by just 23% of intervention studies in biodiversity conservation, and 36% of intervention studies in social science. We demonstrate, through pairwise within-study comparisons across 49 environmental datasets, that these types of designs usually give less biased estimates than simpler observational designs. We propose a model-based approach to combine study estimates that may suffer from different levels of study design bias, discuss the implications for evidence synthesis, and how to facilitate the use of more credible study designs.


Introduction

The ability of science to reliably guide evidence-based decision-making hinges on the accuracy and credibility of studies and their results 1 , 2 . Well-designed, randomised experiments are widely accepted to yield more credible results than non-randomised, ‘observational studies’ that attempt to approximate and mimic randomised experiments 3 . Randomisation is a key element of study design that is widely used across many disciplines because of its ability to remove confounding biases (through random assignment of the treatment or impact of interest 4 , 5 ). However, ethical, logistical, and economic constraints often prevent the implementation of randomised experiments, whereas non-randomised observational studies have become popular as they take advantage of historical data for new research questions, larger sample sizes, less costly implementation, and more relevant and representative study systems or populations 6 , 7 , 8 , 9 . Observational studies nevertheless face the challenge of accounting for confounding biases without randomisation, which has led to innovations in study design.

We define ‘study design’ as an organised way of collecting data. Importantly, we distinguish between data collection and statistical analysis (as opposed to other authors 10) because of the belief that bias introduced by a flawed design is often much more important than bias introduced by statistical analyses. This was emphasised by Light, Singer & Willett 11 (p. 5): “You can’t fix by analysis what you bungled by design…”; and Rubin 3: “Design trumps analysis.” Nevertheless, the importance of study design has often been overlooked in debates over the inability of researchers to reproduce the original results of published studies (so-called ‘reproducibility crises’ 12, 13) in favour of other issues (e.g., p-hacking 14 and Hypothesizing After Results are Known or ‘HARKing’ 15).

To demonstrate the importance of study designs, we can use the following decomposition of estimation error 16:

$$\underbrace{\hat{\beta} - \beta}_{\text{estimation error}} \;=\; \text{design bias} \;+\; \text{modelling bias} \;+\; \text{statistical noise} \qquad (1)$$

This demonstrates that even if we improve the quality of modelling and analysis (to reduce modelling bias through a better bias-variance trade-off 17) or increase sample size (to reduce statistical noise), we cannot remove the intrinsic bias introduced by the choice of study design (design bias) unless we collect the data in a different way. The importance of study design in determining the levels of bias in study results therefore cannot be overstated.

For the purposes of this study we consider six commonly used study designs; differences and connections can be visualised in Fig.  1 . There are three major components that allow us to define these designs: randomisation, sampling before and after the impact of interest occurs, and the use of a control group.

Figure 1: A hypothetical study set-up is shown where the abundance of birds in three impact and control replicates (e.g., fields represented by blocks in a row) is monitored before and after an impact (e.g., ploughing) that occurs in year zero. Different colours represent each study design and illustrate how replicates are sampled. Approaches for calculating an estimate of the true effect of the impact for each design are also shown, along with synonyms from different disciplines.

Of the non-randomised observational designs, the Before-After Control-Impact (BACI) design uses a control group and samples before and after the impact occurs (i.e., in the ‘before-period’ and the ‘after-period’). Its rationale is to explicitly account for pre-existing differences between the impact group (exposed to the impact) and control group in the before-period, which might otherwise bias the estimate of the impact’s true effect 6 , 18 , 19 .

The BACI design improves upon several other commonly used observational study designs, of which there are two uncontrolled designs: After, and Before-After (BA). An After design monitors an impact group in the after-period, while a BA design compares the state of the impact group between the before- and after-periods. Both designs can be expected to yield poor estimates of the impact’s true effect (large design bias; Equation (1)) because changes in the response variable could have occurred without the impact (e.g., due to natural seasonal changes; Fig.  1 ).

The other observational design is Control-Impact (CI), which compares the impact group and control group in the after-period (Fig.  1 ). This design may suffer from design bias introduced by pre-existing differences between the impact group and control group in the before-period; bias that the BACI design was developed to account for 20 , 21 . These differences have many possible sources, including experimenter bias, logistical and environmental constraints, and various confounding factors (variables that change the propensity of receiving the impact), but can be adjusted for through certain data pre-processing techniques such as matching and stratification 22 .

Among the randomised designs, the most commonly used are counterparts to the observational CI and BACI designs: Randomised Control-Impact (R-CI) and Randomised Before-After Control-Impact (R-BACI) designs. The R-CI design, often termed ‘Randomised Controlled Trials’ (RCTs) in medicine and hailed as the ‘gold standard’ 23, 24, removes any pre-impact differences in a stochastic sense, resulting in zero design bias (Equation (1)). Similarly, the R-BACI design should also have zero design bias, and the impact group measurements in the before-period could be used to improve the efficiency of the statistical estimator. No randomised equivalents exist of After or BA designs as they are uncontrolled.

It is important to briefly note that there is debate over two major statistical methods that can be used to analyse data collected using BACI and R-BACI designs, and which is superior at reducing modelling bias 25 (Equation (1)). These statistical methods are: (i) Differences in Differences (DiD) estimator; and (ii) covariance adjustment using the before-period response, which is an extension of Analysis of Covariance (ANCOVA) for generalised linear models — herein termed ‘covariance adjustment’ (Fig.  1 ). These estimators rely on different assumptions to obtain unbiased estimates of the impact’s true effect. The DiD estimator assumes that the control group response accurately represents the impact group response had it not been exposed to the impact (‘parallel trends’ 18 , 26 ) whereas covariance adjustment assumes there are no unmeasured confounders and linear model assumptions hold 6 , 27 .
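To make the two estimators concrete, here is a minimal sketch in R of how each might be specified for BACI-type data; the data frames and column names (`dat` in long format with `y`, `treatment` and `period`; `dat_wide` with one row per site and counts `y_before` and `y_after`) are hypothetical, not those of any particular study.

```r
# (i) Differences in Differences (DiD): the treatment x period interaction
# estimates the effect, assuming parallel trends in the two groups.
m_did <- glm(y ~ treatment * period, family = poisson, data = dat)

# (ii) Covariance adjustment: regress the after-period response on treatment,
# controlling for the before-period response as a lagged covariate,
# assuming no unmeasured confounders.
m_ca <- glm(y_after ~ treatment + y_before, family = poisson, data = dat_wide)
```

In both cases the coefficient of interest (the interaction term for DiD, the treatment term for covariance adjustment) is on the log scale via the Poisson log link.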

From both theory and Equation (1), with similar sample sizes, randomised designs (R-BACI and R-CI) are expected to be less biased than controlled, observational designs with sampling in the before-period (BACI), which in turn should be superior to observational designs without sampling in the before-period (CI) or without a control group (BA and After designs 7 , 28 ). Between randomised designs, we might expect that an R-BACI design performs better than a R-CI design because utilising extra data before the impact may improve the efficiency of the statistical estimator by explicitly characterising pre-existing differences between the impact group and control group.

Given the likely differences in bias associated with different study designs, concerns have been raised over the use of poorly designed studies in several scientific disciplines 7 , 29 , 30 , 31 , 32 , 33 , 34 , 35 . Some disciplines, such as the social and medical sciences, commonly undertake direct comparisons of results obtained by randomised and non-randomised designs within a single study 36 , 37 , 38 or between multiple studies (between-study comparisons 39 , 40 , 41 ) to specifically understand the influence of study designs on research findings. However, within-study comparisons are limited in their scope (e.g., a single study 42 , 43 ) and between-study comparisons can be confounded by variability in context or study populations 44 . Overall, we lack quantitative estimates of the prevalence of different study designs and the levels of bias associated with their results.

In this work, we aim to first quantify the prevalence of different study designs in the social and environmental sciences. To fill this knowledge gap, we take advantage of summaries for several thousand biodiversity conservation intervention studies in the Conservation Evidence database 45 ( www.conservationevidence.com ) and social intervention studies in systematic reviews by the Campbell Collaboration ( www.campbellcollaboration.org ). We then quantify the levels of bias in estimates obtained by different study designs (R-BACI, R-CI, BACI, BA, and CI) by applying a hierarchical model to approximately 1000 within-study comparisons across 49 raw environmental datasets from a range of fields. We show that R-BACI, R-CI and BACI designs are poorly represented in studies testing biodiversity conservation and social interventions, and that these types of designs tend to give less biased estimates than simpler observational designs. We propose a model-based approach to combine study estimates that may suffer from different levels of study design bias, discuss the implications for evidence synthesis, and how to facilitate the use of more credible study designs.

Results

Prevalence of study designs

We found that the biodiversity-conservation (Conservation Evidence) and social-science (Campbell Collaboration) literature had similarly high proportions of intervention studies that used CI designs and After designs, but low proportions that used R-BACI, BACI, or BA designs (Fig. 2). There were slightly higher proportions of R-CI designs used by intervention studies in social-science systematic reviews than in the biodiversity-conservation literature (Fig. 2). The R-BACI, R-CI, and BACI designs made up 23% of intervention studies for biodiversity conservation, and 36% of intervention studies for social science.

Figure 2: Intervention studies from the biodiversity-conservation literature were screened from the Conservation Evidence database (n = 4,260 studies) and studies from the social-science literature were screened from 32 Campbell Collaboration systematic reviews (n = 1,009 studies; note that studies excluded by these reviews based on their study design were still counted). Percentages for the social-science literature were calculated for each systematic review (blue data points) and then averaged across all 32 systematic reviews (blue bars and black vertical lines represent means and 95% confidence intervals, respectively). Percentages for the biodiversity-conservation literature are absolute values (shown as green bars) calculated from the entire Conservation Evidence database (after excluding any reviews). Source data are provided as a Source Data file. BA = before-after; CI = control-impact; BACI = before-after-control-impact; R-BACI = randomised BACI; R-CI = randomised CI.

Influence of different study designs on study results

In non-randomised datasets, we found that estimates of BACI (with covariance adjustment) and CI designs were very similar, while the point estimates for most other designs often differed substantially in their magnitude and sign. We found similar results in randomised datasets for R-BACI (with covariance adjustment) and R-CI designs. For ~30% of responses, in both non-randomised and randomised datasets, study design estimates differed in their statistical significance (i.e., p < 0.05 versus p ≥ 0.05), except for estimates of (R-)BACI (with covariance adjustment) and (R-)CI designs (Table 1; Fig. 3). It was rare for the 95% confidence intervals of different designs’ estimates to not overlap – except when comparing estimates of BA designs to (R-)BACI (with covariance adjustment) and (R-)CI designs (Table 1). It was even rarer for estimates of different designs to have significantly different signs (i.e., one estimate with entirely negative confidence intervals versus one with entirely positive confidence intervals; Table 1, Fig. 3). Overall, point estimates often differed greatly in their magnitude and, to a lesser extent, in their sign between study designs, but did not differ as greatly when accounting for the uncertainty around point estimates – except in terms of their statistical significance.

Figure 3: t-statistics were obtained from two-sided t-tests of estimates obtained by each design for different responses in each dataset using Generalised Linear Models (see Methods). For randomised datasets, BACI and CI axis labels refer to R-BACI and R-CI designs (denoted by ‘R-’). DiD = Difference in Differences; CA = covariance adjustment. Lines at t-statistic values of 1.96 denote boundaries between cells, and colours of points indicate differences in direction and statistical significance (p < 0.05; grey = same sign and significance, orange = same sign but difference in significance, red = different sign and significance). Numbers refer to the number of responses in each cell. Source data are provided as a Source Data file. BA = Before-After; CI = Control-Impact; BACI = Before-After-Control-Impact.

Levels of bias in estimates of different study designs

We modelled study design bias using a random effect across datasets in a hierarchical Bayesian model; σ is the standard deviation of the bias term, and assuming bias is randomly distributed across datasets and is on average zero, larger values of σ will indicate a greater magnitude of bias (see Methods). We found that, for randomised datasets, estimates of both R-BACI (using covariance adjustment; CA) and R-CI designs were affected by negligible amounts of bias (very small values of σ; Table  2 ). When the R-BACI design used the DiD estimator, it suffered from slightly more bias (slightly larger values of σ), whereas the BA design had very high bias when applied to randomised datasets (very large values of σ; Table  2 ). There was a highly positive correlation between the estimates of R-BACI (using covariance adjustment) and R-CI designs (Ω[R-BACI CA, R-CI] was close to 1; Table  2 ). Estimates of R-BACI using the DiD estimator were also positively correlated with estimates of R-BACI using covariance adjustment and R-CI designs (moderate positive mean values of Ω[R-BACI CA, R-BACI DiD] and Ω[R-BACI DiD, R-CI]; Table  2 ).

For non-randomised datasets, controlled designs (BACI and CI) were substantially less biased (far smaller values of σ) than the uncontrolled BA design (Table  2 ). A BACI design using the DiD estimator was slightly less biased than the BACI design using covariance adjustment, which was, in turn, slightly less biased than the CI design (Table  2 ).

Standard errors estimated by the hierarchical Bayesian model were reasonably accurate for the randomised datasets (see λ in Methods and Table  2 ), whereas there was some underestimation of standard errors and lack-of-fit for non-randomised datasets.

Discussion

Our approach provides a principled way to quantify the levels of bias associated with different study designs. We found that randomised study designs (R-BACI and R-CI) and observational BACI designs are poorly represented in the environmental and social sciences; collectively, descriptive case studies (the After design), the uncontrolled, observational BA design, and the controlled, observational CI design made up a substantially greater proportion of intervention studies (Fig. 2). And yet R-BACI, R-CI and BACI designs were found to be quantifiably less biased than other observational designs.

As expected, the R-CI and R-BACI designs (using a covariance adjustment estimator) performed well; the R-BACI design using a DiD estimator performed slightly less well, probably because the differencing of pre-impact data by this estimator may introduce additional statistical noise compared to covariance adjustment, which controls for these data using a lagged regression variable. Of the observational designs, the BA design performed very poorly (both when analysing randomised and non-randomised data) as expected, being uncontrolled and therefore prone to severe design bias 7, 28. The CI design also tended to be more biased than the BACI design (using a DiD estimator) due to pre-existing differences between the impact and control groups. For BACI designs, we recommend that the underlying assumptions of DiD and CA estimators are carefully considered before choosing to apply them to data collected for a specific research question 6, 27. Their levels of bias were negligibly different, and their known bracketing relationship suggests they will typically give estimates with the same sign, although their tendency to over- or underestimate the true effect will depend on how well the underlying assumptions of each are met (most notably, parallel trends for DiD and no unmeasured confounders for CA; see Introduction) 6, 27. Overall, these findings demonstrate the power of large within-study comparisons to directly quantify differences in the levels of bias associated with different designs.

We must acknowledge that the assumptions of our hierarchical model (that the bias for each design (j) is on average zero and normally distributed) cannot be verified without gold standard randomised experiments and that, for observational designs, the model was overdispersed (potentially due to underestimation of statistical error by GLM(M)s or positively correlated design biases). The exact values of our hierarchical model should therefore be treated with appropriate caution, and future research is needed to refine and improve our approach to quantify these biases more precisely. Responses within datasets may also not be independent as multiple species could interact; therefore, the estimates analysed by our hierarchical model are statistically dependent on each other, and although we tried to account for this using a correlation matrix (see Methods, Eq. ( 3 )), this is a limitation of our model. We must also recognise that we collated datasets using non-systematic searches 46 , 47 and therefore our analysis potentially exaggerates the intrinsic biases of observational designs (i.e., our data may disproportionately reflect situations where the BACI design was chosen to account for confounding factors). We nevertheless show that researchers were wise to use the BACI design because it was less biased than CI and BA designs across a wide range of datasets from various environmental systems and locations. Without undertaking costly and time-consuming pre-impact sampling and pilot studies, researchers are also unlikely to know the levels of bias that could affect their results. Finally, we did not consider sample size, but it is likely that researchers might use larger sample sizes for CI and BA designs than BACI designs. This is, however, unlikely to affect our main conclusions because larger sample sizes could increase type I errors (false positive rate) by yielding more precise, but biased estimates of the true effect 28 .

Our analyses provide several empirically supported recommendations for researchers designing future studies to assess an impact of interest. First, using a controlled and/or randomised design (if possible) was shown to strongly reduce the level of bias in study estimates. Second, when observational designs must be used (as randomisation is not feasible or too costly), we urge researchers to choose the BACI design over other observational designs—and when that is not possible, to choose the CI design over the uncontrolled BA design. We acknowledge that limited resources, short funding timescales, and ethical or logistical constraints 48 may force researchers to use the CI design (if randomisation and pre-impact sampling are impossible) or the BA design (if appropriate controls cannot be found 28 ). To facilitate the usage of less biased designs, longer-term investments in research effort and funding are required 43 . Far greater emphasis on study designs in statistical education 49 and better training and collaboration between researchers, practitioners and methodologists, is needed to improve the design of future studies; for example, potentially improving the CI design by pairing or matching the impact group and control group 22 , or improving the BA design using regression discontinuity methods 48 , 50 . Where the choice of study design is limited, researchers must transparently communicate the limitations and uncertainty associated with their results.

Our findings also have wider implications for evidence synthesis, specifically the exclusion of certain observational study designs from syntheses (the ‘rubbish in, rubbish out’ concept 51, 52). We believe that observational designs should be included in systematic reviews and meta-analyses, but that careful adjustments are needed to account for their potential biases. Exclusion of observational studies often results from subjective, checklist-based ‘Risk of Bias’ or quality assessments of studies (e.g., AMSTAR 2 53, ROBINS-I 54, or GRADE 55) that are not data-driven and often neglect to identify the actual direction, or quantify the magnitude, of possible bias introduced by observational studies when rating the quality of a review’s recommendations. We also found that there was a small proportion of studies that used randomised designs (R-CI or R-BACI) or observational BACI designs (Fig. 2), suggesting that systematic reviews and meta-analyses risk excluding a substantial proportion of the literature and limiting the scope of their recommendations if such exclusion criteria are used 32, 56, 57. This problem is compounded by the fact that, at least in conservation science, studies using randomised or BACI designs are strongly concentrated in Europe, Australasia, and North America 31. Systematic reviews that rely on these few types of study designs are therefore likely to fail to provide decision makers outside of these regions with the locally relevant recommendations that they prefer 58. The Covid-19 pandemic has highlighted the difficulties in making locally relevant evidence-based decisions using studies conducted in different countries with different demographics and cultures, and on patients of different ages, ethnicities, genetics, and underlying health issues 59. This problem is also acute for decision-makers working on biodiversity conservation in tropical regions, where the need for conservation is arguably the greatest (i.e., where most of Earth’s biodiversity exists 60) but where decision-makers either have to rely on very few well-designed studies that are not locally relevant (i.e., have low generalisability), or on more studies that are locally relevant but less well-designed 31, 32. Either option could lead decision-makers to take ineffective or inefficient decisions. In the long term, improving the quality and coverage of scientific evidence and evidence syntheses across the world will help solve these issues, but shorter-term solutions to synthesising patchy evidence bases are required.

Our work furthers sorely needed research on how to combine evidence from studies that vary greatly in their design. Our approach is an alternative to conventional meta-analyses which tend to only weight studies by their sample size or the inverse of their variance 61 ; when studies vary greatly in their study design, simply weighting by inverse variance or sample size is unlikely to account for different levels of bias introduced by different study designs (see Equation (1)). For example, a BA study could receive a larger weight if it had lower variance than a BACI study, despite our results suggesting a BA study usually suffers from greater design bias. Our model provides a principled way to weight studies by both their variance and the likely amount of bias introduced by their study design; it is therefore a form of ‘bias-adjusted meta-analysis’ 62 , 63 , 64 , 65 , 66 . However, instead of relying on elicitation of subjective expert opinions on the bias of each study, we provide a data-driven, empirical quantification of study biases – an important step that was called for to improve such meta-analytic approaches 65 , 66 .

Future research is needed to refine our methodology, but our empirically grounded form of bias-adjusted meta-analysis could be implemented as follows: 1.) collate studies for the same true effect, their effect size estimates, standard errors, and the type of study design; 2.) enter these data into our hierarchical model, where effect size estimates share the same intercept (the true causal effect), a random effect term due to design bias (whose variance is estimated by the method we used), and a random effect term for statistical noise (whose variance is estimated by the reported standard error of studies); 3.) fit this model and estimate the shared intercept/true effect. Heuristically, this can be thought of as weighting studies by both their design bias and their sampling variance and could be implemented on a dynamic meta-analysis platform (such as metadataset.com 67 ). This approach has substantial potential to develop evidence synthesis in fields (such as biodiversity conservation 31 , 32 ) with patchy evidence bases, where reliably synthesising findings from studies that vary greatly in their design is a fundamental and unavoidable challenge.
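Heuristically, that workflow might look as follows in R; the effect sizes, standard errors and design labels below are invented purely for illustration.

```r
# Step 1: collate estimates of the same true effect, their standard errors,
# and the design used by each study (hypothetical numbers).
studies <- data.frame(
  effect = c(0.42, 0.18, -0.05),   # log response ratios
  se     = c(0.10, 0.22, 0.31),
  design = c("BACI", "CI", "BA")
)

# Steps 2-3: fit the hierarchical model (see Methods), in which every estimate
# shares an intercept mu (the true effect) plus a design-specific bias term:
#   effect_k ~ Normal(mu + gamma_k, (lambda * se_k)^2)
#   gamma_k  ~ Normal(0, sigma[design_k]^2)
# and report the posterior for mu as the bias-adjusted pooled effect.
```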

Our study has highlighted an often overlooked aspect of debates over scientific reproducibility: that the credibility of studies is fundamentally determined by study design. Testing the effectiveness of conservation and social interventions is undoubtedly of great importance given the current challenges facing biodiversity and society in general and the serious need for more evidence-based decision-making 1 , 68 . And yet our findings suggest that quantifiably less biased study designs are poorly represented in the environmental and social sciences. Greater methodological training of researchers and funding for intervention studies, as well as stronger collaborations between methodologists and practitioners is needed to facilitate the use of less biased study designs. Better communication and reporting of the uncertainty associated with different study designs is also needed, as well as more meta-research (the study of research itself) to improve standards of study design 69 . Our hierarchical model provides a principled way to combine studies using a variety of study designs that vary greatly in their risk of bias, enabling us to make more efficient use of patchy evidence bases. Ultimately, we hope that researchers and practitioners testing interventions will think carefully about the types of study designs they use, and we encourage the evidence synthesis community to embrace alternative methods for combining evidence from heterogeneous sets of studies to improve our ability to inform evidence-based decision-making in all disciplines.

Methods

Quantifying the use of different designs

We compared the use of different study designs in the literature that quantitatively tested interventions between the fields of biodiversity conservation (4,260 studies collated by Conservation Evidence 45 ) and social science (1,009 studies found by 32 systematic reviews produced by the Campbell Collaboration: www.campbellcollaboration.org ).

Conservation Evidence is a database of intervention studies, each of which has quantitatively tested a conservation intervention (e.g., sowing strips of wildflower seeds on farmland to benefit birds), that is continuously being updated through comprehensive, manual searches of conservation journals for a wide range of fields in biodiversity conservation (e.g., amphibian, bird, peatland, and farmland conservation 45 ). To obtain the proportion of studies that used each design from Conservation Evidence, we simply extracted the type of study design from each study in the database in 2019 – the study design was determined using a standardised set of criteria; reviews were not included (Table  3 ). We checked if the designs reported in the database accurately reflected the designs in the original publication and found that for a random subset of 356 studies, 95.1% were accurately described.

Each systematic review produced by the Campbell Collaboration collates and analyses studies that test a specific social intervention; we collated systematic reviews that tested a variety of social interventions across several fields in the social sciences, including education, crime and justice, international development and social welfare (Supplementary Data  1 ). We retrieved systematic reviews produced by the Campbell Collaboration by searching their website ( www.campbellcollaboration.org ) for reviews published between 2013‒2019 (as of 8th September 2019) — we limited the date range as we could not go through every review. As we were interested in the use of study designs in the wider social-science literature, we only considered reviews (32 in total) that contained sufficient information on the number of included and excluded studies that used different study designs. Studies may be excluded from systematic reviews for several reasons, such as their relevance to the scope of the review (e.g., testing a relevant intervention) and their study design. We only considered studies if the sole reason for their exclusion from the systematic review was their study design – i.e., reviews clearly reported that the study was excluded because it used a particular study design, and not because of any other reason, such as its relevance to the review’s research questions. We calculated the proportion of studies that used each design in each systematic review (using the same criteria as for the biodiversity-conservation literature – see Table  3 ) and then averaged these proportions across all systematic reviews.
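In code, this two-step aggregation might look something like the following sketch; the data frame `social` (one row per study, with columns `review_id` and `design`) is a hypothetical stand-in for the screened review data.

```r
library(dplyr)
library(tidyr)

# Proportion of studies using each design within each systematic review,
# then averaged across reviews (so each review contributes equally)
design_props <- social %>%
  count(review_id, design) %>%
  complete(review_id, design, fill = list(n = 0)) %>%  # absent designs = 0
  group_by(review_id) %>%
  mutate(prop = n / sum(n)) %>%
  group_by(design) %>%
  summarise(mean_prop = mean(prop), .groups = "drop")
```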

Within-study comparisons of different study designs

We wanted to make direct within-study comparisons between the estimates obtained by different study designs (e.g., see 38 , 70 , 71 for single within-study comparisons) for many different studies. If a dataset contains data collected using a BACI design, subsets of these data can be used to mimic the use of other study designs (a BA design using only data for the impact group, and a CI design using only data collected after the impact occurred). Similarly, if data were collected using a R-BACI design, subsets of these data can be used to mimic the use of a BA design and a R-CI design. Collecting BACI and R-BACI datasets would therefore allow us to make direct within-study comparisons of the estimates obtained by these designs.

We collated BACI and R-BACI datasets by searching the Web of Science Core Collection 72, which included the following citation indexes: Science Citation Index Expanded (SCI-EXPANDED) 1900-present; Social Sciences Citation Index (SSCI) 1900-present; Arts & Humanities Citation Index (A&HCI) 1975-present; Conference Proceedings Citation Index - Science (CPCI-S) 1990-present; Conference Proceedings Citation Index - Social Science & Humanities (CPCI-SSH) 1990-present; Book Citation Index - Science (BKCI-S) 2008-present; Book Citation Index - Social Sciences & Humanities (BKCI-SSH) 2008-present; Emerging Sources Citation Index (ESCI) 2015-present; Current Chemical Reactions (CCR-EXPANDED) 1985-present (includes Institut National de la Propriete Industrielle structure data back to 1840); Index Chemicus (IC) 1993-present. The following search terms were used: [‘BACI’] OR [‘Before-After Control-Impact’], and the search was conducted on the 18th December 2017. Our search returned 674 results, which we then refined by selecting only ‘Article’ as the document type and using only the following Web of Science Categories: ‘Ecology’, ‘Marine Freshwater Biology’, ‘Biodiversity Conservation’, ‘Fisheries’, ‘Oceanography’, ‘Forestry’, ‘Zoology’, ‘Ornithology’, ‘Biology’, ‘Plant Sciences’, ‘Entomology’, ‘Remote Sensing’, ‘Toxicology’ and ‘Soil Science’. This left 579 results, which we then restricted to articles published since 2002 (15 years prior to the search) to give us a realistic opportunity to obtain the raw datasets, thus reducing this number to 542. We were able to access the abstracts of 521 studies and excluded any that did not test the effect of an environmental intervention or threat using an R-BACI or BACI design with response measures related to the abundance (e.g., density, counts, biomass, cover), reproduction (reproductive success) or size (body length, body mass) of animals or plants. Many studies did not test a relevant metric (e.g., they measured species richness), did not use a BACI or R-BACI design, or did not test the effect of an intervention or threat — this left 96 studies for which we contacted all corresponding authors to ask for the raw dataset. We were able to fully access 54 raw datasets, but upon closer inspection we found that three of these datasets either did not use a BACI design, did not use the metrics we specified, or did not provide sufficient data for our analyses. This left 51 datasets in total that we used in our preliminary analyses (Supplementary Data 2).

All the datasets were originally collected to evaluate the effect of an environmental intervention or impact. Most of them contained multiple response variables (e.g., different measures for different species, such as abundance or density for species A, B, and C). Within a dataset, we use the term “response” to refer to the estimation of the true effect of an impact on one response variable. There were 1,968 responses in total across 51 datasets. We then excluded 932 responses (resulting in the exclusion of one dataset) where one or more of the four time-period and treatment subsets (Before Control, Before Impact, After Control, and After Impact data) consisted of entirely zero measurements, or two or more of these subsets had more than 90% zero measurements. We also excluded one further dataset as it was the only one to not contain repeated measurements at sites in both the before- and after-periods. This was necessary to generate reliable standard errors when modelling these data. We modelled the remaining 1,036 responses from across 49 datasets (Supplementary Table  1 ).
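A minimal sketch of this exclusion rule, assuming a long-format data frame `dat` with hypothetical columns `response_id`, `period` (before/after), `treatment` (control/impact) and measurement `y`:

```r
library(dplyr)

# Proportion of zero measurements in each of the four period x treatment
# subsets (Before/After x Control/Impact) for every response
zero_props <- dat %>%
  group_by(response_id, period, treatment) %>%
  summarise(prop_zero = mean(y == 0), .groups = "drop")

# Keep a response only if no subset is entirely zeros and fewer than
# two subsets are more than 90% zeros
keep_ids <- zero_props %>%
  group_by(response_id) %>%
  summarise(any_all_zero  = any(prop_zero == 1),
            n_mostly_zero = sum(prop_zero > 0.9), .groups = "drop") %>%
  filter(!any_all_zero, n_mostly_zero < 2) %>%
  pull(response_id)

dat_kept <- filter(dat, response_id %in% keep_ids)
```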

We applied each study design to the appropriate components of each dataset using Generalised Linear Models (GLMs 73 , 74 ) because of their generality and ability to implement the statistical estimators of many different study designs. The model structure of GLMs was adjusted for each response in each dataset based on the study design specified, response measure and dataset structure (Supplementary Table  2 ). We quantified the effect of the time period for the BA design (After vs Before the impact) and the effect of the treatment type for the CI and R-CI designs (Impact vs Control) on the response variable (Supplementary Table  2 ). For BACI and R-BACI designs, we implemented two statistical estimators: 1.) a DiD estimator that estimated the true effect using an interaction term between time and treatment type; and 2.) a covariance adjustment estimator that estimated the true effect using a term for the treatment type with a lagged variable (Supplementary Table  2 ).

As there were large numbers of responses, we used general a priori rules to specify models for each response; this may have led to some model misspecification, but was unlikely to have substantially affected our pairwise comparison of estimates obtained by different designs. The error family of each GLM was specified based on the nature of the measure used and preliminary data exploration: count measures (e.g., abundance) = poisson; density measures (e.g., biomass or abundance per unit area) = quasipoisson, as data for these measures tended to be overdispersed; percentage measures (e.g., percentage cover) = quasibinomial; and size measures (e.g., body length) = gaussian.

We treated each year or season in which data were collected as independent observations because the implementation of a seasonal term in models is likely to vary on a case-by-case basis; this will depend on the research questions posed by each study and was not feasible for us to consider given the large number of responses we were modelling. The log link function was used for all models to generate a standardised log response ratio as an estimate of the true effect for each response; a fixed effect coefficient (a variable named treatment status; Supplementary Table  2 ) was used to estimate the log response ratio 61 . If the response had at least ten ‘sites’ (independent sampling units) and two measurements per site on average, we used the random effects of subsample (replicates within a site) nested within site to capture the dependence within a site and subsample (i.e., a Generalised Linear Mixed Model or GLMM 73 , 74 was implemented instead of a GLM); otherwise we fitted a GLM with only the fixed effects (Supplementary Table  2 ).
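Taken together, the design-specific model structures might be specified roughly as follows (the DiD and covariance adjustment estimators for the (R-)BACI designs were sketched earlier; the column names `y`, `treatment`, `period`, `site` and `subsample` are again assumptions):

```r
library(lme4)

# BA design: impact group only, comparing the before- and after-periods
m_ba <- glm(y ~ period, family = poisson,
            data = subset(dat, treatment == "impact"))

# (R-)CI design: after-period only, comparing impact and control groups
m_ci <- glm(y ~ treatment, family = poisson,
            data = subset(dat, period == "after"))

# With >= 10 sites and >= 2 measurements per site on average, include
# subsample nested within site as random effects (a GLMM instead of a GLM)
m_glmm <- glmer(y ~ treatment * period + (1 | site / subsample),
                family = poisson, data = dat)
```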

We fitted all models using R version 3.5.1 75 , and packages lme4 76 and MASS 77 . Code to replicate all analyses is available (see Data and Code Availability). We compared the estimates obtained using each study design (both in terms of point estimates and estimates with associated standard error) by their magnitude and sign.

A model-based quantification of the bias in study design estimates

We used a hierarchical Bayesian model motivated by the decomposition in Equation (1) to quantify the bias in different study design estimates. This model takes the estimated effects of impacts and their standard errors as inputs. Let \(\hat\beta_{ij}\) be the true effect estimator in response \(i\) using design \(j\) and \(\hat\sigma_{ij}\) be its estimated standard error from the corresponding GLM or GLMM. Our hierarchical model assumes:

$$\hat\beta_{ij} = \beta_i + \gamma_{ij} + \varepsilon_{ij}, \qquad \gamma_{ij} \sim N(0, \sigma_j^2) \qquad (2)$$

where \(\beta_i\) is the true effect for response \(i\), \(\gamma_{ij}\) is the bias of design \(j\) in response \(i\), and \(\varepsilon_{ij}\) is the sampling noise of the statistical estimator. Although \(\gamma_{ij}\) technically incorporates both the design bias and any misspecification (modelling) bias due to using GLMs or GLMMs (Equation (1)), we expect the modelling bias to be much smaller than the design bias 3, 11. We assume the statistical errors \(\varepsilon_i = (\varepsilon_{i1}, \ldots, \varepsilon_{iJ})^\top\) within a response are related to the estimated standard errors through the following joint distribution:

$$\varepsilon_i \sim N\!\left(\mathbf{0},\; \lambda\, D_i\, \Omega\, D_i\right), \qquad D_i = \mathrm{diag}(\hat\sigma_{i1}, \ldots, \hat\sigma_{iJ}) \qquad (3)$$

where \(\Omega\) is the correlation matrix for the different estimators in the same response and \(\lambda\) is a scaling factor to account for possible over/under-estimation of the standard errors.

This model effectively quantifies the bias of design \(j\) using the value of \(\sigma_j\) (larger values = more bias) by accounting for within-response correlations using the correlation matrix \(\Omega\) and for possible under-estimation of the standard error using \(\lambda\). We ensured that the prior distributions we used had very large variances so they would have a very small effect on the posterior distribution; accordingly, we placed diffuse priors on the variance parameters.

We fitted the hierarchical Bayesian model in R version 3.5.1 using the Bayesian inference package rstan 78 .
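A simplified rstan sketch of this model is given below. It assumes \(\Omega\) is the identity matrix (i.e., it ignores within-response correlation), treats \(\lambda\) as a direct multiplier of the reported standard errors, and uses half-Cauchy priors as stand-ins for the diffuse priors, so it illustrates the structure of the model rather than reproducing the exact implementation.

```r
library(rstan)

model_code <- "
data {
  int<lower=1> N;                   // (response, design) estimates
  int<lower=1> R;                   // number of responses
  int<lower=1> J;                   // number of designs
  vector[N] beta_hat;               // estimated effects (log response ratios)
  vector<lower=0>[N] se_hat;        // their estimated standard errors
  int<lower=1, upper=R> resp[N];    // response index of each estimate
  int<lower=1, upper=J> design[N];  // design index of each estimate
}
parameters {
  vector[R] beta;                   // true effect per response
  vector<lower=0>[J] sigma;         // design-bias SD per design
  vector[N] gamma;                  // realised design bias per estimate
  real<lower=0> lambda;             // scaling of the reported SEs
}
model {
  sigma ~ cauchy(0, 5);             // diffuse priors (assumed form)
  lambda ~ cauchy(0, 5);
  gamma ~ normal(0, sigma[design]); // Eq. (2): bias ~ N(0, sigma_j^2)
  beta_hat ~ normal(beta[resp] + gamma, lambda * se_hat);
}
"

# stan_data is a hypothetical list matching the data block above
fit <- stan(model_code = model_code, data = stan_data,
            chains = 4, iter = 2000)
print(fit, pars = c("sigma", "lambda"))
```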

Data availability

All data analysed in the current study are available from Zenodo, https://doi.org/10.5281/zenodo.3560856. Source data are provided with this paper.

Code availability

All code used in the current study is available from Zenodo, https://doi.org/10.5281/zenodo.3560856 .

References

1. Donnelly, C. A. et al. Four principles to make evidence synthesis more useful for policy. Nature 558, 361–364 (2018).

2. McKinnon, M. C., Cheng, S. H., Garside, R., Masuda, Y. J. & Miller, D. C. Sustainability: map the evidence. Nature 528, 185–187 (2015).

3. Rubin, D. B. For objective causal inference, design trumps analysis. Ann. Appl. Stat. 2, 808–840 (2008).

4. Peirce, C. S. & Jastrow, J. On small differences in sensation. Mem. Natl Acad. Sci. 3, 73–83 (1884).

5. Fisher, R. A. Statistical Methods for Research Workers (Oliver and Boyd, 1925).

6. Angrist, J. D. & Pischke, J.-S. Mostly Harmless Econometrics: An Empiricist’s Companion (Princeton University Press, 2008).

7. de Palma, A. et al. Challenges with inferring how land-use affects terrestrial biodiversity: study design, time, space and synthesis. In Next Generation Biomonitoring: Part 1, 163–199 (Elsevier Ltd., 2018).

8. Sagarin, R. & Pauchard, A. Observational approaches in ecology open new ground in a changing world. Front. Ecol. Environ. 8, 379–386 (2010).

9. Shadish, W. R., Cook, T. D. & Campbell, D. T. Experimental and Quasi-Experimental Designs for Generalized Causal Inference (Houghton Mifflin, 2002).

10. Rosenbaum, P. R. Design of Observational Studies, Vol. 10 (Springer, 2010).

11. Light, R. J., Singer, J. D. & Willett, J. B. By Design: Planning Research on Higher Education (Harvard University Press, 1990).

12. Ioannidis, J. P. A. Why most published research findings are false. PLOS Med. 2, e124 (2005).

13. Open Science Collaboration. Estimating the reproducibility of psychological science. Science 349, aac4716 (2015).

14. John, L. K., Loewenstein, G. & Prelec, D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 23, 524–532 (2012).

15. Kerr, N. L. HARKing: hypothesizing after the results are known. Personal. Soc. Psychol. Rev. 2, 196–217 (1998).

16. Zhao, Q., Keele, L. J. & Small, D. S. Comment: will competition-winning methods for causal inference also succeed in practice? Stat. Sci. 34, 72–76 (2019).

17. Friedman, J., Hastie, T. & Tibshirani, R. The Elements of Statistical Learning, Vol. 1 (Springer Series in Statistics, 2001).

Underwood, A. J. Beyond BACI: experimental designs for detecting human environmental impacts on temporal variations in natural populations. Mar. Freshw. Res. 42 , 569–587 (1991).

Stewart-Oaten, A. & Bence, J. R. Temporal and spatial variation in environmental impact assessment. Ecol. Monogr. 71 , 305–339 (2001).

Eddy, T. D., Pande, A. & Gardner, J. P. A. Massive differential site-specific and species-specific responses of temperate reef fishes to marine reserve protection. Glob. Ecol. Conserv. 1 , 13–26 (2014).

Sher, A. A. et al. Native species recovery after reduction of an invasive tree by biological control with and without active removal. Ecol. Eng. 111 , 167–175 (2018).

Imbens, G. W. & Rubin, D. B. Causal Inference in Statistics, Social, and Biomedical Sciences . (Cambridge University Press, 2015).

Greenhalgh, T. How to read a paper: the basics of Evidence Based Medicine . (John Wiley & Sons, Ltd, 2019).

Salmond, S. S. Randomized Controlled Trials: Methodological Concepts and Critique. Orthopaedic Nursing 27 , (2008).

Geijzendorffer, I. R. et al. How can global conventions for biodiversity and ecosystem services guide local conservation actions? Curr. Opin. Environ. Sustainability 29 , 145–150 (2017).

Dimick, J. B. & Ryan, A. M. Methods for evaluating changes in health care policy. JAMA 312 , 2401 (2014).

Article   CAS   PubMed   Google Scholar  

Ding, P. & Li, F. A bracketing relationship between difference-in-differences and lagged-dependent-variable adjustment. Political Anal. 27 , 605–615 (2019).

Christie, A. P. et al. Simple study designs in ecology produce inaccurate estimates of biodiversity responses. J. Appl. Ecol. 56 , 2742–2754 (2019).

Watson, M. et al. An analysis of the quality of experimental design and reliability of results in tribology research. Wear 426–427 , 1712–1718 (2019).

Kilkenny, C. et al. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS ONE 4 , e7824 (2009).

Christie, A. P. et al. The challenge of biased evidence in conservation. Conserv, Biol . 13577, https://doi.org/10.1111/cobi.13577 (2020).

Christie, A. P. et al. Poor availability of context-specific evidence hampers decision-making in conservation. Biol. Conserv. 248 , 108666 (2020).

Moscoe, E., Bor, J. & Bärnighausen, T. Regression discontinuity designs are underutilized in medicine, epidemiology, and public health: a review of current and best practice. J. Clin. Epidemiol. 68 , 132–143 (2015).

Goldenhar, L. M. & Schulte, P. A. Intervention research in occupational health and safety. J. Occup. Med. 36 , 763–778 (1994).

CAS   PubMed   Google Scholar  

Junker, J. et al. A severe lack of evidence limits effective conservation of the World’s primates. BioScience https://doi.org/10.1093/biosci/biaa082 (2020).

Altindag, O., Joyce, T. J. & Reeder, J. A. Can Nonexperimental Methods Provide Unbiased Estimates of a Breastfeeding Intervention? A Within-Study Comparison of Peer Counseling in Oregon. Evaluation Rev. 43 , 152–188 (2019).

Chaplin, D. D. et al. The Internal And External Validity Of The Regression Discontinuity Design: A Meta-Analysis Of 15 Within-Study Comparisons. J. Policy Anal. Manag. 37 , 403–429 (2018).

Cook, T. D., Shadish, W. R. & Wong, V. C. Three conditions under which experiments and observational studies produce comparable causal estimates: New findings from within-study comparisons. J. Policy Anal. Manag. 27 , 724–750 (2008).

Ioannidis, J. P. A. et al. Comparison of evidence of treatment effects in randomized and nonrandomized studies. J. Am. Med. Assoc. 286 , 821–830 (2001).

dos Santos Ribas, L. G., Pressey, R. L., Loyola, R. & Bini, L. M. A global comparative analysis of impact evaluation methods in estimating the effectiveness of protected areas. Biol. Conserv. 246 , 108595 (2020).

Benson, K. & Hartz, A. J. A Comparison of Observational Studies and Randomized, Controlled Trials. N. Engl. J. Med. 342 , 1878–1886 (2000).

Smokorowski, K. E. et al. Cautions on using the Before-After-Control-Impact design in environmental effects monitoring programs. Facets 2 , 212–232 (2017).

França, F. et al. Do space-for-time assessments underestimate the impacts of logging on tropical biodiversity? An Amazonian case study using dung beetles. J. Appl. Ecol. 53 , 1098–1105 (2016).

Duvendack, M., Hombrados, J. G., Palmer-Jones, R. & Waddington, H. Assessing ‘what works’ in international development: meta-analysis for sophisticated dummies. J. Dev. Effectiveness 4 , 456–471 (2012).

Sutherland, W. J. et al. Building a tool to overcome barriers in research-implementation spaces: The Conservation Evidence database. Biol. Conserv. 238 , 108199 (2019).

Gusenbauer, M. & Haddaway, N. R. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res. Synth. Methods 11 , 181–217 (2020).

Konno, K. & Pullin, A. S. Assessing the risk of bias in choice of search sources for environmental meta‐analyses. Res. Synth. Methods 11 , 698–713 (2020).

PubMed   Google Scholar  

Butsic, V., Lewis, D. J., Radeloff, V. C., Baumann, M. & Kuemmerle, T. Quasi-experimental methods enable stronger inferences from observational data in ecology. Basic Appl. Ecol. 19 , 1–10 (2017).

Brownstein, N. C., Louis, T. A., O’Hagan, A. & Pendergast, J. The role of expert judgment in statistical inference and evidence-based decision-making. Am. Statistician 73 , 56–68 (2019).

Article   MathSciNet   Google Scholar  

Hahn, J., Todd, P. & Klaauw, W. Identification and estimation of treatment effects with a regression-discontinuity design. Econometrica 69 , 201–209 (2001).

Slavin, R. E. Best evidence synthesis: an intelligent alternative to meta-analysis. J. Clin. Epidemiol. 48 , 9–18 (1995).

Slavin, R. E. Best-evidence synthesis: an alternative to meta-analytic and traditional reviews. Educ. Researcher 15 , 5–11 (1986).

Shea, B. J. et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ (Online) 358 , 1–8 (2017).

Google Scholar  

Sterne, J. A. C. et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 355 , i4919 (2016).

Guyatt, G. et al. GRADE guidelines: 11. Making an overall rating of confidence in effect estimates for a single outcome and for all outcomes. J. Clin. Epidemiol. 66 , 151–157 (2013).

Davies, G. M. & Gray, A. Don’t let spurious accusations of pseudoreplication limit our ability to learn from natural experiments (and other messy kinds of ecological monitoring). Ecol. Evolution 5 , 5295–5304 (2015).

Lortie, C. J., Stewart, G., Rothstein, H. & Lau, J. How to critically read ecological meta-analyses. Res. Synth. Methods 6 , 124–133 (2015).

Gutzat, F. & Dormann, C. F. Exploration of concerns about the evidence-based guideline approach in conservation management: hints from medical practice. Environ. Manag. 66 , 435–449 (2020).

Greenhalgh, T. Will COVID-19 be evidence-based medicine’s nemesis? PLOS Med. 17 , e1003266 (2020).

Article   CAS   PubMed   PubMed Central   Google Scholar  

Barlow, J. et al. The future of hyperdiverse tropical ecosystems. Nature 559 , 517–526 (2018).

Gurevitch, J. & Hedges, L. V. Statistical issues in ecological meta‐analyses. Ecology 80 , 1142–1149 (1999).

Stone, J. C., Glass, K., Munn, Z., Tugwell, P. & Doi, S. A. R. Comparison of bias adjustment methods in meta-analysis suggests that quality effects modeling may have less limitations than other approaches. J. Clin. Epidemiol. 117 , 36–45 (2020).

Rhodes, K. M. et al. Adjusting trial results for biases in meta-analysis: combining data-based evidence on bias with detailed trial assessment. J. R. Stat. Soc.: Ser. A (Stat. Soc.) 183 , 193–209 (2020).

Article   MathSciNet   CAS   Google Scholar  

Efthimiou, O. et al. Combining randomized and non-randomized evidence in network meta-analysis. Stat. Med. 36 , 1210–1226 (2017).

Article   MathSciNet   PubMed   Google Scholar  

Welton, N. J., Ades, A. E., Carlin, J. B., Altman, D. G. & Sterne, J. A. C. Models for potentially biased evidence in meta-analysis using empirically based priors. J. R. Stat. Soc. Ser. A (Stat. Soc.) 172 , 119–136 (2009).

Turner, R. M., Spiegelhalter, D. J., Smith, G. C. S. & Thompson, S. G. Bias modelling in evidence synthesis. J. R. Stat. Soc.: Ser. A (Stat. Soc.) 172 , 21–47 (2009).

Shackelford, G. E. et al. Dynamic meta-analysis: a method of using global evidence for local decision making. bioRxiv 2020.05.18.078840, https://doi.org/10.1101/2020.05.18.078840 (2020).

Sutherland, W. J., Pullin, A. S., Dolman, P. M. & Knight, T. M. The need for evidence-based conservation. Trends Ecol. evolution 19 , 305–308 (2004).

Ioannidis, J. P. A. Meta-research: Why research on research matters. PLOS Biol. 16 , e2005468 (2018).

Article   PubMed   PubMed Central   CAS   Google Scholar  

LaLonde, R. J. Evaluating the econometric evaluations of training programs with experimental data. Am. Econ. Rev. 76 , 604–620 (1986).

Long, Q., Little, R. J. & Lin, X. Causal inference in hybrid intervention trials involving treatment choice. J. Am. Stat. Assoc. 103 , 474–484 (2008).

Article   MathSciNet   CAS   MATH   Google Scholar  

Thomson Reuters. ISI Web of Knowledge. http://www.isiwebofknowledge.com (2019).

Stroup, W. W. Generalized linear mixed models: modern concepts, methods and applications . (CRC press, 2012).

Bolker, B. M. et al. Generalized linear mixed models: a practical guide for ecology and evolution. Trends Ecol. Evolution 24 , 127–135 (2009).

R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing (2019).

Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67 , 1–48 (2015).

Venables, W. N. & Ripley, B. D. Modern Applied Statistics with S . (Springer, 2002).

Stan Development Team. RStan: the R interface to Stan. R package version 2.19.3 (2020).


Acknowledgements

We are grateful to the following people and organisations for contributing datasets to this analysis: P. Edwards, G.R. Hodgson, H. Welsh, J.V. Vieira, authors of van Deurs et al. 2012, T. M. Grome, M. Kaspersen, H. Jensen, C. Stenberg, T. K. Sørensen, J. Støttrup, T. Warnar, H. Mosegaard, Axel Schwerk, Alberto Velando, Dolores River Restoration Partnership, J.S. Pinilla, A. Page, M. Dasey, D. Maguire, J. Barlow, J. Louzada, Jari Florestal, R.T. Buxton, C.R. Schacter, J. Seoane, M.G. Conners, K. Nickel, G. Marakovich, A. Wright, G. Soprone, CSIRO, A. Elosegi, L. García-Arberas, J. Díez, A. Rallo, Parks and Wildlife Finland, Parc Marin de la Côte Bleue. Author funding sources: T.A. was supported by the Grantham Foundation for the Protection of the Environment, Kenneth Miller Trust and Australian Research Council Future Fellowship (FT180100354); W.J.S. and P.A.M. were supported by Arcadia, MAVA, and The David and Claudia Harding Foundation; A.P.C. was supported by the Natural Environment Research Council via Cambridge Earth System Science NERC DTP (NE/L002507/1); D.A. was funded by Portugal national funds through the FCT – Foundation for Science and Technology, under the Transitional Standard – DL57 / 2016 and through the strategic project UIDB/04326/2020; M.A. acknowledges Koniambo Nickel SAS, and particularly Gregory Marakovich and Andy Wright; J.C.A. was funded through by Dirección General de Investigación Científica, projects PB97-1252, BOS2002-01543, CGL2005-04893/BOS, CGL2008-02567 and Comunidad de Madrid, as well as by contract HENARSA-CSIC 2003469-CSIC19637; A.A. was funded by Spanish Government: MEC (CGL2007-65176); B.P.B. was funded through the U.S. Geological Survey and the New York City Department of Environmental Protection; R.B. was funded by Comunidad de Madrid (2018-T1/AMB-10374); J.A.S. and D.A.B. were funded through the U.S. Geological Survey and NextEra Energy; R.S.C. was funded by the Portuguese Foundation for Science and Technology (FCT) grant SFRH/BD/78813/2011 and strategic project UID/MAR/04292/2013; A.D.B. was funded through the Belgian offshore wind monitoring program (WINMON-BE), financed by the Belgian offshore wind energy sector via RBINS—OD Nature; M.K.D. was funded by the Harold L. Castle Foundation; P.M.E. was funded by the Clackamas County Water Environment Services River Health Stewardship Program and the Portland State University Student Watershed Research Project; T.D.E., J.P.A.G. and A.P. were supported by funding from the New Zealand Department of Conservation (Te Papa Atawhai) and from the Centre for Marine Environmental & Economic Research, Victoria University of Wellington, New Zealand; F.M.F. was funded by CNPq-CAPES grants (PELD site 23 403811/2012-0, PELD-RAS 441659/2016-0, BEX5528/13-5 and 383744/2015-6) and BNP Paribas Foundation (Climate & Biodiversity Initiative, BIOCLIMATE project); B.P.H. was funded by NOAA-NMFS sea scallop research set-aside program awards NA16FM1031, NA06FM1001, NA16FM2416, and NA04NMF4720332; A.L.B. was funded by the Portuguese Foundation for Science and Technology (FCT) grant FCT PD/BD/52597/2014, Bat Conservation International student research fellowship and CNPq grant 160049/2013-0; L.C.M. acknowledges Secretaría de Ciencia y Técnica (UNRC); R.A.M. acknowledges Alaska Fisheries Science Center, NOAA Fisheries, and U.S. Department of Commerce for salary support; C.F.J.M. was funded by the Portuguese Foundation for Science and Technology (FCT) grant SFRH/BD/80488/2011; R.R. 
was funded by the Portuguese Foundation for Science and Technology (FCT) grant PTDC/BIA-BIC/111184/2009, by Madeira’s Regional Agency for the Development of Research, Technology and Innovation (ARDITI) grant M1420-09-5369-FSE-000002 and by a Bat Conservation International student research fellowship; J.C. and S.S. were funded by the Alabama Department of Conservation and Natural Resources; A.T. was funded by the Spanish Ministry of Education with a Formacion de Profesorado Universitario (FPU) grant AP2008-00577 and Dirección General de Investigación Científica, project CGL2008-02567; C.W. was funded by Strategic Science Investment Funding of the Ministry of Business, Innovation and Employment, New Zealand; J.S.K. acknowledges Boreal Peatland LIFE (LIFE08 NAT/FIN/000596), Parks and Wildlife Finland and Kone Foundation; J.J.S.S. was funded by the Mexican National Council on Science and Technology (CONACYT 242558); N.N. was funded by The Carl Tryggers Foundation; I.L.J. was funded by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada; D.D. and D.S. were funded by the French National Research Agency via the “Investment for the Future” program IDEALG (ANR-10-BTBR-04) and by the ALGMARBIO project; R.C.P. was funded by CSIRO and whose research was also supported by funds from the Great Barrier Reef Marine Park Authority, the Fisheries Research and Development Corporation, the Australian Fisheries Management Authority, and Queensland Department of Primary Industries (QDPI). Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government. The scientific results and conclusions, as well as any views or opinions expressed herein, are those of the author(s) and do not necessarily reflect those of NOAA or the Department of Commerce.

Author information

Authors and affiliations

Conservation Science Group, Department of Zoology, University of Cambridge, The David Attenborough Building, Downing Street, Cambridge, CB3 3QZ, UK

Alec P. Christie, Philip A. Martin & William J. Sutherland

Centre of Marine Sciences (CCMar), Universidade do Algarve, Campus de Gambelas, 8005-139, Faro, Portugal

David Abecasis

Institut de Recherche pour le Développement (IRD), UMR 9220 ENTROPIE & Laboratoire d’Excellence CORAIL, Université de Perpignan Via Domitia, 52 avenue Paul Alduy, 66860, Perpignan, France

Mehdi Adjeroud

Museo Nacional de Ciencias Naturales, CSIC, Madrid, Spain

Juan C. Alonso & Carlos Palacín

School of Biological Sciences, University of Queensland, Brisbane, 4072, QLD, Australia

Tatsuya Amano

Education Faculty of Bilbao, University of the Basque Country (UPV/EHU), Sarriena z/g, E-48940, Leioa, Basque Country, Spain

Alvaro Anton

U.S. Geological Survey, New York Water Science Center, 425 Jordan Rd., Troy, NY, 12180, USA

Barry P. Baldigo

Universidad Complutense de Madrid, Departamento de Biodiversidad, Ecología y Evolución, Facultad de Ciencias Biológicas, c/ José Antonio Novais, 12, E-28040, Madrid, Spain

Rafael Barrientos & Carlos A. Martín

Durrell Institute of Conservation and Ecology (DICE), School of Anthropology and Conservation, University of Kent, Canterbury, CT2 7NR, UK

Jake E. Bicknell

U.S. Geological Survey, Northern Prairie Wildlife Research Center, Jamestown, ND, 58401, USA

Deborah A. Buhl & Jill A. Shaffer

Northern Gulf Institute, Mississippi State University, 1021 Balch Blvd, John C. Stennis Space Center, Mississippi, 39529, USA

Just Cebrian

MARE – Marine and Environmental Sciences Centre, Dept. Life Sciences, University of Coimbra, Coimbra, Portugal

Ricardo S. Ceia

CFE – Centre for Functional Ecology, Dept. Life Sciences, University of Coimbra, Coimbra, Portugal

Departamento de Ciencias Naturales, Universidad Nacional de Río Cuarto (UNRC), Córdoba, Argentina

Luciana Cibils-Martina

CONICET, Buenos Aires, Argentina

Marine Institute, Rinville, Oranmore, Galway, Ireland

Sarah Clarke & Oliver Tully

National Center for Scientific Research, PSL Université Paris, CRIOBE, USR 3278 CNRS-EPHE-UPVD, Maison des Océans, 195 rue Saint-Jacques, 75005, Paris, France

Joachim Claudet

School of Biological Sciences, University of Western Australia, Nedlands, WA, 6009, Australia

Michael D. Craig

School of Environmental and Conservation Sciences, Murdoch University, Murdoch, WA, 6150, Australia

Sorbonne Université, CNRS, UMR 7144, Station Biologique, F.29680, Roscoff, France

Dominique Davoult & Doriane Stagnol

Flanders Research Institute for Agriculture, Fisheries and Food (ILVO), Ankerstraat 1, 8400, Ostend, Belgium

Annelies De Backer

Marine Science Institute, University of California Santa Barbara, Santa Barbara, CA, 93106, USA

Mary K. Donovan

Hawaii Institute of Marine Biology, University of Hawaii at Manoa, Honolulu, HI, 96822, USA

Baruch Institute for Marine & Coastal Sciences, University of South Carolina, Columbia, SC, USA

Tyler D. Eddy

Centre for Fisheries Ecosystems Research, Fisheries & Marine Institute, Memorial University of Newfoundland, St. John’s, Canada

School of Biological Sciences, Victoria University of Wellington, P O Box 600, Wellington, 6140, New Zealand

Tyler D. Eddy, Jonathan P. A. Gardner & Anjali Pande

Lancaster Environment Centre, Lancaster University, LA1 4YQ, Lancaster, UK

Filipe M. França

Fisheries, Aquatic Science and Technology Laboratory, Alaska Pacific University, 4101 University Dr., Anchorage, AK, 99508, USA

Bradley P. Harris

Natural Resources Institute Finland, Manamansalontie 90, 88300, Paltamo, Finland

Department of Biology, Memorial University, St. John’s, NL, A1B 2R3, Canada

Ian L. Jones

National Marine Science Centre and Marine Ecology Research Centre, Southern Cross University, 2 Bay Drive, Coffs Harbour, 2450, Australia

Brendan P. Kelaher

Department of Biological and Environmental Science, University of Jyväskylä, Jyväskylä, Finland

Janne S. Kotiaho

School of Resource Wisdom, University of Jyväskylä, Jyväskylä, Finland

Centre for Ecology, Evolution and Environmental Changes – cE3c, Faculty of Sciences, University of Lisbon, 1749-016, Lisbon, Portugal

Adrià López-Baucells, Christoph F. J. Meyer & Ricardo Rocha

Biological Dynamics of Forest Fragments Project, National Institute for Amazonian Research and Smithsonian Tropical Research Institute, 69011-970, Manaus, Brazil

Granollers Museum of Natural History, Granollers, Spain

Adrià López-Baucells

Department of Biological Sciences, University of New Brunswick, PO Box 5050, Saint John, NB, E2L 4L5, Canada

Heather L. Major

Voimalohi Oy, Voimatie 23, Voimatie, 91100, Ii, Finland

Aki Mäki-Petäys

Natural Resources Institute Finland, Paavo Havaksen tie 3, 90014 University of Oulu, Oulu, Finland

Fundación Migres CIMA Ctra, Cádiz, Spain

Beatriz Martín

Intergovernmental Oceanographic Commission of UNESCO, Marine Policy and Regional Coordination Section Paris 07, Paris, France

BioRISC, St. Catharine’s College, Cambridge, CB2 1RL, UK

Philip A. Martin & William J. Sutherland

Departamento de Ecología e Hidrología, Universidad de Murcia, Campus de Espinardo, 30100, Murcia, Spain

Daniel Mateos-Molina

RACE Division, Alaska Fisheries Science Center, National Marine Fisheries Service, NOAA, 7600 Sand Point Way NE, Seattle, WA, 98115, USA

Robert A. McConnaughey

European Commission, Joint Research Centre (JRC), Ispra, VA, Italy

Michele Meroni

School of Science, Engineering and Environment, University of Salford, Salford, M5 4WT, UK

Christoph F. J. Meyer

Victorian National Park Association, Carlton, VIC, Australia

Department of Earth, Environment and Life Sciences (DiSTAV), University of Genoa, Corso Europa 26, 16132, Genoa, Italy

Monica Montefalcone

Department of Ecology, Swedish University of Agricultural Sciences, Uppsala, Sweden

Norbertas Noreika

Chair of Plant Health, Institute of Agricultural and Environmental Sciences, Estonian University of Life Sciences, Tartu, Estonia

Biosecurity New Zealand – Tiakitanga Pūtaiao Aotearoa, Ministry for Primary Industries – Manatū Ahu Matua, 66 Ward St, PO Box 40742, Wallaceville, New Zealand

Anjali Pande

National Institute of Water & Atmospheric Research Ltd (NIWA), 301 Evans Bay Parade, Greta Point Wellington, New Zealand

CSIRO Oceans & Atmosphere, Queensland Biosciences Precinct, 306 Carmody Road, St Lucia, QLD, 4067, Australia

C. Roland Pitcher

Museo Nacional de Ciencias Naturales, CSIC, José Gutiérrez Abascal 2, E-28006, Madrid, Spain

Carlos Ponce

Fort Keogh Livestock and Range Research Laboratory, 243 Fort Keogh Rd, Miles City, Montana, 59301, USA

Matt Rinella

CIBIO-InBIO, Research Centre in Biodiversity and Genetic Resources, University of Porto, Vairão, Portugal

Ricardo Rocha

Departamento de Sistemas Físicos, Químicos y Naturales, Universidad Pablo de Olavide, ES-41013, Sevilla, Spain

María C. Ruiz-Delgado

El Colegio de la Frontera Sur, A.P. 424, 77000, Chetumal, QR, Mexico

Juan J. Schmitter-Soto

Division of Fish and Wildlife, New York State Department of Environmental Conservation, 625 Broadway, Albany, NY, 12233-4756, USA

Shailesh Sharma

University of Denver Department of Biological Sciences, Denver, CO, USA

Anna A. Sher

U.S. Geological Survey, Fort Collins Science Center, Fort Collins, CO, 80526, USA

Thomas R. Stanley

School for Marine Science and Technology, University of Massachusetts Dartmouth, New Bedford, MA, USA

Kevin D. E. Stokesbury

Georges Lemaître Earth and Climate Research Centre, Earth and Life Institute, Université Catholique de Louvain, 1348, Louvain-la-Neuve, Belgium

Aurora Torres

Center for Systems Integration and Sustainability, Department of Fisheries and Wildlife, Michigan State University, East Lansing, MI, 48823, USA

Natural Resources Institute Finland, Latokartanonkaari 9, 00790, Helsinki, Finland

Teppo Vehanen

Manaaki Whenua – Landcare Research, Private Bag 3127, Hamilton, 3216, New Zealand

Corinne Watts

Statistical Laboratory, Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WB, UK

Qingyuan Zhao


Contributions

A.P.C., T.A., P.A.M., Q.Z. and W.J.S. designed the research; A.P.C. wrote the paper; D.A., M.A., J.C.A., A.A., B.P.B., R.B., J.B., D.A.B., J.C., R.S.C., L.C.M., S.C., J.C., M.D.C., D.D., A.D.B., M.K.D., T.D.E., P.M.E., F.M.F., J.P.A.G., B.P.H., A.H., I.L.J., B.P.K., J.S.K., A.L.B., H.L.M., A.M., B.M., C.A.M., D.M., R.A.M., M.M., C.F.J.M., K.M., M.M., N.N., C.P., A.P., C.R.P., C.P., M.R., R.R., M.C.R., J.J.S.S., J.A.S., S.S., A.A.S., D.S., K.D.E.S., T.R.S., A.T., O.T., T.V. and C.W. contributed datasets for analyses. All authors reviewed, edited and approved the manuscript.

Corresponding author

Correspondence to Alec P. Christie .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Communications thanks Casper Albers, Samuel Scheiner, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

The online version of this article includes the following: Supplementary Information, Peer Review File, Description of Additional Supplementary Information, Supplementary Data 1, Supplementary Data 2 and Source Data.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Christie, A. P., Abecasis, D., Adjeroud, M. et al. Quantifying and addressing the prevalence and bias of study designs in the environmental and social sciences. Nat. Commun. 11, 6377 (2020). https://doi.org/10.1038/s41467-020-20142-y

Download citation

Received: 29 January 2020

Accepted: 13 November 2020

Published: 11 December 2020

DOI: https://doi.org/10.1038/s41467-020-20142-y


This article is cited by

Is there a “difference-in-difference”? The impact of scientometric evaluation on the evolution of international publications in Egyptian universities and research centres

  • Mona Farouk Ali

Scientometrics (2024)

Quantifying research waste in ecology

  • Marija Purgar
  • Tin Klanjscek
  • Antica Culina

Nature Ecology & Evolution (2022)

Assessing assemblage-wide mammal responses to different types of habitat modification in Amazonian forests

  • Paula C. R. Almeida-Maués
  • Anderson S. Bueno
  • Ana Cristina Mendes-Oliveira

Scientific Reports (2022)

Mitigating impacts of invasive alien predators on an endangered sea duck amidst high native predation pressure

  • Kim Jaatinen
  • Ida Hermansson

Oecologia (2022)

Standards of conduct and reporting in evidence syntheses that could inform environmental policy and management decisions

  • Andrew S. Pullin
  • Samantha H. Cheng
  • Paul Woodcock

Environmental Evidence (2022)


  • Exercise type: Activity
  • Topic: Science & Society
  • Category: Research & Design
  • Category: Diversity in STEM

How bias affects scientific research


Purpose: Students will work in groups to evaluate bias in scientific research and engineering projects and to develop guidelines for minimizing potential biases.

Procedural overview: After reading the Science News for Students article “Think you’re not biased? Think again,” students will discuss types of bias in scientific research and how to identify it. Students will then search the Science News archive for examples of different types of bias in scientific and medical research. Students will read the National Institutes of Health’s Policy on Sex as a Biological Variable and analyze how this policy works to reduce bias in scientific research on the basis of sex and gender. Based on their exploration of bias, students will discuss the benefits and limitations of research guidelines for minimizing particular types of bias and develop guidelines of their own.

Approximate class time: 2 class periods

Supplies:

  • How Bias Affects Scientific Research student guide
  • Computer with access to the Science News archive
  • Interactive meeting and screen-sharing application for virtual learning (optional)

Directions for teachers:

One of the guiding principles of scientific inquiry is objectivity. Objectivity is the idea that scientific questions, methods and results should not be affected by the personal values, interests or perspectives of researchers. However, science is a human endeavor, and experimental design and analysis of information are products of human thought processes. As a result, biases may be inadvertently introduced into scientific processes or conclusions.

In scientific circles, bias is described as any systematic deviation between the results of a study and the “truth.” Bias is sometimes described as a tendency to prefer one thing over another, or to favor one person, thing or explanation in a way that prevents objectivity or that influences the outcome of a study or the understanding of a phenomenon. Bias can be introduced at multiple points during scientific research: in the framing of the scientific question, in the experimental design, in the development or implementation of processes used to conduct the research, during collection or analysis of data, or during the reporting of conclusions.

Researchers generally recognize several different sources of bias, each of which can strongly affect the results of STEM research. Three types of bias that often occur in scientific and medical studies are researcher bias, selection bias and information bias.

Researcher bias occurs when the researcher conducting the study favors a certain result and, intentionally or not, influences outcomes through study design choices, including whom they include in a study and how data are interpreted. Selection bias is an experimental error that occurs when the subjects of a study do not accurately reflect the population to whom the results will be applied; it commonly arises through unequal inclusion of subjects of different races, sexes or genders, ages or abilities. Information bias results from systematic errors during the collection, recording or analysis of data.

When bias occurs, a study’s results may not accurately represent phenomena in the real world, or the results may not apply in all situations or equally for all populations. For example, if a research study does not address the full diversity of people to whom the solution will be applied, then the researchers may have missed vital information about whether and how that solution will work for a large percentage of a target population.

Bias can also affect the development of engineering solutions. For example, a new technology product tested only with teenagers or young adults who are comfortable using new technologies may have user experience issues when placed in the hands of older adults or young children.

Want to make it a virtual lesson? Post the links to the Science News for Students article “Think you’re not biased? Think again” and the National Institutes of Health information on sickle-cell disease. A link to additional resources can be provided for students who want to know more. After students have reviewed the information at home, discuss the four questions in the setup and the sickle-cell research scenario as a class. When the students have a general understanding of bias in research, assign students to breakout rooms to look for examples of different types of bias in scientific and medical research, to discuss the Science News article “Biomedical studies are including more female subjects (finally)” and the National Institutes of Health’s Policy on Sex as a Biological Variable, and to develop bias guidelines of their own. Make sure the students have links to all the articles they will need to complete their work. Bring the groups back together for an all-class discussion of the bias guidelines they write.

Assign the Science News for Students article “Think you’re not biased? Think again” as homework reading to introduce students to the core concepts of scientific objectivity and bias. Request that they answer the first two questions on their guide before the first class discussion on this topic. In this discussion, you will cover the idea of objective truth and introduce students to the terminology used to describe bias. Use the background information to decide what level of detail you want to give to your students.

As students discuss bias, help them understand objective and subjective data and discuss the importance of gathering both kinds of data. Explain to them how these data differ. Some phenomena — for example, body temperature, blood type and heart rate — can be objectively measured. These data tend to be quantitative. Other phenomena cannot be measured objectively and must be considered subjectively. Subjective data are based on perceptions, feelings or observations and tend to be qualitative rather than quantitative. Subjective measurements are common and essential in biomedical research, as they can help researchers understand whether a therapy changes a patient’s experience. For instance, subjective data about the amount of pain a patient feels before and after taking a medication can help scientists understand whether and how the drug works to alleviate pain. Subjective data can still be collected and analyzed in ways that attempt to minimize bias.

Try to guide student discussion to include a larger context for bias by discussing the effects of bias on understanding of an “objective truth.” How can someone’s personal views and values affect how they analyze information or interpret a situation?

To help students understand potential effects of biases, present them with the following scenario based on information from the National Institutes of Health:

Sickle-cell disease is a group of inherited disorders that cause abnormalities in red blood cells. Most of the people who have sickle-cell disease are of African descent; it also appears in populations from the Mediterranean, India and parts of Latin America. Males and females are equally likely to inherit the condition. Imagine that a therapy was developed to treat the condition, and clinical trials enlisted only male subjects of African descent. How accurately would the results of that study reflect the therapy’s effectiveness for all people who suffer from sickle-cell disease?

In the sickle-cell scenario described above, scientists will have a good idea of how the therapy works for males of African descent. But they may not be able to accurately predict how the therapy will affect female patients or patients of different races or ethnicities. Ask the students to consider how they would devise a study that addressed all the populations affected by this disease.

Before students move on, have them answer the following questions. The first two should be answered for homework and discussed in class along with the remaining questions.

1. What is bias?

In common terms, bias is a preference for or against one idea, thing or person. In scientific research, bias is a systematic deviation between observations or interpretations of data and an accurate description of a phenomenon.

2. How can biases affect the accuracy of scientific understanding of a phenomenon? How can biases affect how those results are applied?

Bias can cause the results of a scientific study to be disproportionately weighted in favor of one result or group of subjects. This can cause misunderstandings of natural processes that may make conclusions drawn from the data unreliable. Biased procedures, data collection or data interpretation can affect the conclusions scientists draw from a study and the application of those results. For example, if the subjects that participate in a study testing an engineering design do not reflect the diversity of a population, the end product may not work as well as desired for all users.

3. Describe two potential sources of bias in a scientific, medical or engineering research project. Try to give specific examples.

Researchers can intentionally or unintentionally introduce biases as a result of their attitudes toward the study or its purpose or toward the subjects or a group of subjects. Bias can also be introduced by methods of measuring, collecting or reporting data. Examples of potential sources of bias include testing a small sample of subjects, testing a group of subjects that is not diverse and looking for patterns in data to confirm ideas or opinions already held.

4. How can potential biases be identified and eliminated before, during or after a scientific study?

Students should brainstorm ways to identify sources of bias in the design of research studies. They may suggest conducting implicit bias testing or interviews before a study can be started, developing guidelines for research projects, peer review of procedures and samples/subjects before beginning a study, and peer review of data and conclusions after the study is completed and before it is published. Students may focus on the ideals of transparency and replicability of results to help reduce biases in scientific research.

Obtain and evaluate information about bias

Students will now work in small groups to select and analyze articles for different types of bias in scientific and medical research. Students will start by searching the Science News or Science News for Students archives and selecting articles that describe scientific studies or engineering design projects. If the Science News or Science News for Students articles chosen by students do not specifically cite and describe a study, students should consult the Citations at the end of the article for links to related primary research papers. Students may need to read the methods section and the conclusions of the primary research paper to better understand the project’s design and to identify potential biases. Do not assume that every scientific paper features biased research.

Student groups should evaluate the study or engineering design project outlined in the article to identify any biases in the experimental design, data collection, analysis or results. Students may need additional guidance for identifying biases. Remind them of the prior discussion about sources of bias and task them to review information about indicators of bias. Possible indicators include extreme language such as all, none or nothing; emotional appeals rather than logical arguments; proportions of study subjects with specific characteristics such as gender, race or age; arguments that support or refute one position over another; and oversimplifications or overgeneralizations. Students may also want to look for clues related to the researchers’ personal identity such as race, religion or gender. Information on political or religious points of view, sources of funding or professional affiliations may also suggest biases.

Students should also identify any deliberate attempts to reduce or eliminate bias in the project or its results. Then groups should come back together and share the results of their analysis with the class.

If students need support in searching the archives for appropriate articles, encourage groups to brainstorm search terms that may turn up related articles. Some potential search terms include bias, study, studies, experiment, engineer, new device, design, gender, sex, race, age, aging, young, old, weight, patients, survival or medical.

If you are short on time or students do not have access to the Science News or Science News for Students archive, you may want to provide articles for students to review. Some suggested articles are listed in the additional resources below.

Once groups have selected their articles, students should answer the following questions in their groups.

1. Record the title and URL of the article and write a brief summary of the study or project.

Answers will vary, but students should accurately cite the article evaluated and summarize the study or project described in the article. Sample answer: We reviewed the Science News article “Even brain images can be biased,” which can be found at www.sciencenews.org/blog/scicurious/even-brain-images-can-be-biased. This article describes how scientific studies of human brains that involve electronic images of brains tend to include study subjects from wealthier and more highly educated households and how researchers set out to collect new data to make the database of brain images more diverse.

2. What sources of potential bias (if any) did you identify in the study or project? Describe any procedures or policies deliberately included in the study or project to eliminate biases.

The article “Even brain images can be biased” describes how scientists identified a sampling bias in studies of brain images that resulted from the way subjects were recruited. Most of these studies were conducted at universities, so many college students volunteer to participate, which resulted in the samples being skewed toward wealthier, educated, white subjects. Scientists identified a database of pediatric brain images and evaluated the diversity of the subjects in that database. They found that although the subjects in that database were more ethnically diverse than the U.S. population, the subjects were generally from wealthier households and the parents of the subjects tended to be more highly educated than average. Scientists applied statistical methods to weight the data so that study samples from the database would more accurately reflect American demographics.
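As a concrete illustration of the reweighting idea in this sample answer, the toy R snippet below computes post-stratification weights as population share divided by sample share. The income groups and percentages are invented for the example, not taken from the study.

# Toy example of post-stratification weighting: subjects from groups that are
# under-represented in the sample get weights > 1, over-represented groups
# get weights < 1 (all shares below are invented numbers).
sample_share     <- c(low_income = 0.10, mid_income = 0.30, high_income = 0.60)
population_share <- c(low_income = 0.25, mid_income = 0.45, high_income = 0.30)
weights <- population_share / sample_share
round(weights, 2)
#>  low_income  mid_income high_income
#>        2.50        1.50        0.50

Summaries computed with these weights then reflect the target population's demographics rather than those of the convenience sample.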

3. How could any potential biases in the study or design project have affected the results or application of the results to the target population?

Scientists studying the rate of brain development in children were able to recognize the sampling bias in the brain image database. When scientists applied statistical methods to ensure a better representation of socioeconomically diverse samples, they saw a different pattern in the rate of brain development in children. Scientists learned that, in general, children’s brains matured more quickly than they had previously thought. They were able to draw new conclusions about how certain factors, such as family wealth and education, affected the rate at which children’s brains developed. But the scientists also suggested that they needed to perform additional studies with a deliberately selected group of children to ensure true diversity in the samples.

In this phase, students will review the Science News article “Biomedical studies are including more female subjects (finally)” and the NIH Policy on Sex as a Biological Variable, including the “guidance document.” Students will identify how sex and gender biases may have affected the results of biomedical research before the NIH instituted its policy. The students will then work with their groups to recommend other policies to minimize biases in biomedical research.

To guide their development of proposed guidelines, students should answer the following questions in their groups.

1. How have sex and gender biases affected the value and application of biomedical research?

Gender and sex biases in biomedical research have diminished the accuracy and quality of research studies and reduced the applicability of results to the entire population. When girls and women are not included in research studies, the responses and therapeutic outcomes of approximately half of the target population for potential therapies remain unknown.

2. Why do you think the NIH created its policy to reduce sex and gender biases?

In the guidance document, the NIH states that “There is a growing recognition that the quality and generalizability of biomedical research depends on the consideration of key biological variables, such as sex.” The document goes on to state that many diseases and conditions affect people of both sexes, and restricting diversity of biological variables, notably sex and gender, undermines the “rigor, transparency, and generalizability of research findings.”

3. What impact has the NIH Policy on Sex as a Biological Variable had on biomedical research?

The NIH’s policy requiring that sex be factored into research designs, analyses and reporting ensures that researchers and institutions address potential biases in the planning stages of biomedical studies, which helps to reduce or eliminate those biases in the final study. Including females in biomedical research studies helps to ensure that the results of biomedical research are applicable to a larger proportion of the population, expands the therapies available to girls and women and improves their health care outcomes.

4. What other policies do you think the NIH could institute to reduce biases in biomedical research? If you were to recommend one set of additional guidelines for reducing bias in biomedical research, what guidelines would you propose? Why?

Students could suggest that the NIH should have similar policies related to race, gender identity, wealth/economic status and age. Students should identify a category of bias or an underserved segment of the population that they think needs to be addressed in order to improve biomedical research and health outcomes for all people and should recommend guidelines to reduce bias related to that group. Students recommending guidelines related to race might suggest that some populations, such as African Americans, are historically underserved in terms of access to medical services and health care, and they might suggest guidelines to help reduce the disparity. Students might recommend that a certain percentage of each biomedical research project’s sample include patients of diverse racial and ethnic backgrounds.

5. What biases would your suggested policy help eliminate? How would it accomplish that goal?

Students should describe how their proposed policy would address a discrepancy in the application of biomedical research to the entire human population. Race can be considered a biological variable, like sex, and race has been connected to higher or lower incidence of certain characteristics or medical conditions, such as blood types or diabetes, which sometimes affect how the body responds to infectious agents, drugs, procedures or other therapies. By ensuring that people from diverse racial and ethnic groups are included in biomedical research studies, scientists and medical professionals can provide better medical care to members of those populations.

Class discussion about bias guidelines

Allow each group time to present its proposed bias-reducing guideline to another group and to receive feedback. Then provide groups with time to revise their guidelines, if necessary. Act as a facilitator while students conduct the class discussion. Use this time to assess individual and group progress. Students should demonstrate an understanding of different biases that may affect patient outcomes in biomedical research studies and in practical medical settings. As part of the group discussion, have students answer the following questions.

1. Why is it important to identify and eliminate biases in research and engineering design?

The goal of most scientific research and engineering projects is to improve the quality of life and the depth of understanding of the world we live in. By eliminating biases, we can better serve the entirety of the human population and the planet.

2. Were there any guidelines that were suggested by multiple groups? How do those actions or policies help reduce bias?

Answers will depend on the guidelines developed and recommended by other groups. Groups could suggest policies related to race, gender identity, wealth/economic status and age. Each group should clearly identify how its guidelines are designed to reduce bias and improve the quality of human life.

3. Which guidelines developed by your classmates do you think would most reduce the effects of bias on research results or engineering designs? Support your selection with evidence and scientific reasoning.

Answers will depend on the guidelines developed and recommended by other groups. Students should agree that guidelines that minimize inequities and improve health care outcomes for a larger group are preferred. Guidelines addressing inequities of race and wealth/economic status are likely to expand access to improved medical care for the largest percentage of the population. People who grow up in less economically advantaged settings have specific health issues related to nutrition and their access to clean water, for instance. Ensuring that people from the lowest economic brackets are represented in biomedical research improves their access to medical care and can dramatically change the length and quality of their lives.

Possible extension

Challenge students to honestly evaluate any biases they may have. Encourage them to take an Implicit Association Test (IAT) to identify any implicit biases they may not recognize. Harvard University has an online IAT platform where students can participate in different assessments to identify preferences and biases related to sex and gender, race, religion, age, weight and other factors. You may want to challenge students to take a test before they begin the activity, and then assign students to take a test after completing the activity to see if their preferences have changed. Students can report their results to the class if they want to discuss how awareness affects the expression of bias.

Additional resources

If you want additional resources for the discussion or to provide resources for student groups, check out the links below.

Additional Science News articles:

Even brain images can be biased

Data-driven crime prediction fails to erase human bias

What we can learn from how a doctor’s race can affect Black newborns’ survival

Bias in a common health care algorithm disproportionately hurts black patients

Female rats face sex bias too

There’s no evidence that a single ‘gay gene’ exists

Positive attitudes about aging may pay off in better health

What male bias in the mammoth fossil record says about the animal’s social groups

The man flu struggle might be real, says one researcher

Scientists may work to prevent bias, but they don’t always say so

The Bias Finders

Showdown at Sex Gap

University resources:

Project Implicit (Take an Implicit Association Tests)

Catalogue of Bias

Understanding Health Research


The Ultimate Guide to Qualitative Research - Part 1: The Basics


  • Introduction and overview
  • What is qualitative research?
  • What is qualitative data?
  • Examples of qualitative data
  • Qualitative vs. quantitative research
  • Mixed methods
  • Qualitative research preparation
  • Theoretical perspective
  • Theoretical framework
  • Literature reviews
  • Research question
  • Conceptual framework
  • Conceptual vs. theoretical framework
  • Data collection
  • Qualitative research methods
  • Focus groups
  • Observational research
  • Case studies
  • Ethnographical research
  • Ethical considerations
  • Confidentiality and privacy

What is research bias?

  • Understanding unconscious bias
  • How to avoid bias in research
  • Bias and subjectivity in research

  • Power dynamics
  • Reflexivity

Bias in research

In a purely objective world, research bias would not exist because knowledge would be a fixed and unmovable resource; either one knows about a particular concept or phenomenon, or they don't. However, qualitative research and the social sciences both acknowledge that subjectivity and bias exist in every aspect of the social world, which naturally includes the research process too. This bias is manifest in the many different ways that knowledge is understood, constructed, and negotiated, both in and out of research.


Understanding research bias has profound implications for data collection methods and data analysis, requiring researchers to take particular care in how they account for the insights generated from their data.

Research bias, often unavoidable, is a systematic error that can creep into any stage of the research process, skewing our understanding and interpretation of findings. From data collection to analysis, interpretation, and even publication, bias can distort the truth we seek to capture and communicate in our research.

It’s also important to distinguish between bias and subjectivity, especially when engaging in qualitative research . Most qualitative methodologies are based on epistemological and ontological assumptions that there is no such thing as a fixed or objective world that exists “out there” that can be empirically measured and understood through research. Rather, many qualitative researchers embrace the socially constructed nature of our reality and thus recognize that all data is produced within a particular context by participants with their own perspectives and interpretations. Moreover, the researcher’s own subjective experiences inevitably shape how they make sense of the data. These subjectivities are considered to be strengths, not limitations, of qualitative research approaches, because they open new avenues for knowledge generation. This is also why reflexivity is so important in qualitative research. When we refer to bias in this guide, on the other hand, we are referring to systematic errors that can negatively affect the research process but that can be mitigated through researchers’ careful efforts.

To fully grasp what research bias is, it's essential to understand the dual nature of bias. Bias is not inherently evil. It's simply a tendency, inclination, or prejudice for or against something. In our daily lives, we're subject to countless biases, many of which are unconscious. They help us navigate our world, make quick decisions, and understand complex situations. But when conducting research, these same biases can cause significant issues.


Research bias can affect the validity and credibility of research findings, leading to erroneous conclusions. It can emerge from the researcher's subconscious preferences or the methodological design of the study itself. For instance, if a researcher unconsciously favors a particular outcome of the study, this preference could affect how they interpret the results, leading to a type of bias known as confirmation bias.

Research bias can also arise due to the characteristics of study participants. If the researcher selectively recruits participants who are more likely to produce desired outcomes, this can result in selection bias.

Another form of bias can stem from data collection methods. If a survey question is phrased in a way that encourages a particular response, this can introduce response bias. Moreover, poorly designed survey questions can have a detrimental effect on future research if such studies come to be seen by the general population as biased toward the researcher’s preferred outcomes.

Bias can also occur during data analysis. In qualitative research, for instance, the researcher's preconceived notions and expectations can influence how they interpret and code qualitative data, a type of bias known as interpretation bias. It's also important to note that quantitative research is not free of bias either, as sampling bias and measurement bias can threaten the validity of any research findings.

Given these examples, it's clear that research bias is a complex issue that can take many forms and emerge at any stage in the research process. This section will delve deeper into specific types of research bias, provide examples, discuss why it's an issue, and provide strategies for identifying and mitigating bias in research.

What is an example of bias in research?

Bias can appear in numerous ways. One example is confirmation bias, where the researcher has a preconceived explanation for what is going on in their data, and any disconfirming evidence is (unconsciously) ignored. For instance, a researcher conducting a study on daily exercise habits might be inclined to conclude that meditation practices lead to greater engagement in exercise because that researcher has personally experienced these benefits. However, conducting rigorous research entails assessing all the data systematically and verifying one’s conclusions by checking for both supporting and refuting evidence.
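To make this concrete, here is a minimal sketch (in Python, with invented records) of what "checking for both supporting and refuting evidence" can look like in practice: rather than filtering the data down to confirming cases, tally both sides before drawing a conclusion.

```python
# A minimal sketch (with invented records) of checking for both supporting
# and refuting evidence: tally both sides instead of filtering the data down
# to the cases that confirm the researcher's hunch.
records = [
    {"meditates": True,  "exercised_today": True},
    {"meditates": True,  "exercised_today": False},
    {"meditates": True,  "exercised_today": True},
    {"meditates": False, "exercised_today": True},
    {"meditates": False, "exercised_today": False},
]

supporting = sum(r["meditates"] and r["exercised_today"] for r in records)
refuting = sum(r["meditates"] and not r["exercised_today"] for r in records)

print(f"Cases supporting the hunch: {supporting}")
print(f"Cases refuting the hunch:   {refuting}")
```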


What is a common bias in research?

Confirmation bias is one of the most common forms of bias in research. It happens when researchers unconsciously focus on data that supports their ideas while ignoring or undervaluing data that contradicts their ideas. This bias can lead researchers to mistakenly confirm their theories, despite having insufficient or conflicting evidence.

What are the different types of bias?

There are several types of research bias, each presenting unique challenges. Some common types include:

Confirmation bias: As already mentioned, this happens when a researcher focuses on evidence supporting their theory while overlooking contradictory evidence.

Selection bias: This occurs when the researcher's method of choosing participants skews the sample in a particular direction.

Response bias: This happens when participants in a study respond inaccurately or falsely, often due to misleading or poorly worded questions.

Observer bias (or researcher bias): This occurs when the researcher unintentionally influences the results because of their expectations or preferences.

Publication bias: This type of bias arises when studies with positive results are more likely to get published, while studies with negative or null results are often ignored.

Analysis bias: This type of bias occurs when the data is manipulated or analyzed in a way that leads to a particular result, whether intentionally or unintentionally.


What is an example of researcher bias?

Researcher bias, also known as observer bias, can occur when a researcher's expectations or personal beliefs influence the results of a study. For instance, if a researcher believes that a particular therapy is effective, they might unconsciously interpret ambiguous results in a way that supports the efficacy of the therapy, even if the evidence is not strong enough.

Even quantitative research methodologies are not immune to researcher bias. Market research surveys or clinical trial research, for example, may encounter bias when the researcher chooses a particular population or methodology to achieve a specific research outcome. Questions in customer feedback surveys whose data is employed in quantitative analysis can be structured in such a way as to bias survey respondents toward certain desired answers.


Identifying and avoiding bias in research

As we will remind you throughout this chapter, bias is not a phenomenon that can be removed altogether, nor should we think of it as something that should be eliminated. In a subjective world involving humans as researchers and research participants, bias is unavoidable and almost necessary for understanding social behavior. The section on reflexivity later in this guide will highlight how different perspectives among researchers and human subjects are addressed in qualitative research. That said, bias in excess can place the credibility of a study's findings into serious question. Scholars who read your research need to know what new knowledge you are generating, how it was generated, and why the knowledge you present should be considered persuasive. With that in mind, let's look at how bias can be identified and, where it interferes with research, minimized.

How do you identify bias in research?

Identifying bias involves a critical examination of your entire research study: the formulation of the research question and hypothesis, the selection of study participants, the methods for data collection, and the analysis and interpretation of data. Researchers need to assess whether each stage has been influenced by bias that may have skewed the results. Tools such as bias checklists or guidelines, peer review, and reflexivity (reflecting on one's own biases) can be instrumental in identifying bias.

How do you identify research bias?

Identifying research bias often involves careful scrutiny of the research methodology and the researcher's interpretations. Was the sample of participants relevant to the research question? Were the interview or survey questions leading? Were there any conflicts of interest that could have influenced the results? It also requires an understanding of the different types of bias and how they might manifest in a research context. Does the bias occur in the data collection process or when the researcher is analyzing data?

Research transparency requires a careful accounting of how the study was designed, conducted, and analyzed. In qualitative research involving human subjects, the researcher is responsible for documenting the characteristics of the research population and research context. With respect to research methods, the procedures and instruments used to collect and analyze data are described in as much detail as possible.

While describing study methodologies and research participants in painstaking detail may sound cumbersome, a clear and detailed description of the research design is necessary for good research. Without this level of detail, it is difficult for your research audience to identify whether bias exists, where bias occurs, and to what extent it may threaten the credibility of your findings.

How to recognize bias in a study?

Recognizing bias in a study requires a critical approach. The researcher should question every step of the research process: Was the sample of participants selected with care? Did the data collection methods encourage open and sincere responses? Did personal beliefs or expectations influence the interpretation of the results? External peer reviews can also be helpful in recognizing bias, as others might spot potential issues that the original researcher missed.

The subsequent sections of this chapter will delve into the impacts of research bias and strategies to avoid it. Through these discussions, researchers will be better equipped to handle bias in their work and contribute to building more credible knowledge.

Unconscious biases, also known as implicit biases, are attitudes or stereotypes that influence our understanding, actions, and decisions in an unconscious manner. These biases can inadvertently infiltrate the research process, skewing the results and conclusions. This section aims to delve deeper into understanding unconscious bias, its impact on research, and strategies to mitigate it.

What is unconscious bias?

Unconscious bias refers to prejudices or social stereotypes about certain groups that individuals form outside their conscious awareness. Everyone holds unconscious beliefs about various social and identity groups, and these biases stem from a tendency to organize social worlds into categories.


How does unconscious bias infiltrate research?

Unconscious bias can infiltrate research in several ways. It can affect how researchers formulate their research questions or hypotheses, how they interact with participants, their data collection methods, and how they interpret their data. For instance, a researcher might unknowingly favor participants who share similar characteristics with them, which could lead to biased results.

Implications of unconscious bias

The implications of unconscious research bias are far-reaching. It can compromise the validity of research findings, influence the choice of research topics, and affect peer review processes. Unconscious bias can also lead to a lack of diversity in research, which can severely limit the value and impact of the findings.

Strategies to mitigate unconscious research bias

While it's challenging to completely eliminate unconscious bias, several strategies can help mitigate its impact. These include being aware of potential unconscious biases, practicing reflexivity, seeking diverse perspectives for your study, and engaging in regular bias-checking activities, such as bias training and peer debriefing.

By understanding and acknowledging unconscious bias, researchers can take steps to limit its impact on their work, leading to more robust findings.

Why is researcher bias an issue?

Research bias is a pervasive issue that researchers must diligently consider and address. It can significantly impact the credibility of findings. Here, we break down the ramifications of bias into two key areas.

How bias affects validity

Research validity refers to the accuracy of the study findings, or the coherence between the researcher’s findings and the participants’ actual experiences. When bias sneaks into a study, it can distort findings and move them further away from the realities that were shared by the research participants. For example, if a researcher's personal beliefs influence their interpretation of data, the resulting conclusions may not reflect what the data show or what participants experienced.

The transferability problem

Transferability is the extent to which your study's findings can be applied beyond the specific context or sample studied. Applying knowledge from one context to a different context is how we can progress and make informed decisions. In quantitative research, the generalizability of a study is a key component that shapes the potential impact of the findings. In qualitative research, all data and knowledge that is produced is understood to be embedded within a particular context, so the notion of generalizability takes on a slightly different meaning. Rather than assuming that the study participants are statistically representative of the entire population, qualitative researchers can reflect on which aspects of their research context bear the most weight on their findings and how these findings may be transferable to other contexts that share key similarities.

How does bias affect research?

Research bias, if not identified and mitigated, can significantly impact research outcomes. The ripple effects of research bias extend beyond individual studies, impacting the body of knowledge in a field and influencing policy and practice. Here, we delve into three specific ways bias can affect research.

Distortion of research results

Bias can lead to a distortion of your study's findings. For instance, confirmation bias can cause a researcher to focus on data that supports their interpretation while disregarding data that contradicts it. This can skew the results and create a misleading picture of the phenomenon under study.

Undermining scientific progress

When research is influenced by bias, it not only misrepresents participants’ realities but can also impede scientific progress. Biased studies can lead researchers down the wrong path, resulting in wasted resources and efforts. Moreover, it could contribute to a body of literature that is skewed or inaccurate, misleading future research and theories.

Influencing policy and practice based on flawed findings

Research often informs policy and practice. If the research is biased, it can lead to the creation of policies or practices that are ineffective or even harmful. For example, a study with selection bias might conclude that a certain intervention is effective, leading to its broad implementation. However, if the transferability of the study's findings was not carefully considered, it may be risky to assume that the intervention will work as well in different populations, which could lead to ineffective or inequitable outcomes.


While it's almost impossible to eliminate bias in research entirely, it's crucial to mitigate its impact as much as possible. By employing thoughtful strategies at every stage of research, we can strive towards rigor and transparency, enhancing the quality of our findings. This section will delve into specific strategies for avoiding bias.

How do you know if your research is biased?

Determining whether your research is biased involves a careful review of your research design, data collection, analysis, and interpretation. It might require you to reflect critically on your own biases and expectations and how these might have influenced your research. External peer reviews can also be instrumental in spotting potential bias.

Strategies to mitigate bias

Minimizing bias involves careful planning and execution at all stages of a research study. These strategies could include formulating clear, unbiased research questions, ensuring that your sample meaningfully represents the research problem you are studying, crafting unbiased data collection instruments, and employing systematic data analysis techniques. Transparency and reflexivity throughout the process can also help minimize bias.

Mitigating bias in data collection

To mitigate bias in data collection, ensure your questions are clear, neutral, and not leading. Triangulation, or using multiple methods or data sources, can also help to reduce bias and increase the credibility of your findings.
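As a rough illustration of triangulation, the sketch below (hypothetical figures and an invented tolerance threshold) compares the same claim across two data sources and flags divergence for further investigation, rather than trusting either source alone.

```python
# A hypothetical sketch of triangulation: compare the same claim across two
# data sources before trusting a conclusion. All figures are invented.
survey_says = {"remote_work_preferred": 0.72}      # share agreeing in a survey
interviews_say = {"remote_work_preferred": 0.41}   # share agreeing in interviews

TOLERANCE = 0.15  # invented threshold for "the sources roughly agree"

for claim in survey_says:
    gap = abs(survey_says[claim] - interviews_say[claim])
    if gap > TOLERANCE:
        print(f"'{claim}': sources diverge by {gap:.2f}; investigate before concluding")
    else:
        print(f"'{claim}': sources agree within tolerance")
```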

Mitigating bias in data analysis

During data analysis, maintaining a high level of rigor is crucial. This might involve using systematic coding schemes in qualitative research or appropriate statistical tests in quantitative research. Regularly questioning your interpretations and considering alternative explanations can help reduce bias. Peer debriefing, where you discuss your analysis and interpretations with colleagues, can also be a valuable strategy.
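One widely used check on systematic coding is inter-rater agreement. The sketch below computes Cohen's kappa for two hypothetical coders; the codes and excerpts are invented for illustration.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Invented codes assigned to the same eight interview excerpts by two coders.
coder_1 = ["trust", "cost", "trust", "access", "cost", "trust", "access", "cost"]
coder_2 = ["trust", "cost", "access", "access", "cost", "trust", "access", "trust"]

print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.63
```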

By using these strategies, researchers can significantly reduce the impact of bias on their research, enhancing the quality and credibility of their findings and contributing to a more robust and meaningful body of knowledge.

Impact of cultural bias in research

Cultural bias is the tendency to interpret and judge phenomena by standards inherent to one's own culture. Given the increasingly multicultural and global nature of research, understanding and addressing cultural bias is paramount. This section will explore the concept of cultural bias, its impacts on research, and strategies to mitigate it.

What is cultural bias in research?

Cultural bias refers to the potential for a researcher's cultural background, experiences, and values to influence the research process and findings. This can occur consciously or unconsciously and can lead to misinterpretation of data, unfair representation of cultures, and biased conclusions.

How does cultural bias infiltrate research?

Cultural bias can infiltrate research at various stages. It can affect the framing of research questions, the design of the study, the methods of data collection, and the interpretation of results. For instance, a researcher might unintentionally design a study that does not consider the cultural context of the participants, leading to a biased understanding of the phenomenon being studied.

Implications of cultural bias

The implications of cultural bias are profound. Cultural bias can skew your findings, limit the transferability of results, and contribute to cultural misunderstandings and stereotypes. This can ultimately lead to inaccurate or ethnocentric conclusions, further perpetuating cultural bias and inequities.

As a result, many social science fields like sociology and anthropology have been critiqued for cultural biases in research. Some of the earliest research inquiries in anthropology, for example, reduced entire cultures to simplistic stereotypes by measuring them against mainstream norms. A contemporary researcher respecting ethical and cultural boundaries, on the other hand, should seek to place their understanding of social and cultural practices in sufficient context without mischaracterizing them.

Strategies to mitigate cultural bias

Mitigating cultural bias requires a concerted effort throughout the research study. These efforts could include educating oneself about other cultures, being aware of one's own cultural biases, incorporating culturally diverse perspectives into the research process, and being sensitive and respectful of cultural differences. It might also involve including team members with diverse cultural backgrounds or seeking external cultural consultants to challenge assumptions and provide alternative perspectives.

By acknowledging and addressing cultural bias, researchers can contribute to more culturally competent, equitable, and valid research. This not only enriches the scientific body of knowledge but also promotes cultural understanding and respect.


Keep in mind that bias is a force to be mitigated, not a phenomenon that can be eliminated altogether, and the subjectivities of each person are what make our world so complex and interesting. As things are continuously changing and adapting, research knowledge is also continuously being updated as we further develop our understanding of the world around us.


Research Bias: Definition, Types + Examples

By: busayo.longe

Sometimes, in the course of carrying out a systematic investigation, the researcher may influence the process intentionally or unknowingly. When this happens, it is termed research bias, and like every other type of bias, it can alter your findings.

Research bias is one of the dominant reasons for the poor validity of research outcomes. There are no hard and fast rules when it comes to research bias, which simply means that it can happen at any time if you do not pay adequate attention.

The spontaneity of research bias means you must take care to understand what it is, be able to identify its features, and ultimately avoid it or reduce its occurrence to the barest minimum. In this article, we will show you how to handle bias in research and how to create unbiased research surveys with Formplus.

What is Research Bias? 

Research bias happens when the researcher skews the entire process towards a specific research outcome by introducing a systematic error into the sample data. In other words, it is a process where the researcher influences the systematic investigation to arrive at certain outcomes. 

When any form of bias is introduced in research, it takes the investigation off-course and deviates it from its true outcomes. Research bias can also happen when the personal choices and preferences of the researcher have undue influence on the study. 

For instance, let’s say a religious conservative researcher is conducting a study on the effects of alcohol. If the researcher’s conservative beliefs prompt him or her to create a biased survey or have sampling bias, then this is a case of research bias.

Types of Research Bias 

  • Design Bias

Design bias has to do with the structure and methods of your research. It happens when the research design, survey questions, and research method are largely influenced by the preferences of the researcher rather than what works best for the research context.

In many instances, poor research design or a lack of synergy between the different contributing variables in your systematic investigation can infuse bias into your research process. Research bias also happens when the personal experiences of the researcher influence the choice of the research question and methodology.

Example of Design Bias  

A researcher who is involved in the manufacturing process of a new drug may design a survey with questions that only emphasize the strengths and value of the drug in question. 

  • Selection or Participant Bias

Selection bias happens when the research criteria and study inclusion method automatically exclude some part of your population from the research process. When you choose research participants that exhibit similar characteristics, you’re more likely to arrive at study outcomes that are uni-dimensional. 

Selection bias manifests itself in different ways in the context of research. Inclusion bias is particularly common in quantitative research, and it happens when you select participants to represent your research population while ignoring groups that have alternative experiences.

Examples of Selection Bias  

  • Administering your survey online; thereby limiting it to internet savvy individuals and excluding members of your population without internet access. 
  • Collecting data about parenting from a mother’s group. The findings in this type of research will be biased towards mothers while excluding the experiences of the fathers. 
  • Publication Bias

Peer-reviewed journals and other published academic papers, in many cases, have some degree of bias. This bias is often imposed on them by the publication criteria for research papers in a particular field. Researchers work their papers to meet these criteria and may ignore information or methods that are not in line with them. 

For example, research papers in quantitative research are more likely to be published if they contain statistical information. Qualitative studies, on the other hand, are more likely to go unpublished when study methodologies are not described in sufficient depth or findings are not clearly presented.

  • Analysis Bias

This is a type of research bias that creeps in during data processing. Many times, when sorting and analyzing data, the researcher may focus on data samples that confirm his or her thoughts, expectations, or personal experiences; that is, data that favors the research hypothesis. 

This means that the researcher, whether deliberately or unintentionally, ignores data samples that are inconsistent with the hypothesis or that suggest different research outcomes. Analysis bias can be far-reaching because it alters the research outcomes significantly and provides a false picture of what actually obtains in the research environment.

Example of Analysis Bias  

While researching cannabis, a researcher pays attention to data samples that reinforce the negative effects of cannabis while ignoring data that suggests positives.

  • Data Collection Bias

Data collection bias is also known as measurement bias, and it happens when the researcher’s personal preferences or beliefs affect how data samples are gathered in the systematic investigation. Data collection bias happens in both qualitative and quantitative research methods.

In quantitative research, data collection bias can occur when you use a data-gathering tool or method that is not suitable for your research population. For example, asking individuals who do not have access to the internet to complete a survey via email or your website.

In qualitative research, data collection bias happens when you ask bad survey questions during a semi-structured or unstructured interview. Bad survey questions are questions that nudge the interviewee towards implied assumptions. Leading and loaded questions are common examples of bad survey questions.

  • Procedural Bias

Procedural bias is a type of research bias that happens when the participants in a study are not given enough time to complete surveys. The result is that respondents end up providing half-thoughts and incomplete information that does not provide a true representation of their views.

There are different ways respondents can be subjected to procedural bias. For instance, asking respondents to complete a survey quickly to access an incentive may force them to fill in false information simply to get things over with.

Example of Procedural Bias

  • Asking employees to complete an employee feedback survey during break time. This timeframe puts respondents under undue pressure and can affect the validity of their responses.  

Bias in Quantitative Research

In quantitative research, the researcher often tries to deny the existence of any bias by attempting to eliminate every type of bias from the systematic investigation. Sampling bias is one of the most common types of quantitative research bias, and it is concerned with the samples you omit and/or include in your study.

Types of Quantitative Research Bias

  • Design Bias

Design bias occurs in quantitative research when the research methods or processes alter the outcomes or findings of a systematic investigation. It can occur when the experiment is being conducted or during the analysis of the data to arrive at a valid conclusion.

Many times, design biases result from the failure of the researchers to take into account the likely impact of the bias in the research they conduct. This makes the researcher ignore the needs of the research context and instead, prioritize his or her preferences. 

  • Sampling Bias

Sampling bias in quantitative research occurs when some members of the research population are systematically excluded from the data sample during research. It also means that some groups in the research population are more likely to be selected in a sample than the others. 

Sampling bias in quantitative research mainly occurs in systematic and random sampling. For example, a study about breast cancer that recruits only male participants can be said to have sampling bias, since it excludes women, the group most affected, from the research population.
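The effect of sampling bias is easy to demonstrate with a small simulation. In the hypothetical sketch below (all figures invented), a sampling frame that reaches only one subgroup produces a systematically inflated estimate, while a simple random sample tracks the true population value.

```python
import random

random.seed(42)

# Hypothetical population in which support for a policy differs by group;
# all figures are invented for illustration.
population = (
    [{"group": "urban", "supports": random.random() < 0.70} for _ in range(5000)]
    + [{"group": "rural", "supports": random.random() < 0.40} for _ in range(5000)]
)

def support_rate(people):
    return sum(p["supports"] for p in people) / len(people)

# Unbiased: a simple random sample drawn from the whole population.
random_sample = random.sample(population, 500)

# Biased: a sampling frame that reaches only the urban group.
urban_only = random.sample([p for p in population if p["group"] == "urban"], 500)

print(f"True support rate:      {support_rate(population):.2f}")     # ~0.55
print(f"Random sample estimate: {support_rate(random_sample):.2f}")  # ~0.55
print(f"Urban-only estimate:    {support_rate(urban_only):.2f}")     # ~0.70
```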

Bias in Qualitative Research

In qualitative research, the researcher accepts and acknowledges the bias without trying to deny its existence. This makes it easier for the researcher to clearly define the inherent biases and outline their possible implications while trying to minimize their effects.

Qualitative research defines bias in terms of how valid and reliable the research results are. Bias in qualitative research distorts the research findings and provides skewed data that undermines the validity and reliability of the systematic investigation.

Types of Bias in Qualitative Research

  • Bias from Moderator

The interviewer or moderator in qualitative data collection can impose several biases on the process. The moderator can introduce bias in the research based on his or her disposition, expression, tone, appearance, idiolect, or relation with the research participants. 

  • Biased Questions

The framing and presentation of the questions during the research process can also lead to bias. Biased questions like leading questions, double-barrelled questions, negative questions, and loaded questions can influence the way respondents provide answers and the authenticity of the responses they present.

The researcher must identify and eliminate biased questions in qualitative research or rephrase them if they cannot be taken out altogether. Remember that questions form the main basis through which information is collected in research and so, biased questions can lead to invalid research findings. 

  • Biased Reporting

Biased reporting is yet another challenge in qualitative research. It happens when the research results are altered due to personal beliefs, customs, attitudes, culture, and errors, among many other factors. It can also mean that the researcher analyzed the research data based on his or her own beliefs rather than the views expressed by the respondents.

Bias in Psychology

Cognitive biases can affect research and outcomes in psychology. For example, during a stop-and-search exercise, law enforcement agents may profile certain appearances and physical dispositions as indicative of law-abiding citizens. Due to this cognitive bias, individuals who do not match these profiles can be wrongly classified as criminals.

Another example of cognitive bias in psychology can be observed in the classroom. During a class assessment, an invigilator who is looking for physical signs of malpractice might mistakenly classify other behaviors as evidence of malpractice; even though this may not be the case. 

Bias in Market Research

There are 5 common biases in market research – social desirability bias, habituation bias, sponsor bias, confirmation bias, and cultural bias. Let’s find out more about them.

  • Social desirability bias happens when respondents fill in incorrect information in market research surveys because they want to be accepted or liked. It happens when respondents are seeking social approval and so, fail to communicate how they truly feel about the statement or question being considered. 

A good example would be market research to find out preferred sexual enhancement methods for adults. Some people may not want to admit that they use sexual enhancement drugs to avoid criticism or disapproval.

  • Habituation bias happens when respondents give similar answers to questions that are structured in the same way. Lack of variety in survey questions can make respondents lose interest, become non-responsive, and simply regurgitate answers.  

For example, multiple-choice questions with the same set of answer options can cause habituation bias in your survey. As a result, respondents simply choose answer options without reflecting on how well their choices represent their thoughts, feelings, and ideas.
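One common guard against habituation bias is varying the presentation of the response scale, for example reversing its direction for a random half of respondents. The sketch below is a hypothetical illustration of that idea; the questions and options are invented.

```python
import random

# Hypothetical sketch: reverse the response scale for a random half of
# respondents so that pattern-based (habituated) answering shows up as noise
# rather than a systematic drift toward one end of the scale.
OPTIONS = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
QUESTIONS = [
    "The checkout process was easy.",
    "The product met my expectations.",
    "I would recommend this brand.",
]

def render_survey_for(respondent_id):
    rng = random.Random(respondent_id)      # reproducible per respondent
    scale = OPTIONS[::-1] if rng.random() < 0.5 else OPTIONS
    for question in QUESTIONS:
        print(question)
        for option in scale:
            print(f"  ( ) {option}")

render_survey_for(respondent_id=101)
```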

  • Sponsor bias takes place when respondents have an idea of the brand or organization that is conducting the research. In this case, their perceptions, opinions, experiences, and feelings about the sponsor may influence how they answer the questions about that particular brand. 

For example, let’s say Formplus is carrying out a study to find out what the market’s preferred form builder is. Respondents may mention the sponsor for the survey (Formplus) as their preferred form builder out of obligation; especially when the survey has some incentives.

  • Confirmation bias happens when the overall research process is aimed at confirming the researcher’s perception or hypothesis about the research subjects. In other words, the research process is merely a formality to reinforce the researcher’s existing beliefs. 

Electoral polls often fall into the confirmation bias trap. For example, civil society organizations that are in support of one candidate can create a survey that paints the opposing candidate in a bad light to reinforce beliefs about their preferred candidate. 

  • Cultural bias arises from the assumptions we have about other cultures based on the values and standards we have for our own culture. For example, when asked to complete a survey about our culture, we may tilt towards positive answers. In the same vein, we are more likely to provide negative responses in a survey for a culture we do not like. 

How to Identify Bias in Research

  • Pay attention to research design and methods. 
  • Observe the data collection process. Does it lean overwhelmingly towards a particular group in the survey population? 
  • Look out for bad survey questions like loaded questions and negative questions. 
  • Observe the data sample you have to confirm if it is a fair representation of your research population.

How to Avoid Research Bias 

  • Gather data from multiple sources: Be sure to collect data samples from the different groups in your research population. 
  • Verify your data: Before going ahead with the data analysis, try to check in with other data sources, and confirm if you are on the right track. 
  • If possible, ask research participants to help you review your findings: Ask the people who provided the data whether your interpretations seem to be representative of their beliefs. 
  • Check for alternative explanations: Try to identify and account for alternative reasons why you may have collected data samples the way you did. 
  • Ask other members of your team to review your results: Ask others to review your conclusions. This will help you see things that you missed or identify gaps in your argument that need to be addressed.

How to Create Unbiased Research Surveys with Formplus 

Formplus has different features that would help you create unbiased research surveys. Follow these easy steps to start creating your Formplus research survey today: 

  • Go to your Formplus dashboard and click on the “create new form” button. You can access the Formplus dashboard by signing into your Formplus account here. 


  • After you click on the “create new form” button, you’d be taken to the form builder. This is where you can add different fields into your form and edit them accordingly. Formplus has over 30 form fields that you can simply drag and drop into your survey including rating fields and scales. 


  • After adding form fields and editing them, save your form to access the builder’s customization features. You can tweak the appearance of your form here by changing the form theme and adding preferred background images to it. 


  • Copy the form link and share it with respondents. 


Conclusion 

The first step to dealing with research bias is having a clear idea of what it is and being able to identify it in any form. In this article, we’ve shared important information about research bias that will help you identify it easily and reduce its effects to the barest minimum.

Formplus has many features and options that can help you deal with research bias as you create forms and questionnaires for quantitative and qualitative data collection. To take advantage of these, you can sign up for a Formplus account here. 


Research bias: What it is, Types & Examples

Research bias occurs when the researchers conducting an experiment modify the findings in order to present a specific outcome.

The researcher sometimes unintentionally or actively affects the process while executing a systematic inquiry. This is known as research bias, and it can affect your results just like any other sort of bias.

When it comes to research bias, there are no hard and fast guidelines, which simply means that it can occur at any time. Experimental mistakes and a lack of concern for all relevant factors can lead to research bias.

Research bias is one of the most common causes of study results with low credibility. Because of its informal nature, you must be cautious when characterizing bias in research. To reduce or prevent its occurrence, you need to be able to recognize its characteristics.

This article will cover what it is, its type, and how to avoid it.

Content Index

  • What is research bias?
  • How does research bias affect the research process?
  • Types of research bias with examples
  • How QuestionPro helps in reducing bias in a research process

What is research bias?

Research bias occurs when the researchers conducting an experiment modify the findings to present a specific outcome. It is often known as experimenter bias.

Bias is a characteristic of the research technique that makes it rely on experience and judgment rather than data analysis. The most important thing to know about bias is that it is unavoidable in many fields. Understanding research bias and reducing the effects of biased views is an essential part of any research planning process.

For example, it is much easier to become attached to a certain point of view when working with human subjects in social research, compromising fairness.

How does research bias affect the research process?

Research bias can majorly affect the research process, weakening its integrity and leading to misleading or erroneous results. Here are some examples of how this bias might affect the research process:

Distorted research design

When bias is present, study results can be skewed or wrong. It can make the study less trustworthy and valid. If bias affects how a study is set up, how data is collected, or how it is analyzed, it can cause systematic mistakes that move the results away from the true or unbiased values.

Invalid conclusions

It can make it hard to believe that the findings of a study are correct. Biased research can lead to unjustified or wrong claims because the results may not reflect reality or give a complete picture of the research question.

Misleading interpretations

Bias can lead to inaccurate interpretations of research findings. It can alter the overall comprehension of the research issue. Researchers may be tempted to interpret the findings in a way that confirms their previous assumptions or expectations, ignoring alternate explanations or contradictory evidence.

Ethical concerns

This bias poses ethical considerations. It can have negative effects on individuals, groups, or society as a whole. Biased research can misinform decision-making processes, leading to ineffective interventions, policies, or therapies.

Damaged credibility

Research bias undermines scientific credibility. Biased research can damage public trust in science. It may reduce reliance on scientific evidence for decision-making.

Types of research bias with examples

Bias can be seen in practically every aspect of quantitative research and qualitative research, and it can come from both the survey developer and the participants. The sorts of biases that come directly from the survey maker are the easiest to deal with out of all the types of bias in research. Let’s look at some of the most typical research biases.


Design bias

Design bias happens when a researcher fails to account for their own biased views in the design of an experiment. It has to do with the structure and methods of the research. The researcher must demonstrate that they realize this and have tried to mitigate its influence.

Another form of design bias develops after the research is completed, when the results are analyzed and reported in a way that does not reflect the researchers’ original concerns, which happens all too often these days.

For example, a researcher working on a survey containing questions about health benefits may overlook the limitations of the sample group. It’s possible that the group tested was all male or all over a particular age.

Selection bias or sampling bias

Selection bias occurs when volunteers are chosen to represent your research population, but those with different experiences are ignored. 

In research, selection bias manifests itself in a variety of ways. When the sampling method introduces preference into the research, this is known as sampling bias. For this reason, selection bias is also referred to as sampling bias.

For example, research on a disease that depended heavily on white male volunteers cannot be generalized to the full community, including women and people of other races or communities.

Procedural bias

Procedural bias is a sort of research bias that occurs when survey respondents are given insufficient time to complete surveys. As a result, participants are forced to submit half-formed thoughts and inaccurate information, which does not truly reflect their thinking.

Another sort of study bias arises from using individuals who are forced to participate, as they are more likely to rush through the survey so they can get on with other things.

For example, if you ask your employees to complete a survey during their break, they may feel pressured, which may compromise the validity of their responses.

Publication or reporting bias

A sort of bias that influences research is publication bias, also known as reporting bias. It refers to a condition in which favorable outcomes are more likely to be reported than negative or null ones. Analysis bias can also make it easier for reporting bias to happen.

The publication standards for research articles in a specific field frequently impose this bias. Researchers sometimes choose not to disclose their outcomes if they believe the data do not support their theory.

For example, there were seven studies of the antidepressant drug reboxetine; only one was published, while the others remained unpublished.
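A small simulation shows why this matters. In the hypothetical sketch below, 200 studies estimate a modest true effect, but only the studies with impressively large estimates are "published", so the published literature overstates the effect. The effect size, standard error, and cutoff are invented for illustration.

```python
import random
import statistics

random.seed(0)

# Hypothetical simulation: 200 small studies estimate a modest true effect,
# but only "impressive" estimates get published. All numbers are invented.
TRUE_EFFECT = 0.10
SE = 0.15                      # per-study standard error of the estimate

estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(200)]

# "Published" = estimate large enough to clear a significance-style cutoff
# (z = estimate / SE > 1.96), a crude stand-in for journals' preference
# for positive, significant results.
published = [e for e in estimates if e / SE > 1.96]

print(f"Mean effect, all studies:    {statistics.mean(estimates):+.2f}")
print(f"Mean effect, published only: {statistics.mean(published):+.2f}")
print(f"Share of studies published:  {len(published) / len(estimates):.0%}")
```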

Measurement or data collection bias

A defect in the data collection process or measuring technique causes measurement bias. Data collection bias is also known as measurement bias. It occurs in both qualitative and quantitative research methodologies. 

Data collection bias can occur in quantitative research when you use an approach that is not appropriate for your research population. Instrument bias is one of the most common forms of measurement bias in quantitative investigations: a defective scale, for example, would generate instrument bias and invalidate the experimental process in a quantitative experiment.

For example, you may ask those who do not have internet access to complete a survey by email or on your website.
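The defective-scale case mentioned above can be made concrete in a few lines. The sketch below is a hypothetical illustration (invented weights and offset) of why a constant miscalibration, unlike random noise, does not average out as more data is collected.

```python
import statistics

# Hypothetical illustration of instrument bias: a scale reading 0.5 kg heavy
# shifts every measurement by the same amount, so unlike random noise, the
# error does not average out as more data is collected.
true_weights = [61.2, 74.8, 58.3, 69.0, 82.5]   # invented values, in kg
INSTRUMENT_OFFSET = 0.5                          # systematic miscalibration

measured = [w + INSTRUMENT_OFFSET for w in true_weights]

print(f"True mean:     {statistics.mean(true_weights):.2f} kg")
print(f"Measured mean: {statistics.mean(measured):.2f} kg "
      f"(biased by exactly {INSTRUMENT_OFFSET} kg)")
```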

Data collection bias occurs in qualitative research when inappropriate survey questions are asked during an unstructured interview. Bad survey questions are those that lead the interviewee to make presumptions. Subjects are frequently hesitant to give socially unacceptable responses for fear of criticism.

For example, a subject may avoid giving answers that come across as homophobic or racist in an interview.

Some more types of bias in research include the ones listed here. Researchers must understand these biases and reduce them through rigorous study design, transparent reporting, and critical evidence review: 

  • Confirmation bias: Researchers often search for, evaluate, and prioritize material that supports their existing hypotheses or expectations, ignoring contradictory data. This can lead to a skewed perception of results and perhaps biased conclusions.
  • Cultural bias: Cultural bias arises when cultural norms, attitudes, or preconceptions influence the research process and the interpretation of results.
  • Funding bias: Funding bias takes place when research is supported by a sponsor with vested interests. The funding source can bias the research design, data collection, analysis, and interpretation in its favor.
  • Observer bias: Observer bias arises when the researcher or observer affects participants’ replies or behavior. Data collection might be biased by inadvertent cues, expectations, or subjective interpretations.


QuestionPro offers several features and functionalities that can contribute to reducing bias in the research process. Here’s how QuestionPro can help:

Randomization

QuestionPro allows researchers to randomize the order of survey questions or response alternatives. Randomization helps to remove order effects and limit bias from the order in which participants encounter the items.
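As a rough illustration of the general technique (this is not QuestionPro's actual API), the sketch below assigns each respondent a reproducible random question order, so that any order effects average out across the sample.

```python
import random

# Hypothetical sketch of question-order randomization (the general technique,
# not QuestionPro's actual API): each respondent sees the same questions in a
# stable but randomly assigned order, so order effects average out.
QUESTIONS = [
    "How satisfied are you with the product?",
    "How likely are you to recommend it?",
    "How fair do you find the pricing?",
]

def ordered_questions_for(respondent_id):
    rng = random.Random(respondent_id)  # seed by respondent for reproducibility
    order = QUESTIONS[:]
    rng.shuffle(order)
    return order

for question in ordered_questions_for(respondent_id=7):
    print(question)
```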

Branching and skip logic

Branching and skip logic capabilities in QuestionPro allow researchers to design customized survey pathways based on participants’ responses. This enables tailored questioning, ensuring that only pertinent questions are asked of participants. By avoiding irrelevant or needless questions, it reduces the bias such questions can generate.
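The sketch below illustrates the general idea of skip logic with a hypothetical three-question flow; it is not QuestionPro's API, and the questions and routing rules are invented.

```python
# A hypothetical sketch of skip logic (not QuestionPro's actual API): route
# respondents past questions that do not apply to them.
flow = {
    "q1": {"text": "Do you own a car?",
           "next": lambda ans: "q2" if ans == "yes" else "q3"},
    "q2": {"text": "How often do you drive it?", "next": lambda ans: "q3"},
    "q3": {"text": "How do you usually commute?", "next": lambda ans: None},
}

def run_survey(answers):
    """Walk the flow using pre-supplied answers (stand-ins for live input)."""
    qid = "q1"
    while qid is not None:
        question = flow[qid]
        ans = answers[qid]
        print(f"{question['text']} -> {ans}")
        qid = question["next"](ans)

run_survey({"q1": "no", "q2": "daily", "q3": "bus"})  # q2 is skipped
```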

Diverse question types

QuestionPro supports a wide range of question types, including multiple-choice, Likert scale, matrix, and open-ended questions. Researchers can choose the most relevant question types to get unbiased data while avoiding leading or suggestive questions that may affect participants’ responses.

Anonymous responses

QuestionPro enables researchers to collect anonymous responses, protecting the confidentiality of participants. It can encourage participants to provide more unbiased and equitable feedback, especially when dealing with sensitive or contentious issues.

Data analysis and reporting

QuestionPro has powerful data analysis and reporting options, such as charts, graphs, and statistical analysis tools. These features allow researchers to examine and interpret obtained data objectively, decreasing the role of bias in interpreting results.

Collaboration and peer review

QuestionPro supports peer review and researcher collaboration. It helps uncover and overcome biases in research planning, questionnaire formulation, and data analysis by involving several researchers and soliciting external opinions.

You must comprehend biases in research and how to deal with them. Knowing the different sorts of biases in research allows you to identify them readily, and a clear understanding of each is necessary to recognize bias in any form.

QuestionPro provides many research tools and settings that can assist you in dealing with research bias. Try QuestionPro today to undertake your original bias-free quantitative or qualitative research.


Frequently Asked Questions

Research bias affects the validity and dependability of your research’s findings, resulting in inaccurate interpretations of the data and incorrect conclusions.

Bias should be avoided in research to ensure that findings are accurate, valid, and objective.

 To avoid research bias, researchers should take proactive steps throughout the research process, such as developing a clear research question and objectives, designing a rigorous study, following standardized protocols, and so on.


Bias in research
  • Joanna Smith 1,
  • Helen Noble 2
  • 1 School of Human and Health Sciences, University of Huddersfield, Huddersfield, UK
  • 2 School of Nursing and Midwifery, Queen's University Belfast, Belfast, UK
  • Correspondence to: Dr Joanna Smith, School of Human and Health Sciences, University of Huddersfield, Huddersfield HD1 3DH, UK; j.e.smith{at}hud.ac.uk

https://doi.org/10.1136/eb-2014-101946


The aim of this article is to outline types of ‘bias’ across research designs, and consider strategies to minimise bias. Evidence-based nursing, defined as the “process by which evidence, nursing theory, and clinical expertise are critically evaluated and considered, in conjunction with patient involvement, to provide the delivery of optimum nursing care,” 1 is central to the continued development of the nursing profession. Implementing evidence into practice requires nurses to critically evaluate research, in particular assessing the rigour with which methods were undertaken and factors that may have biased findings.

What is bias in relation to research and why is understanding bias important?

Although different study designs have specific methodological challenges and constraints, bias can occur at each stage of the research process (table 1). In quantitative research, validity and reliability are assessed using statistical tests that estimate the size of error in samples and calculate the significance of findings (typically p values or CIs). The tests and measures used to establish the validity and reliability of quantitative research cannot be applied to qualitative research. However, in the broadest context, these terms are applicable, with validity referring to the integrity and application of the methods and the precision with which the findings accurately reflect the data, and reliability referring to the consistency within the analytical processes. 4
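As a minimal illustration of the kind of estimate described above, the sketch below computes an approximate 95% confidence interval for a sample mean. The data values are invented, and the normal critical value 1.96 is used for simplicity; a t critical value would be more appropriate for a sample this small.

```python
import math
import statistics

# A minimal sketch of an interval estimate: an approximate 95% confidence
# interval for a sample mean. The data values are invented, and 1.96 is the
# normal-approximation critical value; a t critical value (~2.36 for n = 8)
# would be more appropriate for a sample this small.
sample = [4.1, 3.8, 4.5, 4.0, 3.6, 4.2, 4.4, 3.9]

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error
margin = 1.96 * sem

print(f"mean = {mean:.2f}, 95% CI ≈ ({mean - margin:.2f}, {mean + margin:.2f})")
```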

Table 1: Types of research bias

How is bias minimised when undertaking research?

Bias exists in all study designs, and although researchers should attempt to minimise bias, outlining potential sources of bias enables greater critical evaluation of the research findings and conclusions. Researchers bring to each study their experiences, ideas, prejudices and personal philosophies, which if accounted for in advance of the study, enhance the transparency of possible research bias. Clearly articulating the rationale for and choosing an appropriate research design to meet the study aims can reduce common pitfalls in relation to bias. Ethics committees have an important role in considering whether the research design and methodological approaches are biased, and suitable to address the problem being explored. Feedback from peers, funding bodies and ethics committees is an essential part of designing research studies, and often provides valuable practical guidance in developing robust research.

In quantitative studies, selection bias is often reduced by the random selection of participants, and in the case of clinical trials randomisation of participants into comparison groups. However, not accounting for participants who withdraw from the study or are lost to follow-up can result in sample bias or change the characteristics of participants in comparison groups. 7 In qualitative research, purposeful sampling has advantages when compared with convenience sampling in that bias is reduced because the sample is constantly refined to meet the study aims. Premature closure of the selection of participants before analysis is complete can threaten the validity of a qualitative study. This can be overcome by continuing to recruit new participants into the study during data analysis until no new information emerges, known as data saturation. 8

In quantitative studies, having a well-designed research protocol explicitly outlining data collection and analysis can assist in reducing bias. Feasibility studies are often undertaken to refine protocols and procedures. Bias can be reduced by maximising follow-up and, where appropriate, in randomised controlled trials analysis should be based on the intention-to-treat principle, a strategy that assesses clinical effectiveness because not everyone complies with treatment and the treatment people receive may be changed according to how they respond. Qualitative research has been criticised for lacking transparency in relation to the analytical processes employed. 4 Qualitative researchers must demonstrate rigour, associated with openness, relevance to practice and congruence of the methodological approach. Although other researchers may interpret the data differently, appreciating and understanding how the themes were developed is an essential part of demonstrating the robustness of the findings. Reducing bias can include respondent validation, constant comparisons across participant accounts, representing deviant cases and outliers, prolonged involvement or persistent observation of participants, independent analysis of the data by other researchers and triangulation. 4
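The intention-to-treat principle mentioned above can be illustrated with a few hypothetical trial records: ITT analyses every participant in the arm they were randomised to, whereas a per-protocol analysis that drops non-completers can overstate effectiveness.

```python
# A minimal sketch of intention-to-treat (ITT) vs per-protocol analysis on
# hypothetical trial records. ITT analyses everyone in the arm they were
# randomised to, whether or not they completed treatment.
participants = [
    {"arm": "treatment", "completed": True,  "improved": True},
    {"arm": "treatment", "completed": False, "improved": False},
    {"arm": "treatment", "completed": True,  "improved": True},
    {"arm": "control",   "completed": True,  "improved": False},
    {"arm": "control",   "completed": True,  "improved": True},
    {"arm": "control",   "completed": False, "improved": False},
]

def improvement_rate(records, arm, per_protocol=False):
    group = [r for r in records
             if r["arm"] == arm and (r["completed"] or not per_protocol)]
    return sum(r["improved"] for r in group) / len(group)

print(f"ITT, treatment:          {improvement_rate(participants, 'treatment'):.2f}")
print(f"Per-protocol, treatment: "
      f"{improvement_rate(participants, 'treatment', per_protocol=True):.2f}")
```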

In summary, minimising bias is a key consideration when designing and undertaking research. Researchers have an ethical duty to outline the limitations of studies and account for potential sources of bias. This will enable health professionals and policymakers to evaluate and scrutinise study findings, and consider these when applying findings to practice or policy.

References

  • Wakefield AJ, Anthony A, et al.
  • The Lancet. Retraction—ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet 2010;375:445.
  • Easterbrook PJ, Berlin JA, Gopalan R, et al.
  • Petticrew M, Thomson H, et al.
  • Francis J, Johnston M, Robertson C, et al.
Competing interests None.


Methodological and Cognitive Biases in Science: Issues for Current Research and Ways to Counteract Them

Contents: 1. Introduction · 2. From “Science and Values” to “Values vs. Biases” · 3. Biases in Science · 4. Cognitive Biases in Science · 5. How to Counteract Biases in Science · 6. Conclusions

The author would like to thank two anonymous reviewers for their careful reading and revision of this paper. Special thanks to Torsten Wilholt and Jamie Shaw for our discussions about biases in science, and their thoughtful comments on earlier drafts. Thanks also to the audiences of the Conferencia Inaugural de Posgrados de Filosofía at Universidad San Buenaventura (Colombia) and the Seminario del Departamento de Lógica, Historia y Filosofía de la Ciencia at UNED (Spain), where earlier versions of the paper were presented. This paper is a product of the research project “A Philosophical Investigation of Biasing Mechanisms in Scientific Research” based at Leibniz Universität Hannover and Universidad de los Andes and funded by the Deutsche Forschungsgemeinschaft, Grant no. 462891071.

Manuela Fernández Pinto; Methodological and Cognitive Biases in Science: Issues for Current Research and Ways to Counteract Them. Perspectives on Science 2023; 31 (5): 535–554. doi: https://doi.org/10.1162/posc_a_00589

Arguments discrediting the value-free ideal of science have left us with the question of how to distinguish desirable values from biases that compromise the reliability of research. In this paper, I argue for a characterization of cognitive biases as deviations of thought processes that systematically lead scientists to the wrong conclusions. In particular, cognitive biases could help us understand a crucial issue in science today: how systematic error is introduced in research outcomes, even when research is evaluated as of good quality. To conclude, I suggest that some debiasing mechanisms have great potential for countering implicit methodological biases in science.

Philosophers of science have traditionally understood problems of bias in science in terms of the negative influence that personal, social, or political interests have on the research process. According to this Baconian view, science ought to be value-free in order to warrant the objectivity of its results. As philosophy of science moves away from this value-free ideal, acknowledging the different ways in which non-epistemic values play an inevitable role in scientific reasoning and practice (Longino 2002; Douglas 2009), different questions arise regarding biasing mechanisms in scientific inquiry. How should we understand the distinction between the inevitable values involved in scientific practice and the biases that steer scientific research away from its epistemic goals? Which types of bias are involved in scientific inquiry? How do they operate (e.g., are they implicit or explicit)? How do these biases impact the epistemic goals of research? How can such biases be identified, and what are potential measures to counteract them?

From a very different perspective, behavioral and cognitive psychologists have shown how the evolution of cognitive mechanisms has helped our species navigate a sea of information efficiently in order to survive. They have also uncovered the ways in which such cognitive mechanisms might today be leading us, over and over, to the wrong conclusions. Amos Tversky and Daniel Kahneman (1974) famously argued that humans do not make decisions rationally most of the time, but instead systematically commit the same types of mistakes, due to our cognitive biases. If this is correct, how can we make better decisions when we are “wired” to commit such mistakes? And in the case of science, how are scientists supposed to overcome such biasing mechanisms so that they do not compromise their research results? Are there any successful debiasing techniques that can help us move in the right direction?

In this paper, I argue that in order to identify and manage biases that compromise scientific results, it is crucial to pay attention to research from contemporary psychology, which points us to cognitive mechanisms that systematically skew our decisions and lead us to the wrong conclusions. 1 Acknowledging that cognitive biases affect scientific decision-making, much more than has previously been admitted, is key to understanding some of the problems that confront research today, and to finding ways to counteract them. To conclude, I suggest that debiasing mechanisms, such as cognitive forcing tools, have great potential for countering at least some biases in science.

The paper proceeds as follows. The second section presents a brief historical overview of the science and values debate leading to the distinction between values and biases. The third section presents a crucial issue with biases in science today: understanding how systematic error is introduced in research outcomes even when research is evaluated as being of good quality. In the fourth section, I introduce cognitive biases as a possible explanation for the problem highlighted in the third section. Finally, the fifth section analyses recent research on debiasing techniques and suggests how to implement them in scientific research and education.

According to the traditional conception of science, science ought to be value-free. In this view, the objectivity of scientific results can only be achieved when scientists leave their values “at the door,” for values are seen as corruptive, compromising the epistemic goals of research. For the past thirty years, philosophy of science has moved slowly but steadily away from this value-free ideal, acknowledging the different ways in which non-epistemic values play an inevitable role in scientific reasoning and practice.

A number of philosophers of science have argued against the value-free ideal. One of their strategies has been to challenge the conceptual framework that the ideal presupposes. For instance, some authors have challenged the distinction between epistemic and non-epistemic values (Longino 1995; Solomon 2001; Douglas 2009), others have denied we can make a clear-cut division between the internal and the external aspects of scientific research (Rudner 1953; Anderson 2004; Douglas 2009), and, even more radically, some have questioned the fact/value distinction (Dewey 1938; Anderson 2004; Clough 2015).

A more straightforward strategy, not necessarily incompatible with the previous one, has been to show that social and political values (should) play a legitimate role in scientific research, not only before and after the development of scientific research (e.g., when making decisions about funding certain lines of research over others, or determining how to apply research products or results), but also during scientific practice as such (e.g., when framing hypotheses, designing experiments, collecting and interpreting data, etc.). With respect to this second strategy, two lines of argument have been particularly influential: arguments from the underdetermination of theories by evidence, according to which social and political values are needed to close the gap between theory and evidence that underdetermination leaves open (Nelson 1990; Longino 1990; Kourany 2003), and inductive risk arguments, according to which scientists use social or ethical values to judge the risk of erring when accepting or rejecting a hypothesis (Rudner 1953; Douglas 2009). More recently, a third line of argument has been developed, challenging the lexical priority of evidence, i.e., challenging the privileged epistemic status that arguments from underdetermination and inductive risk give to evidence over values, and adjudicating to social values a primary role in scientific practice (Anderson 2004; Kourany 2010; Brown 2013).

Feminist philosophers of science, in particular, have been crucial to the critique of the value-free ideal. From important reformulations of the underdetermination argument (Nelson 1990; Longino 1990; Harding 1998; Kourany 2003) to the more radical challenge of the lexical priority of evidence (Anderson 2004; Kourany 2010), feminist philosophers have shown that many sexist and androcentric values have historically permeated scientific research and that more diverse and feminist values are needed for the improvement of scientific knowledge. Thus, most feminist philosophers of science have defended the importance of identifying appropriate social and political values for scientific research.

Deep down, what the objectors find worrisome about allowing value judgments to guide scientific inquiry is not that they have evaluative content, but that these judgments might be held dogmatically, so as to preclude the recognition of evidence that might undermine them. We need to ensure that value judgments do not operate to drive inquiry to a predetermined conclusion. This is our fundamental criterion for distinguishing legitimate from illegitimate uses of values in science. (Anderson 2004, p. 11)

Accordingly, the acceptance of the value-ladenness of science comes hand-in-hand with new questions about the legitimacy of values in science: Which values are legitimate for scientific inquiry? What roles can values legitimately play in scientific research? And how should we understand the distinction between the inevitable values involved in scientific practice and the biases that drive scientific research away from its epistemic goals? In this way, the conceptual distinction between values and biases can be useful for better understanding the new challenges that the rejection of the value-free ideal introduces for philosophers of science.

Wilholt (2009) has made an important contribution to this debate by showing that biases can compromise research results, and thus be regarded as epistemologically detrimental, even when acknowledging that science is value-laden. His analysis of preference bias, i.e., when research results unduly reflect the researcher’s preference over other possible results (2009, p. 92), as an epistemic shortcoming will help guide our broader analysis of biases in science. Although Wilholt acknowledges that the concept “bias” is polysemic, being used in different ways both in science and philosophy (2009, p. 92), in general the concept implies some sort of epistemic shortcoming; more specifically, an introduction of error that diverts the scientific process away from legitimate results. Wilholt, for instance, characterizes preference bias as “the infringement of an explicit or implicit conventional standard of the respective research community in order to increase the likelihood of arriving at a preferred result” (2009, p. 99). Other authors have defined bias as “systematic error” (Gluud 2006; Greenland and Lash 2008), “deviations beyond chance” (Ioannidis 2005), or “deviation from the truth beyond random error” (Ioannidis 2017). 2

Of course, the science and values debate, in which different arguments against the value-free ideal have been provided, has been framed in terms of “values” rather than “biases.” After all, the main argument of the critics of the ideal is that values have a role to play in scientific research, and that they can even be epistemically beneficial. Feminist philosophers of science, for example, have made a clear defense of feminist values as important vehicles for the achievement of scientific knowledge, e.g., through a feminist standpoint (Harding 1986), by diversifying the values of the scientific community (Longino 2002), or by promoting more general democratic values (Kourany 2010). So, we can understand why it was important for the critics of the value-free ideal to shift the conversation from talking about any values as bias, to acknowledging a proper role for values in science.

However, once we move beyond the value-free ideal, questions regarding the epistemically detrimental effects of some values in scientific inquiry remain. For it is still the case that some values, sometimes, have a negative influence on the research process, insofar as they introduce systematic errors that deviate research from its epistemic goals. In this sense, some values (or preferences, or things we privilege) can have a biasing effect on the research process. 3

Accordingly, one lesson we can take from the science and values debate is that while values can play a legitimate role in scientific inquiry, they can also play an illegitimate role, moving scientists away from their epistemic goals (as feminist philosophers have shown). In such cases, values have a biasing effect in research and should be properly handled so that they don’t compromise the production of scientific knowledge. Holman and Wilholt call this “the new demarcation problem” (2022). In this sense, philosophers of science who have argued against the value-free ideal can also argue for debiasing mechanisms in scientific inquiry without being inconsistent.

Resnik (2000) provides a preliminary taxonomy of biases in research, distinguishing (i) biases that emerge from human values (e.g., political ideologies or religious beliefs), (ii) psychological prejudices (e.g., anchoring bias), (iii) biases from social, cultural or economic conditions (e.g., financial biases), and (iv) biases from flawed methodological assumptions (e.g., craniometry’s assumption that intelligence depends on brain size and shape). While I consider this taxonomy an important first attempt at categorizing biases in science, there is room for improvement. First, the category of psychological biases might be better understood in terms of cognitive biases, as the current literature in social psychology suggests (see, e.g., Mercier and Sperber 2017), to emphasize that they are the result of the evolution of our cognitive capacities, i.e., of how our brains work, and not of other, broader psychological factors. Second, categories (i) and (iii) actually refer to broad social values that are not clearly distinguished in Resnik’s taxonomy; they also individuate biases on the basis of their cause, rather than stipulating where biases inhere (psychology) or what the content of a bias is (iv). And finally, category (iv) needs to be expanded to include not only flawed assumptions but also flawed methodological decisions more generally.

As previously mentioned, scientists usually understand biases as “deviations beyond chance” (Ioannidis 2005) or “systematic errors” (Greenland and Lash 2008) stemming from choices made during the research process. For the purposes of this paper, I will call these biases proper to the scientific context methodological biases, following Resnik’s type (iv) (2000). Examples of methodological biases include confounding bias (distortion of results due to a confounding variable), selection bias (violation of the selection validity conditions), publication bias (preferential publication of papers with positive outcomes), response bias (the tendency to answer untruthfully in surveys), and the like. Scientists have classified, studied, and learned to manage this sort of methodological bias (e.g., Sackett 1979; Vineis 2002; Chavalarias and Ioannidis 2010; Lash et al. 2014). However, other methodological biases, such as biases introduced through non-representative samples or misleading data presentation, are less well understood (Bero and Rennie 1996), given that they cannot be identified through standard statistical quality assessment tools.

As recent meta-analyses have systematically shown, industry-sponsored studies are significantly more likely to obtain results favoring sponsors than independently funded research (Bekelman et al. 2003; Lexchin et al. 2003; Sismondo 2008; Lundh et al. 2017). Surprisingly, the same meta-analyses have also shown that industry-sponsored studies have a lower risk of bias (e.g., of biases being introduced in the process of double-blinding the study), and their methodological quality is at least as good as, and sometimes better than, the quality of independent studies. 4 We know, then, that financial conflicts of interest do not necessarily lead to scientific fraud, but the precise mechanisms through which biased results enter the scientific process are not yet clear. Of course, scholars have offered different hypotheses or possible explanations of how this could be happening (inferior comparators, biased coding of events, selective reporting of favorable outcomes, spun conclusions, publication bias, and so on), and suggested that multiple factors (social, political, economic, etc.) could be playing a role in biasing research results beyond quality assessment tools (Lexchin et al. 2003, p. 1169; Lundh et al. 2013, p. 13). The precise mechanisms at play in a particular case are much harder to prove. However, the fact that scientific fraud is not necessarily related to these biases is also coherent with the results of empirical studies showing that scientists are unlikely to commit overt fraud, and that a great number of cases of questionable research practices might not be intentional (Fanelli 2009).

When conducting research, scientists face a number of methodological decisions arising at different stages of the research process: establishing the central question that the study aims to answer, designing a study to answer that question, conducting the study, drawing conclusions from the study, and finally publishing the study (Bero and Rennie 1996, p. 211). When designing a clinical study, for example, scientists have to choose the specific patient population for the trial and the specific comparator (e.g., a placebo or an existing available treatment) against which the new treatment will be tried; they also have to determine the dosage for both the control and the treatment groups, and specify an outcome or endpoint to measure, among other choices. Different considerations have to be taken into account in order to make such decisions: time and budget constraints, geographical constraints, laboratory constraints, the scientific talent and expertise available, the best and most efficient combination of choices to answer the question posed, etc. Despite the multiplicity of options, decisions regarding experimental design have a spectrum of epistemically legitimate choices, and methodological biases appear “precisely when making decisions beyond the spectrum of what is epistemically (or methodologically, if you prefer) appropriate, jeopardizing the reliability of the results” (Fernández Pinto 2019, p. 204).

The following example might help illustrate this problem. Comparator bias is a type of methodological bias that arises when choosing comparison groups and doses. In particular, comparator bias emerges “when treatments known to be beneficial are withheld from patients participating in controlled trials” (Mann and Djulbegovic 2013, p. 30). Given that new treatments are not compared to the best available therapies, comparator bias leads to suboptimal trial results, and thus represents an epistemic shortcoming.

Comparator bias can appear in many forms. To begin with, new treatments in clinical trials can be compared to a placebo or to an effective available treatment. Placebo-controlled trials are important when trying to determine the efficacy of new treatments, but they are not recommended when effective alternative treatments are already available. Accordingly, the Helsinki Declaration of Ethical Principles for Medical Research (1964) states that new treatments should always be tested against the best proven interventions, with only a few exceptions: (i) when no proven treatment is available, or (ii) when “for compelling and scientifically sound methodological reasons the use of any intervention less effective than the best proven one, the use of placebo, or no intervention is necessary to determine the efficacy or safety of an intervention” (WMA 1964).

Despite this, only about half of the drugs approved by the FDA present evidence of comparison with an already existing alternative treatment for market authorization (Goldberg et al. 2011). In addition, recent studies reveal that authors of clinical trials are not aware of relevant systematic reviews and previous trials when designing studies on new treatments (Mann and Djulbegovic 2013, p. 32).

Issues with placebo-controlled trials involve not only ethical concerns, given that control subjects are in many cases not given the best available therapy, but also epistemic concerns, given that trial results won’t tell us whether the new treatments are better or worse than the ones already on the market. In other words, placebo-controlled trials might tell us whether a treatment is better than nothing, but this is not optimal knowledge when other effective treatments are available: “What we really want to know, what would be significant in terms of advancing current knowledge, is whether the new treatment is better than the best available one” (Fernández Pinto 2019, p. 204).

Comparator bias also arises when choosing the available treatment to compare with the new treatment, as well as when choosing the relevant doses for comparison. The use of suboptimal alternative therapies, i.e., available therapies that have been proven not to be the best on the market, as well as the use of doses higher or lower than the standard doses of the alternative treatment, have been tactics successfully used to make new treatments appear efficacious or beneficial (Bero and Rennie 1996; Smith 2005; Mann and Djulbegovic 2013). In such cases, comparator bias moves us away from achieving relevant knowledge as well.
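
The epistemic cost of a weak comparator can be made concrete with a small simulation, which is my own illustration rather than an example from the paper; all effect sizes are hypothetical. The same new treatment looks impressive against placebo while being inferior to the best proven intervention:

```python
# Minimal sketch (hypothetical numbers): the same new treatment looks
# impressive against placebo yet inferior to the best available therapy.
import random

random.seed(7)

def trial_arm(mean_effect, n=200):
    # Simulated symptom-improvement scores for one trial arm.
    return [mean_effect + random.gauss(0, 3) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

placebo   = trial_arm(0.0)   # no active ingredient
new_drug  = trial_arm(5.0)   # hypothetical new treatment
best_drug = trial_arm(6.5)   # hypothetical best proven intervention

print(f"New drug vs placebo:        {mean(new_drug) - mean(placebo):+.1f}")
print(f"New drug vs best available: {mean(new_drug) - mean(best_drug):+.1f}")
```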

However, many times a comparator bias cannot be clearly identified through quality assessment tools, making it possible for a research study to appear of good or even excellent quality even when it has this bias. Since comparators are chosen during the design process, and they aren’t commonly justified in the publication process, the decision introducing the comparator bias is very likely to remain hidden from scrutiny. Granting that scientists do not want to bias their results deliberately, and thus leaving cases of overt scientific fraud aside, comparator bias might be unconsciously introduced by scientists who are not even aware of the biasing effects of their comparator choices. This is a disastrous combination epistemically speaking: scientists are likely unaware of their biases and the decision is kept hidden from third-party scrutiny. A more detailed understanding of cognitive biases and their mechanisms in the context of scientific inquiry would perhaps contribute to untangling this problem, as will become clear in the next section.

We must acknowledge that science is prone to the same cognitive biases that affect human behavior. The same cognitive system that allows us to understand and explain the world around us also sets limits on what we can know. Human cognitive capacities have adapted to make optimal decisions under environmental pressures, using different heuristics and biases (Tversky and Kahneman 1974; Greenwald and Banaji 1995; Fazio 2007; Gendler 2008; Payne and Gawronski 2010; Mercier and Sperber 2017). While this seems to work well most of the time (we are, after all, efficient decision-makers), such mechanisms can easily be misapplied, leading us to unwarranted and sometimes blatantly wrong conclusions (Lilienfeld et al. 2009). Cognitive biases affect scientific research in different ways. For instance, scientists are prone to asymmetric attention bias (double-checking unexpected results while giving a free pass to expected ones) and to just-so storytelling (giving unwarranted post hoc justifications, or “stories,” for the results of data analysis) (Nuzzo 2015).

According to the traditional view of science, scientists ought to evaluate the evidence supporting or rejecting a hypothesis independently of their previous beliefs. Karl Popper (1963) clearly illustrated this idea in his treatment of the demarcation problem: proper science, contrary to pseudoscience, is falsifiable; proper scientific theories should make risky predictions, i.e., hypotheses contrary to expectations, which should withstand the most rigorous attempts to refute them. Scientists, thus, should in principle aim hard to falsify their hypotheses instead of trying to confirm them. As we have learned from contemporary psychology, however, the human mind works in a very different way. In fact, our previous beliefs greatly influence our appreciation of new beliefs.

In this respect, a well-studied example of a cognitive bias that has a clear influence in scientific research is confirmation bias, also known as expectation bias. Confirmation bias is the tendency to believe or pay attention to evidence that confirms our expectations or beliefs, while ignoring or rejecting evidence that disconfirms or goes against our beliefs or expectations. As a cognitive bias, confirmation bias affects all human reasoning, including scientific reasoning.

Thus, contrary to Popper’s view, scientists might be more likely to design and conduct studies that confirm their hypotheses, than to find evidence that disconfirms them.

Explanations for the underlying mechanisms of confirmation bias include the desire to believe, information-processing biases, positive-test strategies, conditional reference frames, and error avoidance (Nickerson 1998). Evidence of the existence of confirmation bias in science comes both from the history of science (Nickerson 1998; Jeng 2006) and from empirical studies in different disciplines (Fugelsang et al. 2004; Marsh and Hanlon 2007). A good example of the former is Eddington’s expedition to confirm Einstein’s prediction that light would be bent by the gravitational field of the sun, a prediction that could be empirically verified by taking photographs of the sun during an eclipse. Accordingly, Eddington embarked on an expedition to West Africa to make the relevant observations of the total solar eclipse of May 29, 1919.

As the official story goes, the evidence collected by Eddington during the eclipse, and later accepted by the Royal Society in London, was key in providing empirical confirmation of Einstein’s theory of general relativity and, more generally, in the worldwide acceptance of the new theory. However, later re-examinations of the historical record (e.g., Collins and Pinch 1993) have uncovered important measurement errors, as well as the discarding, without proper justification, of unfitting photographs, in particular the eighteen plates from the Sobral expedition to Brazil, where a second team had been sent to record the 1919 eclipse from a different location.

… there was nothing inevitable about the observations themselves until Eddington, the Astronomer Royal, and the rest of the scientific community had finished with their after-the-fact determinations of what the observations were to be taken to be. Quite simply, they had to decide which observations to keep and which to throw out in order that it could be said that the observations had given rise to any numbers at all. (Collins and Pinch 1993, p. 51)

If human decision-making is systematically prone to bias, scientific decision-making is prone to bias as well. Of course, scientists have developed many interesting mechanisms to deal with such bias, such as randomization, double-blinding, and peer review (Resnik 2014). However, as I have tried to show, there are some instances of decision-making in the research process that are still hidden from any third-party scrutiny and that rely, mistakenly, on the individual scientist’s rational capacities. We now know, however, that individual scientists are bad judges of their own biases (Nuzzo 2015), and that they are left in a very vulnerable position when their decisions are left unchecked. In particular, they are prone to introducing biases in research, such as the comparator bias, not because they want to bias their results deliberately, but because they might be unaware of the cognitive biases implicitly guiding their decisions, e.g., a confirmation bias guiding their choice of comparator. In this way, a systematic error might be introduced in the research process even without the scientist being aware of the problem, just because of the scientist’s own cognitive mechanisms.

Acknowledging that individual scientists are prone to cognitive biases, like any other human being, is the first step to understanding how a series of biases might be populating scientific research today, as some meta-analyses suggest (Lundh et al. 2017), but with no deliberately fraudulent behavior involved (Fanelli 2009). Of course, the fact that scientists are mostly unaware of such biases does not mean that these biases are not problematic. They systematically lead to inadequate results, compromising the epistemic goals of science. In this sense, they ought to be identified and managed. Their implicit character just makes it much more difficult to do so.

To start thinking about how to counteract biases in research, it is crucial to acknowledge that biases can be the result of implicit or explicit cognitive attitudes. While implicit attitudes tend to operate automatically and outside our awareness, explicit attitudes are the result of cognitive deliberation, and agents are often aware of them (Briñol et al. 2009). Cognitive biases appear as intuitive evolutionary responses most of the time (Croskerry et al. 2013), and thus are mostly implicit and difficult to track. Even though methodological biases can be the result of implicit or explicit attitudes, for the purposes of this paper I am mostly interested in methodological biases introduced through implicit attitudes. Methodological biases introduced through explicit attitudes, as I mentioned before, are cases of scientific fraud, and must be judged accordingly. Let us focus, then, on counteracting mechanisms for implicit methodological biases.

I find the influence of implicit biases in science particularly relevant for the purpose of understanding the more likely biasing mechanisms in research, given my assumption that most scientists are good professionals and are unlikely to bias their research projects deliberately (for empirical evidence supporting this claim, see Fanelli 2009). Nevertheless, they are certainly prone to biases due to automatic cognitive mechanisms, learned social stereotypes, or practice-entrenched methodological decisions.

The central question regarding the influence of implicit biasing mechanisms in research has to do with the possibility of counteracting biases that occur in such an apparently uncontrollable fashion: Is research inevitably prone to implicit bias, or are there effective debiasing techniques that scientists can implement to avoid this problem? Neuroscientists, cognitive psychologists, and social psychologists have been exploring this question in detail in recent decades.

Contrary to previous models (e.g., Rydell et al. 2006), recent research suggests that implicit attitudes have the potential to change through both associative (implicit) and deliberative (explicit) information. In 2009, Briñol and his colleagues conducted a study to measure whether rational deliberation can impact automatic evaluations, in the context of faculty hiring. Participants in the study “received a persuasive message in favor of a new policy to integrate more African American professors into the university. This message was composed of either weak or strong arguments in favor of the proposal” (2009, p. 294). By using arguments of different quality, the researchers aimed to measure the influence of rational thinking on automatic responses, assuming that differentiating between weak and strong arguments requires more deliberation. In order to measure implicit racial attitudes among participants, the researchers used the Implicit Association Test (IAT). 5 The conclusion of the study states: “we expected and found argument quality to influence automatic evaluations depending on the extent of message processing. That is, under high elaboration conditions, automatic evaluations were found to be more positive toward Blacks for the strong than the weak message” (2009, p. 295). The study suggests, then, that the use of deliberative information prompting subjects to rational thinking has the potential to neutralize implicit bias, at least during the timeframe of the experiment. 6

Other studies have also produced optimistic results regarding the possibility of counteracting implicit biases. Cognitive forcing tools, such as mnemonics (O’Sullivan and Schofield 2019), as well as implementation intentions, practice-based training, and goal priming (Sheeran et al. 2013), have shown promising effects in modifying implicit bias. O’Sullivan and Schofield (2019), for example, conducted a randomized controlled study in which they gave doctors in the treatment group a cognitive mnemonic tool called “SLOW,” with the aim of slowing down clinical reasoning in order to improve diagnostic accuracy. The SLOW tool is an acronym for a series of questions related to the diagnostic process: Sure about that? What is Lacking? What if the Opposite is true? Worst case scenario? Volunteers were given cases to diagnose, and those in the treatment group were asked to use the tool when making the diagnosis. SLOW produced “a subjectively positive impact on doctors’ accuracy and thoughtfulness in clinical cases” (2019, p. 1). More generally, Croskerry (2002) has developed a catalog of biases and debiasing tools that have shown some effectiveness.

Even though debiasing mechanisms are costly, since they require vigilance and reflection on our own behavior (Croskerry 2015), they can be effective under the appropriate circumstances (Lilienfeld et al. 2009). In particular, given that the scientific environment is one of strict and rigorous controls, it seems especially well adapted to implementing debiasing techniques. 7 A promising example in this respect comes from the field of medicine, and more particularly from Intensive Care Units (ICUs), where the implementation of simple checklists has proven extraordinarily successful in reducing the human error that traditionally led to a high number of central line infections, cases of untreated pain, and stomach ulcers (Pronovost et al. 2003; Berenholtz et al. 2004; Pronovost et al. 2006). Taking advantage of the high levels of rigor and thoroughness expected from caregivers in ICUs, Dr. Pronovost’s simple checklists have made unprecedented improvements in patient care (Gawande 2009). I consider that similar cognitive forcing tools have tremendous potential in the, also rigorous and thorough, scientific research environment. Perhaps even simple tools, such as a 5-point checklist implemented during the design phase of research studies, could prevent the introduction of at least some implicit methodological biases in the research process. The actual design and implementation of such tools, as well as their empirical evaluation, are still needed.
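
As a thought experiment only, such a design-phase checklist could be as simple as the sketch below. The five items are my own illustrative questions, not a validated instrument from the paper or from Pronovost’s work:

```python
# Minimal sketch of a hypothetical 5-point design-phase checklist,
# in the spirit of the cognitive forcing tools discussed above.
# The items are illustrative, not a validated instrument.
DESIGN_CHECKLIST = [
    "Is the comparator the best proven intervention (not just placebo)?",
    "Are doses in all arms consistent with standard clinical practice?",
    "Is the sample drawn so that the target population is represented?",
    "Will all prespecified outcomes be reported, favourable or not?",
    "Has someone outside the team reviewed these design choices?",
]

def run_checklist(answers):
    """Return the items that were not affirmed and need revisiting."""
    return [item for item, ok in zip(DESIGN_CHECKLIST, answers) if not ok]

# Example: the third and fourth questions could not be answered 'yes'.
for item in run_checklist([True, True, False, False, True]):
    print("REVISIT:", item)
```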

Arguments discrediting the value-free ideal in science have left us with the question of how to distinguish desirable values from biases that compromise the reliability of research. In this paper, I argued that cognitive biases could help us understand how systematic error is introduced in research outcomes, even when research is evaluated as of good quality. Using comparator bias as an example, I showed how cognitive mechanisms might be behind the introduction of such bias in contemporary clinical studies, and how this possibility becomes crucial for figuring out ways for countering such biases. To conclude, I suggest that debiasing mechanisms, such as cognitive forcing tools, have great potential for countering implicit methodological biases in science.

Notes

1. Notice that not everything we call “bias” falls within the domain of psychology. However, for the purposes of this paper, I will focus only on cognitive biases studied in psychology.

2. For a broader characterization of bias in philosophy of science, see Bueter (2022).

3. Notice that in this sense biases can result from the influence of values in scientific research, but also from other, perhaps accidental, causes. Although I am mostly interested in the relation between values and biases in this paper, it is important to keep in mind that biases might also emerge from other sources.

4. Of course, “research quality” in these studies is assessed according to the available quality assessment tools, which have been designed to measure specific risks of bias (e.g., blinding, drop-out, sample size), while other risks are left completely unassessed (e.g., comparator choice, outcome reporting, publication bias). So while a study can appear to be of high quality and at low risk of bias according to the quality assessment tools, it can certainly be suffering from other biasing mechanisms that remain invisible even when checked with the traditional filters.

5. One must note that IATs have received important critiques in two main areas. First, the tests use the speed of response as a proxy for determining the agent’s biases, and some have argued that this proxy is not adequate (Mitchell and Tetlock 2006). Second, we have evidence that the tests are not stable over time for the same individual, i.e., factors such as the time of day, the person’s mood, or even whether the person is hungry can influence the test results (Cooley and Payne 2017). Despite these problems, we also have evidence that IATs are stable at a group level, and even for same-age groups within the larger population (Payne et al. 2017).

6. Brownstein (2018, p. 170) has suggested that it might not be the logical or rational force of the argument, but perhaps the positive or negative feelings associated with the evaluation of the applicants, in this case the bad feelings associated with the possibility of being biased against African-American candidates, that prompt the unbiased response. In any case, there is an apparently successful debiasing mechanism in place here.

7. One might argue that the scientific process has already implemented several cognitive forcing tools, and that, for example, the quality assessment tools mentioned earlier are precisely an example of how scientists work to avoid biases in their research process. Strict record-keeping in laboratory notebooks could be another example of such forcing mechanisms. While I agree in general with this argument, developments in cognitive psychology, and in debiasing mechanisms more specifically, can help us further develop cognitive forcing tools for the research process, especially for those biasing mechanisms that we do not yet handle properly, such as the methodological biases I have presented in this paper. I thank one anonymous reviewer for pointing this out.


What is Research Bias - Types & Examples

Research is crucial in generating knowledge and understanding the world around us. However, the validity and reliability of research findings can be compromised by various factors, including bias. This guide explains the main types of research bias, with examples. But before that, let’s look at the definition of research bias.

What is Research Bias?

Research bias refers to the systematic errors or deviations from the truth that can occur during the research process, leading to inaccurate or misleading results. It arises from flaws in the research design, data collection, analysis, and interpretation, which can distort the findings and conclusions. Bias in research can occur at any stage of the research process and may be unintentional or deliberate. Recognising and addressing research bias is crucial for maintaining the integrity and credibility of scientific research.

Example of Bias in Research

Suppose a researcher wants to investigate the relationship between coffee consumption and heart disease risk. They recruit participants for their study and ask them to self-report their coffee intake through a questionnaire. Bias can occur in this scenario due to self-reporting bias, where participants may provide inaccurate or biased information about their coffee consumption.

For example, health-conscious individuals might underreport their coffee intake because they perceive it as unhealthy, while coffee enthusiasts might overreport their consumption due to their positive attitude towards coffee.

Types of Research Bias

There are many different types of research bias. The main ones discussed below are information bias, publication bias, interviewer bias, response bias, researcher bias, selection bias, and cognitive bias.

Information Bias

Information bias is also known as measurement bias. It refers to a type of research bias that occurs when there are errors or distortions in gathering, interpreting, or reporting information in a research study or any other form of data collection.

Example of Information Bias In Research

Let's say you are studying the effectiveness of a new weight loss program. You recruit participants and ask them to keep a daily food diary to track their caloric intake. However, the participants know that they are being monitored and may alter their eating habits, consciously or unconsciously, to present a more favourable image of themselves.

In this case, the participants' awareness of being observed can lead to information bias in research. They might underreport their consumption of high-calorie foods or overreport their consumption of healthy foods, skewing the data collected. This research bias could make the weight loss program appear more effective than it actually is because the reported dietary intake doesn't accurately reflect the participants' true behaviour.

Types of Information Bias In Research

Information bias can manifest in different ways, such as:

1. Measurement Bias

Measurement bias occurs when the measurement instruments or techniques used to collect data are flawed or inaccurate. For example, if a survey question is poorly worded or ambiguous, it may generate biased responses or misinterpretations of the respondents' answers.

2. Recall Bias

Recall bias arises when participants in a study inaccurately remember or recall past events, experiences, or behaviours. It can happen due to various factors, such as selective memory, social desirability bias, or the passage of time. Recall bias causes distorted or unreliable data.

3. Reporting Bias

Reporting bias occurs when there is selective or incomplete reporting of study findings. It can happen if researchers or organisations only publish or publicise results that support their preconceived notions or desired outcomes while omitting or downplaying contradictory or unfavourable findings. Reporting bias can lead to a skewed perception of the true state of knowledge in a particular field.

4. Publication Bias

Publication bias refers to the tendency of researchers, journals, or other publishing entities to publish studies with statistically significant or positive results preferentially. Studies with null or negative findings are often less likely to be published, leading to an overrepresentation of positive results in the literature and potentially distorting the overall understanding of a research topic.

5. Language Bias

This bias can transpire if research is conducted and reported in a specific language, leading to limited accessibility and potential exclusion of relevant studies or data published in other languages. Language bias can introduce distortions in systematic reviews, meta-analyses, or other forms of evidence synthesis.

Publication Bias

Publication bias occurs due to the systematic tendency of scientific journals and researchers to preferentially publish studies with positive or significant results while overlooking or rejecting studies with negative or non-significant findings. It transpires when the decision to publish a study is influenced by the nature or direction of its results rather than its methodological rigour or scientific merit.

Publication bias in research can arise due to various factors, such as researchers' and journals' preferences for novel or groundbreaking findings, the pressure to present positive results to secure funding or advance academic careers, and the tendency of studies with positive results to generate more attention and citations. This research bias can distort the overall body of scientific literature, leading to an overrepresentation of studies with positive outcomes and an underrepresentation of studies with negative or inconclusive findings.

Example of Publication Bias In Research

Let's say a pharmaceutical company conducts a clinical trial to test the effectiveness of a new drug for treating a certain medical condition. The company runs several trials but submits for publication only those whose results show the drug to be effective, because negative results might jeopardise future funding.
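
A short simulation, using made-up numbers rather than any real trial data, shows why this selective publication is so damaging: even a drug with zero true effect looks effective when only positive, statistically significant trials reach the journals.

```python
# Minimal sketch (simulated data): a drug with zero true effect still
# looks effective if only positive, significant results are published.
import random

random.seed(3)
TRUE_EFFECT = 0.0
SE = 1.0  # standard error of each small trial's effect estimate

estimates = [TRUE_EFFECT + random.gauss(0, SE) for _ in range(1000)]
# 'Publish' only trials that are positive and significant at the 5% level.
published = [e for e in estimates if e > 1.96 * SE]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Mean effect, all trials:       {mean(estimates):+.2f}")   # ~0.0
print(f"Mean effect, published trials: {mean(published):+.2f}")   # ~+2.3
print(f"Published {len(published)} of {len(estimates)} trials")
```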

Interviewer Bias

Interviewer bias refers to the potential for an interviewer's bias or prejudice to influence the outcome of an interview. It happens when the interviewer's personal beliefs, preferences, stereotypes, or prejudices affect their evaluation of the interviewee's qualifications, skills, or suitability for a position.

Example of Interviewer Bias In Research

Imagine there is an interviewer named James conducting interviews for a sales position in a company. During one interview, a candidate named Aisha, who is a woman, showcases exceptional knowledge about the products, demonstrates excellent communication skills, and presents a strong sales track record.

However, James thinks women are generally less assertive or aggressive in sales roles than men. Because of this stereotype, James may subconsciously underestimate Aisha's abilities or question her suitability for the position, despite her impressive qualifications.

Types of Interviewer Bias In Research

The main types of interviewer bias are:

1. Stereotyping

Stereotyping refers to holding preconceived notions or stereotypes about certain groups of people based on their race, gender, age, religion, or other characteristics. These biases can lead to unfair judgments or assumptions about the interviewee's abilities.

2. Confirmation Bias

With confirmation bias, interviewers may subconsciously seek information that confirms their pre-existing beliefs or initial impressions of the interviewee. This results in selectively noticing and emphasising responses or behaviours that align with their biases while disregarding contradictory evidence.

3. Similarity Bias

Similarity bias means unconsciously favouring candidates with similar backgrounds, experiences, or characteristics, resulting in a preference for more relatable or familiar candidates. This leads to overlooking qualified candidates from diverse backgrounds.

4. Halo and Horns Effect

The halo effect occurs when an interviewer forms an overall positive impression of a candidate based on one favourable characteristic, leading to a bias in favour of that candidate. Conversely, the horns effect occurs when a negative impression of a candidate's single attribute influences the overall evaluation, resulting in a bias against the candidate.

5. Contrast Effect

The contrast effect occurs when candidates are evaluated relative to each other rather than against objective criteria, leading to biased judgments. If the previous candidate was exceptionally strong or weak, the current candidate might be evaluated more harshly or more leniently.

6. Implicit Bias

Interviewers may have unconscious biases influencing their perceptions and decision-making. Societal stereotypes often form these biases and can affect evaluations and decisions without the interviewer's conscious awareness.

Response Bias

Response bias arises from a systematic error or distortion in how individuals respond to survey questions or provide information in research studies. It occurs when respondents consistently tend to answer questions inaccurately or in a particular direction, leading to a skewed or biased dataset.

Example of Response Bias In Research

You conduct a survey asking people about their exercise habits and distribute the survey to a group of individuals. You ask them to report the number of times they exercise per week. However, some respondents may feel pressured to provide answers they believe are more socially acceptable. They might overstate their exercise frequency to present themselves as more active and health-conscious. This would result in an overestimation of exercise habits in the data.

Types of Response Bias In Research

We have discussed a few common types of response bias below. Other major types include courtesy bias and extreme responding.

1. Social Desirability Bias

This occurs when respondents provide answers that they perceive to be more socially acceptable or desirable than their true beliefs or behaviours. They may modify their responses to conform to societal norms or present themselves favourably.

2. Acquiescence Bias

Also known as "yea-saying," acquiescence bias is the tendency of respondents to agree with statements or questions without carefully considering their content. Some individuals are predisposed to consistently agree (acquiesce) with items, while others consistently disagree ("nay-saying"), leading to skewed responses.

3. Non-Response Bias

This bias emerges when individuals who choose not to participate in a study or survey have different characteristics or opinions compared to those who do participate.

Researcher Bias

Researcher bias, also known as experimenter bias or investigator bias, refers to the influence or distortion of research findings or interpretations due to the personal beliefs, preferences, or expectations of the researcher conducting the study. It occurs when the researcher's subjective biases or preconceived notions unconsciously affect the research process, leading to flawed or biased results.

Example of Researcher Bias In Research

Assume that a researcher is conducting a study on the effectiveness of a new teaching method for improving student performance in mathematics. The researcher strongly believes the new teaching method will significantly enhance students' mathematical abilities.

To test the method, the researcher divides students into two groups: the control group, which receives traditional teaching methods, and the experimental group, which receives the new teaching method.

During the study, the researcher spends more time interacting with the experimental group, providing additional support and encouragement. They unintentionally convey their enthusiasm for the new teaching method to the students in the experimental group while giving the control group less attention and encouragement.

When the post-test results come in, the experimental group shows a statistically significant improvement in mathematical performance compared to the control group. Influenced by their initial beliefs and unintentional differential treatment, the researcher concludes that the new teaching method is highly effective in enhancing students' mathematical abilities.


Selection Bias

Selection bias refers to a systematic error or distortion that occurs in a research study when the participants or subjects included in the study are not representative of the target population. This research bias arises when the process of selecting participants for the study is flawed or biased in some way, leading to a sample that does not accurately reflect the characteristics of the broader population.

Example of Selection Bias In Research

Suppose a research team wants to evaluate a weight loss program's effectiveness and recruits participants by placing an advertisement in a fitness magazine. The advertisement attracts health-conscious individuals who are actively seeking ways to lose weight. As a result, the study sample consists primarily of individuals who are highly motivated to lose weight and may already have tried other weight loss methods.

The sample is biased towards individuals more likely to succeed in weight loss due to their pre-existing motivation and experience.
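
This recruitment effect is easy to simulate; all the numbers below (true program effect, motivation rates, sample sizes) are invented purely for illustration. Because everyone reached through the fitness magazine is motivated, the program's average result looks better than it would in a representative sample:

```python
# Minimal sketch (simulated, hypothetical numbers): recruiting only
# highly motivated volunteers inflates the program's apparent effect.
import random

random.seed(11)

def weight_loss(motivated):
    base = 2.0                       # hypothetical true program effect (kg)
    bonus = 3.0 if motivated else 0  # motivated people lose extra weight anyway
    return base + bonus + random.gauss(0, 1)

# In the target population, only 20% are highly motivated.
population = [random.random() < 0.2 for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

# Biased sample: everyone recruited via the fitness magazine is motivated.
biased_sample = [weight_loss(True) for _ in range(300)]
# Representative sample: motivation mirrors the population rate.
fair_sample = [weight_loss(m) for m in random.sample(population, 300)]

print(f"Biased sample mean loss:         {mean(biased_sample):.1f} kg")  # ~5.0
print(f"Representative sample mean loss: {mean(fair_sample):.1f} kg")    # ~2.6
```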

Types of Selection Bias In Research

Selection bias can occur in various forms and impact both observational and experimental studies. Some common types of selection bias include:

1. Non-Response Bias

This occurs when individuals chosen for the study do not participate or respond, leading to a sample that differs from the target population. Non-response bias can introduce bias in research if those who choose not to participate have different characteristics from those who do participate.

2. Volunteer Bias

Volunteer bias happens when participants self-select or volunteer to participate in a study. This can lead to a sample not representative of the broader population because volunteers may have different characteristics, motivations, or experiences compared to those who do not volunteer.

3. Healthy User Bias

This research bias can occur in studies that examine the effects of a particular intervention or treatment. It arises when participants who follow a certain lifestyle or treatment regimen are healthier or have better health outcomes than the general population, leading to overestimating the treatment's effectiveness.

4. Berkson's Bias

This research bias occurs in hospital-based studies where patients are selected based on hospital admission. Since hospital-based studies typically exclude healthy individuals, the sample may consist of patients with multiple conditions or diseases, leading to an artificial association between certain variables.

5. Survivorship Bias

Survivorship bias happens when the sample includes only individuals or entities that have survived a particular process or undergone a specific experience. This bias can lead to an inaccurate understanding of the entire population since it neglects those who did not survive or dropped out.

Cognitive Bias

A cognitive bias refers to systematic patterns of deviation from rational judgment or decision-making processes, often influenced by subjective factors and unconscious mental processes. These research biases can affect how we interpret information, make judgments, and form beliefs. Cognitive biases can be thought of as shortcuts or mental filters that our brains use to simplify complex information processing.

Example of Cognitive Bias In Research

Assume that a researcher is investigating the effects of a new drug on a particular medical condition. Due to prior experiences or personal beliefs, the researcher has a positive view of the drug's effectiveness. During the research process, they may unconsciously focus on collecting and analysing data that supports their preconceived notion of the drug's efficacy, while paying less attention to data that suggests the drug has limited or no impact.

Types of Cognitive Bias In Research

Some of the most common types of cognitive bias are discussed below.

1. Confirmation Bias

The tendency to seek, interpret, or remember information in a way that confirms one's existing beliefs or hypotheses while disregarding or downplaying contradictory evidence.

2. Availability Heuristic

This research bias occurs when you overestimate the importance or likelihood of events based on how easily they come to mind or how vividly they are remembered.

3. Anchoring Bias

Relying too heavily on the first piece of information encountered (the "anchor") when making decisions or estimations, even if it is irrelevant or misleading.

4. Halo Effect

The halo effect happens when you generalise positive or negative impressions of a person, company, or brand based on a single characteristic or initial experience.

5. Overconfidence Effect

The tendency to overestimate one's abilities, knowledge, or the accuracy of one's beliefs and predictions.

6. Bandwagon Effect

The tendency to adopt certain beliefs or behaviours because others are doing so, often without critical evaluation or independent thinking.

7. Framing Effect

The framing effect refers to how the information presented or "framed" can influence decision-making, emphasising the potential gains or losses, leading to different choices even when the options are objectively the same.

How to Avoid Research Bias?

Avoiding research bias is crucial for maintaining the integrity and validity of your research findings. Here are some strategies on how to minimise research bias:

  • Formulate a clear and specific research question that outlines the objective of your study. This will help you stay focused and reduce the chances of introducing research bias.
  • Perform a thorough literature review on your topic before starting your research. This will help you understand the current state of knowledge and identify potential biases or gaps in the existing research.
  • Use randomisation and blinding techniques to ensure that participants or samples are assigned to groups without bias. Blinding techniques, such as single-blind or double-blind procedures, can be used to prevent bias in data collection and analysis (see the sketch after this list).
  • Ensure that your sample is representative of the target population by using random or stratified sampling methods . Avoid selecting participants based on convenience, as it can introduce selection bias.
  • Consider using random invitations or incentives to encourage a diverse range of participants.
  • Clearly define and document the methods and procedures used for data collection to ensure consistency. This includes using standardised measurement tools, following specific protocols, and training research assistants to minimise variability and observer bias.
  • Researchers can unintentionally introduce bias through preconceived notions, beliefs, or expectations. Be conscious of your biases and regularly reflect on how they influence your research process and interpretation of results.
  • Relying on a single source can introduce bias. Triangulate your findings by using multiple methods ( quantitative and qualitative ) and collecting data from diverse sources to ensure a more comprehensive and balanced perspective.
  • Use appropriate statistical techniques and avoid cherry-picking results that support your hypothesis. Be transparent about the limitations and uncertainties in your findings.
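To make the randomisation and blinding point concrete, here is a minimal Python sketch, assuming a simple two-arm study. The function name, the neutral arm codes ("A"/"B"), and the idea of a separately held allocation key are illustrative assumptions, not a standard API:

```python
import random

def randomise_participants(participant_ids, seed=42):
    """Randomly assign participants to two arms and return both an
    unblinded allocation key and a blinded view with neutral codes."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    allocation = {pid: "treatment" for pid in ids[:half]}
    allocation.update({pid: "control" for pid in ids[half:]})
    # Blinded view: arms are renamed to neutral codes so data collectors
    # cannot tell which group a participant belongs to.
    codes = {"treatment": "A", "control": "B"}
    blinded = {pid: codes[arm] for pid, arm in allocation.items()}
    return allocation, blinded

unblinded_key, blinded_view = randomise_participants(range(1, 21))
print(blinded_view)  # what the data collectors see
```

In practice, the unblinded key would be held by someone outside the data-collection team until the analysis is locked.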

Frequently Asked Questions

What is bias in research?

Bias in research refers to systematic errors or preferences that can distort the results or conclusions of a study, leading to inaccuracies or unfairness due to factors such as sampling, measurement, or interpretation.

What causes bias in research?

Bias in research can be caused by various factors, such as the selection of participants, flawed study design, inadequate sampling methods, researcher's own beliefs or preferences, funding sources, publication bias, or the omission or manipulation of data.

How to avoid bias in research?

To avoid research bias, use random and representative sampling and blinding techniques, pre-register your hypotheses, conduct rigorous peer review, disclose conflicts of interest, and promote transparency in data collection and analysis.

How to address bias in research?

You can critically examine your biases, use diverse and inclusive samples, employ appropriate statistical methods, conduct robust sensitivity analyses, encourage replication studies, and engage in open dialogue about potential biases in your findings.


Turning a light on our implicit biases

Brett Milano, Harvard Correspondent

Social psychologist details research at University-wide faculty seminar

Few people would readily admit that they’re biased when it comes to race, gender, age, class, or nationality. But virtually all of us have such biases, even if we aren’t consciously aware of them, according to Mahzarin Banaji, Cabot Professor of Social Ethics in the Department of Psychology, who studies implicit biases. The trick is figuring out what they are so that we can interfere with their influence on our behavior.

Banaji was the featured speaker at an online seminar Tuesday, “Blindspot: Hidden Biases of Good People,” which was also the title of Banaji’s 2013 book, written with Anthony Greenwald. The presentation was part of Harvard’s first-ever University-wide faculty seminar.

“Precipitated in part by the national reckoning over race, in the wake of George Floyd, Breonna Taylor and others, the phrase ‘implicit bias’ has almost become a household word,” said moderator Judith Singer, Harvard’s senior vice provost for faculty development and diversity. Owing to the high interest on campus, Banaji was slated to present her talk on three different occasions, with the final one at 9 a.m. Thursday.

Banaji opened on Tuesday by recounting the “implicit association” experiments she had done at Yale and at Harvard. The assumptions underlying the research on implicit bias derive from well-established theories of learning and memory and the empirical results are derived from tasks that have their roots in experimental psychology and neuroscience. Banaji’s first experiments found, not surprisingly, that New Englanders associated good things with the Red Sox and bad things with the Yankees.

She then went further by replacing the sports teams with gay and straight, thin and fat, and Black and white. The responses were sometimes surprising: Shown a group of white and Asian faces, a test group at Yale associated the former more with American symbols though all the images were of U.S. citizens. In a further study, the faces of American-born celebrities of Asian descent were associated as less American than those of white celebrities who were in fact European. “This shows how discrepant our implicit bias is from even factual information,” she said.

How can an institution that is almost 400 years old not reveal a history of biases? Banaji asked, citing President Charles Eliot’s words on Dexter Gate (“Depart to serve better thy country and thy kind”) and asking the audience to think about what he may have meant by the last two words.

She cited Harvard’s current admission strategy of seeking geographic and economic diversity as examples of clear progress — if, as she said, “we are truly interested in bringing the best to Harvard.” She added, “We take these actions consciously, not because they are easy but  because they are in our interest and in the interest of society.”

Moving beyond racial issues, Banaji suggested that we sometimes see only what we believe we should see. To illustrate she showed a video clip of a basketball game and asked the audience to count the number of passes between players. Then the psychologist pointed out that something else had occurred in the video — a woman with an umbrella had walked through — but most watchers failed to register it. “You watch the video with a set of expectations, one of which is that a woman with an umbrella will not walk through a basketball game. When the data contradicts an expectation, the data doesn’t always win.”

Expectations, based on experience, may create associations such as equating “Valley Girl uptalk” with “not too bright.” But when a quirky way of speaking spreads to a large number of young people from certain generations, it stops being a useful guide. And yet, Banaji said, she has caught herself dismissing a great idea presented in uptalk. She stressed that the appropriate course of action is not to ask the person to change the way she speaks, but rather for her and other decision makers to recognize that using language and accents to judge ideas is something people do at their own peril.

Banaji closed the talk with a personal story that showed how subtler biases work: She’d once turned down an interview because she had issues with the magazine for which the journalist worked.

The writer accepted this and mentioned she’d been at Yale when Banaji taught there. The professor then surprised herself by agreeing to the interview based on this fragment of shared history that ought not to have influenced her. She urged her colleagues to examine seemingly positive actions, such as whom we choose to help, that can quietly perpetuate the status quo.

“You and I don’t discriminate the way our ancestors did,” she said. “We don’t go around hurting people who are not members of our own group. We do it in a very civilized way: We discriminate by who we help. The question we should be asking is, ‘Where is my help landing? Is it landing on the most deserved, or just on the one I shared a ZIP code with for four years?’”

To subscribe to short educational modules that help to combat implicit biases, visit outsmartinghumanminds.org .


Bias in research studies

Affiliation: Harvard Vanguard Medical Associates, Boston, Mass., USA. PMID: 16505391. DOI: 10.1148/radiol.2383041109

Bias is a form of systematic error that can affect scientific investigations and distort the measurement process. A biased study loses validity in relation to the degree of the bias. While some study designs are more prone to bias, its presence is universal. It is difficult or even impossible to completely eliminate bias. In the process of attempting to do so, new bias may be introduced or a study may be rendered less generalizable. Therefore, the goals are to minimize bias and for both investigators and readers to comprehend its residual effects, limiting misinterpretation and misuse of data. Numerous forms of bias have been described, and the terminology can be confusing, overlapping, and specific to a medical specialty. Much of the terminology is drawn from the epidemiology literature and may not be common parlance for radiologists. In this review, various types of bias are discussed, with emphasis on the radiology literature, and common study designs in which bias occurs are presented.

Copyright RSNA, 2006.


Cancer is a Disease of Aging, but Studies of Older Adults Sorely Lacking


A systematic review of the current body of research shows that investigators have inadequately addressed the intersection of aging, health disparities, and cancer outcomes among older adults. This is the conclusion of a new paper published in the Journal of the American Geriatrics Society, led by Nikesha Gilmore, PhD, a member of Wilmot Cancer Institute.

As the population of survivors of cancer 65 and older will likely double in size during the next two decades, the review reveals an urgent need for research to address biases impacting cancer outcomes in older people.

A lack of studies focused on disparities, as well as policies and targeted interventions to improve health equity, “perpetuates cancer inequities and leaves the cancer care system ill-equipped to address the unique needs of the rapidly growing and increasingly diverse older adult cancer population,” the team concluded.

Promoting and conducting this type of research is a central theme at Wilmot: The 27-county Rochester region from which the cancer center draws patients has a high percentage (18%) of people 65 and older, a rate that is higher than state and national averages. The region also has a higher cancer incidence rate than in New York state and the nation.

Gilmore, an assistant professor of Surgery at the University of Rochester Medical Center and member of Wilmot’s Cancer Prevention and Control (CPC) research program , is lead co-author of the report with Shakira J. Grant MBBS, of the University of North Carolina at Chapel Hill. The review team also included Gilmore’s mentor and senior co-author of the paper Supriya Mohile, MD , and members of the national Cancer and Aging Research Group. The scoping review included articles published between 2016 and 2023; Nancy Lundebjerg, CEO of the American Geriatrics Society, lauded the work.

Internally at URMC, Gilmore is also involved extensively in efforts to promote diversity and train the next generation to identify key areas of future investigation. For example, she launched an immersive  student enrichment program called EmREACh , in collaboration with a handful of peers at Wilmot and the CPC. The goal is to remove barriers for underrepresented undergraduate students who are interested in science and medicine by pairing them with mentors, teaching them how to write manuscripts, and introducing them to clinical research and professional development opportunities.



Types of Bias in Research | Definition & Examples

Research bias results from any deviation from the truth, causing distorted results and wrong conclusions. Bias can occur at any phase of your research, including during data collection , data analysis , interpretation, or publication. Research bias can occur in both qualitative and quantitative research .

Understanding research bias is important for several reasons.

  • Bias exists in all research, across research designs , and is difficult to eliminate.
  • Bias can occur at any stage of the research process.
  • Bias impacts the validity and reliability of your findings, leading to misinterpretation of data.

It is almost impossible to conduct a study without some degree of research bias. It’s crucial for you to be aware of the potential types of bias, so you can minimise them.

For example, consider a weight-loss program in which participants begin to drop out. Participants who become disillusioned due to not losing weight may drop out, while those who succeed in losing weight are more likely to continue. This in turn may bias the findings towards more favorable results.

Table of contents

  • Actor–observer bias
  • Confirmation bias
  • Information bias
  • Interviewer bias
  • Publication bias
  • Researcher bias
  • Response bias
  • Selection bias
  • How to avoid bias in research
  • Other types of research bias
  • Frequently asked questions about research bias

Actor–observer bias occurs when you attribute the behaviour of others to internal factors, like skill or personality, but attribute your own behaviour to external or situational factors.

In other words, when you are the actor in a situation, you are more likely to link events to external factors, such as your surroundings or environment. However, when you are observing the behaviour of others, you are more likely to associate behaviour with their personality, nature, or temperament.

One interviewee recalls a morning when it was raining heavily. They were rushing to drop off their kids at school in order to get to work on time. As they were driving down the road, another car cut them off as they were trying to merge. They tell you how frustrated they felt and exclaim that the other driver must have been a very rude person.

At another point, the same interviewee recalls that they did something similar: accidentally cutting off another driver while trying to take the correct exit. However, this time, the interviewee claimed that they always drive very carefully, blaming their mistake on poor visibility due to the rain.

Confirmation bias is the tendency to seek out information in a way that supports our existing beliefs while also rejecting any information that contradicts those beliefs. Confirmation bias is often unintentional but still results in skewed results and poor decision-making.

Let’s say you grew up with a parent in the military. Chances are that you have a lot of complex emotions around overseas deployments. This can lead you to over-emphasise findings that ‘prove’ that your lived experience is the case for most families, neglecting other explanations and experiences.

Information bias , also called measurement bias, arises when key study variables are inaccurately measured or classified. Information bias occurs during the data collection step and is common in research studies that involve self-reporting and retrospective data collection. It can also result from poor interviewing techniques or differing levels of recall from participants.

The main types of information bias are:

  • Recall bias
  • Observer bias
  • Performance bias
  • Regression to the mean (RTM)

For example, suppose you are studying the link between smartphone use and physical symptoms. Over a period of four weeks, you ask students to keep a journal, noting how much time they spent on their smartphones along with any symptoms like muscle twitches, aches, or fatigue.

Recall bias is a type of information bias. It occurs when respondents are asked to recall events in the past and is common in studies that involve self-reporting.

As a rule of thumb, infrequent events (e.g., buying a house or a car) will be memorable for longer periods of time than routine events (e.g., daily use of public transportation). You can reduce recall bias by running a pilot survey and carefully testing recall periods. If possible, test both shorter and longer periods, checking for differences in recall.

For example, consider a case–control study investigating whether childhood diet is linked to a later diagnosis. Parents are asked to recall what their children ate, across two groups:

  • A group of children who have been diagnosed, called the case group
  • A group of children who have not been diagnosed, called the control group

Since the parents are being asked to recall what their children generally ate over a period of several years, there is high potential for recall bias in the case group.

The best way to reduce recall bias is by ensuring your control group will have similar levels of recall bias to your case group. Parents of children who have childhood cancer, which is a serious health problem, are likely to be quite concerned about what may have contributed to the cancer.

Thus, if asked by researchers, these parents are likely to think very hard about what their child ate or did not eat in their first years of life. Parents of children with other serious health problems (aside from cancer) are also likely to be quite concerned about any diet-related question that researchers ask about.

Observer bias is the tendency of observers (i.e., the researchers) to see what they expect or want to see, rather than what is actually occurring. Observer bias can affect the results in observational and experimental studies, where subjective judgement (such as assessing a medical image) or measurement (such as rounding blood pressure readings up or down) is part of the data collection process.

Observer bias leads to over- or underestimation of true values, which in turn compromises the validity of your findings. You can reduce observer bias by using double- and single-blinded research methods.

Based on discussions you had with other researchers before starting your observations, you are inclined to think that medical staff tend to simply call each other when they need specific patient details or have questions about treatments.

At the end of the observation period, you compare notes with your colleague. Your conclusion was that medical staff tend to favor phone calls when seeking information, while your colleague noted down that medical staff mostly rely on face-to-face discussions. Seeing that your expectations may have influenced your observations, you and your colleague decide to conduct interviews with medical staff to clarify the observed events.

Note: Observer bias and actor–observer bias are not the same thing.

Performance bias is unequal care between study groups. Performance bias occurs mainly in medical research experiments, if participants have knowledge of the planned intervention, therapy, or drug trial before it begins.

Studies about nutrition, exercise outcomes, or surgical interventions are very susceptible to this type of bias. It can be minimized by using blinding , which prevents participants and/or researchers from knowing who is in the control or treatment groups. If blinding is not possible, then using objective outcomes (such as hospital admission data) is the best approach.

When the subjects of an experimental study change or improve their behaviour because they are aware they are being studied, this is called the Hawthorne (or observer) effect . Similarly, the John Henry effect occurs when members of a control group are aware they are being compared to the experimental group. This causes them to alter their behaviour in an effort to compensate for their perceived disadvantage.

Regression to the mean (RTM) is a statistical phenomenon that refers to the fact that a variable that shows an extreme value on its first measurement will tend to be closer to the centre of its distribution on a second measurement.

Medical research is particularly sensitive to RTM. Here, interventions aimed at a group or a characteristic that is very different from the average (e.g., people with high blood pressure) will appear to be successful because of the regression to the mean. This can lead researchers to misinterpret results, describing a specific intervention as causal when the change in the extreme groups would have happened anyway.

For example, suppose you are evaluating an intervention for depression. In general, among people with depression, certain physical and mental characteristics have been observed to deviate from the population mean.

This could lead you to think that the intervention was effective when those treated showed improvement on measured post-treatment indicators, such as reduced severity of depressive episodes.

However, given that such characteristics deviate more from the population mean in people with depression than in people without depression, this improvement could be attributed to RTM.
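A small simulation can make RTM easier to see. The sketch below is illustrative only: it assumes each observed score is a stable trait plus transient noise, selects people with extreme first measurements, and shows their second measurements drifting back towards the mean with no intervention at all:

```python
import random

random.seed(0)
N = 10_000
trait = [random.gauss(0, 1) for _ in range(N)]    # stable component
first = [t + random.gauss(0, 1) for t in trait]   # measurement 1 = trait + noise
second = [t + random.gauss(0, 1) for t in trait]  # measurement 2 = trait + new noise

# Select the "extreme" group using the first measurement only.
extreme = [i for i in range(N) if first[i] > 2.0]
mean_first = sum(first[i] for i in extreme) / len(extreme)
mean_second = sum(second[i] for i in extreme) / len(extreme)

print(f"extreme group, first measurement:  {mean_first:.2f}")
print(f"extreme group, second measurement: {mean_second:.2f}  (closer to the mean)")
```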

Interviewer bias stems from the person conducting the research study. It can result from the way they ask questions or react to responses, but also from any aspect of their identity, such as their sex, ethnicity, social class, or perceived attractiveness.

Interviewer bias distorts responses, especially when the characteristics relate in some way to the research topic. Interviewer bias can also affect the interviewer’s ability to establish rapport with the interviewees, causing them to feel less comfortable giving their honest opinions about sensitive or personal topics.

For example, suppose you are interviewing participants about how they spend their free time:

Participant: ‘I like to solve puzzles, or sometimes do some gardening.’

You: ‘I love gardening, too!’

In this case, seeing your enthusiastic reaction could lead the participant to talk more about gardening.

Establishing trust between you and your interviewees is crucial in order to ensure that they feel comfortable opening up and revealing their true thoughts and feelings. At the same time, being overly empathetic can influence the responses of your interviewees, as seen above.

Publication bias occurs when the decision to publish research findings is based on their nature or the direction of their results. Studies reporting results that are perceived as positive, statistically significant , or favoring the study hypotheses are more likely to be published due to publication bias.

Publication bias is related to data dredging (also called p -hacking ), where statistical tests on a set of data are run until something statistically significant happens. As academic journals tend to prefer publishing statistically significant results, this can pressure researchers to only submit statistically significant results. P -hacking can also involve excluding participants or stopping data collection once a p value of 0.05 is reached. However, this leads to false positive results and an overrepresentation of positive results in published academic literature.
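The effect of this kind of optional stopping is easy to demonstrate. The following sketch (assuming SciPy is available for the t-test; the function and parameters are illustrative) repeatedly samples two identical groups, testing after every batch and stopping at the first p < 0.05. Far more than 5% of runs end up "significant" even though there is no real effect:

```python
import random
from scipy import stats  # assumes SciPy is installed

random.seed(1)

def optional_stopping(max_batches=20, batch_size=10):
    """Test two identical groups after every batch of data and stop as
    soon as p < 0.05 -- a simple form of p-hacking."""
    a, b = [], []
    for _ in range(max_batches):
        a += [random.gauss(0, 1) for _ in range(batch_size)]
        b += [random.gauss(0, 1) for _ in range(batch_size)]
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True  # "significant" despite no real difference
    return False

runs = 1_000
false_positives = sum(optional_stopping() for _ in range(runs))
print(f"{100 * false_positives / runs:.0f}% of runs reached 'significance'")
```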

Researcher bias occurs when the researcher’s beliefs or expectations influence the research design or data collection process. Researcher bias can be deliberate (such as claiming that an intervention worked even if it didn’t) or unconscious (such as letting personal feelings, stereotypes, or assumptions influence research questions ).

The unconscious form of researcher bias is associated with the Pygmalion (or Rosenthal) effect, where the researcher’s high expectations (e.g., that patients assigned to a treatment group will succeed) lead to better performance and better outcomes.

Researcher bias is also sometimes called experimenter bias, but it applies to all types of investigative projects, rather than only to experimental designs .

For example, compare a neutrally phrased interview question with a leading one:

  • Good question: What are your views on alcohol consumption among your peers?
  • Bad question: Do you think it’s okay for young people to drink so much?

Response bias is a general term used to describe a number of different situations where respondents tend to provide inaccurate or false answers to self-report questions, such as those asked on surveys or in structured interviews .

This happens because when people are asked a question (e.g., during an interview ), they integrate multiple sources of information to generate their responses. Because of that, any aspect of a research study may potentially bias a respondent. Examples include the phrasing of questions in surveys, how participants perceive the researcher, or the desire of the participant to please the researcher and to provide socially desirable responses.

Response bias also occurs in experimental medical research. When outcomes are based on patients’ reports, a placebo effect can occur. Here, patients report an improvement despite having received a placebo, not an active medical treatment.

While interviewing a student, you ask them:

‘Do you think it’s okay to cheat on an exam?’

Regardless of their actual behaviour, the student is likely to give the answer they believe is expected of them.

Common types of response bias are:

  • Acquiescence bias
  • Demand characteristics
  • Social desirability bias
  • Courtesy bias
  • Question-order bias
  • Extreme responding

Acquiescence bias is the tendency of respondents to agree with a statement when faced with binary response options like ‘agree/disagree’, ‘yes/no’, or ‘true/false’. Acquiescence is sometimes referred to as ‘yea-saying’.

This type of bias occurs either due to the participant’s personality (i.e., some people are more likely to agree with statements than disagree, regardless of their content) or because participants perceive the researcher as an expert and are more inclined to agree with the statements presented to them.

Q: Are you a social person?

  • Yes
  • No

People who are inclined to agree with statements presented to them are at risk of selecting the first option, even if it isn’t fully supported by their lived experiences.

In order to control for acquiescence, consider tweaking your phrasing to encourage respondents to make a choice truly based on their preferences. Here’s an example:

Q: What would you prefer?

  • A quiet night in
  • A night out with friends

Demand characteristics are cues that could reveal the research agenda to participants, risking a change in their behaviours or views. Ensuring that participants are not aware of the research goals is the best way to avoid this type of bias.

For example, suppose a researcher interviews patients several times after an operation, asking about their pain levels. On each occasion, patients reported their pain as being less than prior to the operation. While at face value this seems to suggest that the operation does indeed lead to less pain, there is a demand characteristic at play: during the interviews, the researcher would unconsciously frown whenever patients reported more post-op pain. This increased the risk of patients figuring out that the researcher was hoping that the operation would have an advantageous effect.

Social desirability bias is the tendency of participants to give responses that they believe will be viewed favorably by the researcher or other participants. It often affects studies that focus on sensitive topics, such as alcohol consumption or sexual behaviour.

You are conducting face-to-face semi-structured interviews with a number of employees from different departments. When asked whether they would be interested in a smoking cessation program, there was widespread enthusiasm for the idea.

Note that while social desirability and demand characteristics may sound similar, there is a key difference between them. Social desirability is about conforming to social norms, while demand characteristics revolve around the purpose of the research.

Courtesy bias stems from a reluctance to give negative feedback, so as to be polite to the person asking the question. Small-group interviewing where participants relate in some way to each other (e.g., a student, a teacher, and a dean) is especially prone to this type of bias.

Question-order bias

Question order bias occurs when the order in which interview questions are asked influences the way the respondent interprets and evaluates them. This occurs especially when previous questions provide context for subsequent questions.

When answering subsequent questions, respondents may orient their answers to previous questions (called a halo effect ), which can lead to systematic distortion of the responses.

Extreme responding is the tendency of a respondent to answer in the extreme, choosing the lowest or highest response available, even if that is not their true opinion. Extreme responding is common in surveys using Likert scales , and it distorts people’s true attitudes and opinions.

Disposition towards the survey can be a source of extreme responding, as well as cultural components. For example, people coming from collectivist cultures tend to exhibit extreme responses in terms of agreement, while respondents indifferent to the questions asked may exhibit extreme responses in terms of disagreement.

Selection bias is a general term describing situations where bias is introduced into the research from factors affecting the study population.

Common types of selection bias are:

  • Sampling or ascertainment bias
  • Attrition bias
  • Volunteer or self-selection bias
  • Survivorship bias
  • Nonresponse bias
  • Undercoverage bias

Sampling bias occurs when your sample (the individuals, groups, or data you obtain for your research) is selected in a way that is not representative of the population you are analyzing. Sampling bias threatens the external validity of your findings and influences the generalizability of your results.

The easiest way to prevent sampling bias is to use a probability sampling method . This way, each member of the population you are studying has an equal chance of being included in your sample.

Sampling bias is often referred to as ascertainment bias in the medical field.
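As a concrete illustration, here is a minimal Python sketch of the two probability sampling approaches mentioned above. The staff strata and sample sizes are made up for the example:

```python
import random

def simple_random_sample(population, n, seed=0):
    """Probability sampling: every member has an equal chance of selection."""
    return random.Random(seed).sample(list(population), n)

def stratified_sample(strata, n_per_stratum, seed=0):
    """Sample within each stratum so every relevant subgroup is represented."""
    rng = random.Random(seed)
    chosen = []
    for name, members in strata.items():
        chosen += rng.sample(members, min(n_per_stratum, len(members)))
    return chosen

staff = {
    "engineering": [f"eng_{i}" for i in range(120)],
    "sales": [f"sales_{i}" for i in range(60)],
    "support": [f"support_{i}" for i in range(20)],
}
print(stratified_sample(staff, n_per_stratum=10))
```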

Attrition bias occurs when participants who drop out of a study systematically differ from those who remain in the study. Attrition bias is especially problematic in randomized controlled trials for medical research because participants who do not like the experience or have unwanted side effects can drop out and affect your results.

You can minimize attrition bias by offering incentives for participants to complete the study (e.g., a gift card if they successfully attend every session). It’s also a good practice to recruit more participants than you need, or minimize the number of follow-up sessions or questions.

For example, suppose you are evaluating a multi-session program. You provide a treatment group with weekly one-hour sessions over a two-month period, while a control group attends sessions on an unrelated topic. You complete five waves of data collection to compare outcomes: a pretest survey, three surveys during the program, and a posttest survey. Participants who find the sessions unhelpful may stop attending before the posttest, which would bias the comparison.

Volunteer bias (also called self-selection bias ) occurs when individuals who volunteer for a study have particular characteristics that matter for the purposes of the study.

Volunteer bias leads to biased data, as the respondents who choose to participate will not represent your entire target population. You can avoid this type of bias by using random assignment – i.e., placing participants in a control group or a treatment group after they have volunteered to participate in the study.
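A quick simulation can show how volunteer bias distorts estimates. The sketch below assumes a hypothetical link between the outcome (a health score) and the probability of volunteering; the numbers are illustrative only:

```python
import random

random.seed(2)

# Hypothetical population: the outcome of interest (a health score)
# also influences the probability of volunteering.
population = []
for _ in range(100_000):
    health = random.gauss(50, 10)
    p_volunteer = min(1.0, max(0.0, (health - 30) / 40))  # healthier -> keener
    population.append((health, random.random() < p_volunteer))

overall = sum(h for h, _ in population) / len(population)
volunteers = [h for h, v in population if v]
volunteer_mean = sum(volunteers) / len(volunteers)

print(f"population mean health: {overall:.1f}")
print(f"volunteer mean health:  {volunteer_mean:.1f}  (overestimate)")
```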

Closely related to volunteer bias is nonresponse bias , which occurs when a research subject declines to participate in a particular study or drops out before the study’s completion.

For example, suppose you recruit volunteers for a health study at a hospital. Considering that the hospital is located in an affluent part of the city, volunteers are more likely to have a higher socioeconomic standing, higher education, and better nutrition than the general population.

Survivorship bias occurs when you do not evaluate your data set in its entirety: for example, by only analyzing the patients who survived a clinical trial.

This strongly increases the likelihood that you draw (incorrect) conclusions based upon those who have passed some sort of selection process – focusing on ‘survivors’ and forgetting those who went through a similar process and did not survive.

Note that ‘survival’ does not always mean that participants died! Rather, it signifies that participants did not successfully complete the intervention.

For example, aspiring entrepreneurs are often told about famous founders who dropped out of college to build successful companies. However, most college dropouts do not become billionaires. In fact, there are many more aspiring entrepreneurs who dropped out of college to start companies and failed than succeeded.
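A toy simulation makes the distortion obvious. The sketch below assumes a hypothetical cohort of ventures with widely varying outcomes and compares the average over the full cohort with the average over the "survivors" only:

```python
import random

random.seed(3)

# Hypothetical cohort of ventures: outcomes vary widely, many are negative.
outcomes = [random.gauss(0.0, 0.5) for _ in range(10_000)]
survivors = [r for r in outcomes if r > 0]  # only the ventures that "made it"

mean_all = sum(outcomes) / len(outcomes)
mean_survivors = sum(survivors) / len(survivors)

print(f"average outcome, full cohort:    {mean_all:+.2f}")
print(f"average outcome, survivors only: {mean_survivors:+.2f}  (inflated)")
```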

Nonresponse bias occurs when those who do not respond to a survey or research project are different from those who do in ways that are critical to the goals of the research. This is very common in survey research, when participants are unable or unwilling to participate due to factors like lack of the necessary skills, lack of time, or guilt or shame related to the topic.

You can mitigate nonresponse bias by offering the survey in different formats (e.g., an online survey, but also a paper version sent via post), ensuring confidentiality, and sending reminders to complete the survey.

For example, suppose you survey the residents of a neighbourhood in person and receive few responses. You notice that your surveys were conducted during business hours, when the working-age residents were less likely to be home.

Undercoverage bias occurs when you only sample from a subset of the population you are interested in. Online surveys can be particularly susceptible to undercoverage bias. Despite being more cost-effective than other methods, they can introduce undercoverage bias as a result of excluding people who do not use the internet.

How to avoid bias in research

While very difficult to eliminate entirely, research bias can be mitigated through proper study design and implementation. Here are some tips to keep in mind as you get started.

  • Clearly explain in your methodology section how your research design will help you meet the research objectives and why this is the most appropriate research design.
  • In quantitative studies , make sure that you use probability sampling to select the participants. If you’re running an experiment, make sure you use random assignment to assign your control and treatment groups.
  • Account for participants who withdraw or are lost to follow-up during the study. If they are withdrawing for a particular reason, it could bias your results. This applies especially to longer-term or longitudinal studies .
  • Use triangulation to enhance the validity and credibility of your findings.
  • Phrase your survey or interview questions in a neutral, non-judgemental tone. Be very careful that your questions do not steer your participants in any particular direction.
  • Consider using a reflexive journal. Here, you can log the details of each interview , paying special attention to any influence you may have had on participants. You can include these in your final analysis.

Other types of research bias include:

  • Cognitive bias
  • Baader–Meinhof phenomenon
  • Availability heuristic
  • Halo effect
  • Framing effect
  • Sampling bias
  • Ascertainment bias
  • Self-selection bias
  • Hawthorne effect
  • Omitted variable bias
  • Pygmalion effect
  • Placebo effect

Frequently asked questions about research bias

Bias in research affects the validity and reliability of your findings, leading to false conclusions and a misinterpretation of the truth. This can have serious implications in areas like medical research where, for example, a new form of treatment may be evaluated.

Observer bias occurs when the researcher’s assumptions, views, or preconceptions influence what they see and record in a study, while actor–observer bias refers to situations where respondents attribute internal factors (e.g., bad character) to justify other’s behaviour and external factors (difficult circumstances) to justify the same behaviour in themselves.

Response bias is a general term used to describe a number of different conditions or factors that cue respondents to provide inaccurate or false answers during surveys or interviews. These factors range from the interviewer’s perceived social position or appearance to the phrasing of questions in surveys.

Nonresponse bias occurs when the people who complete a survey are different from those who did not, in ways that are relevant to the research topic. Nonresponse can happen either because people are not willing or not able to participate.


Which scientists get mentioned in the news? Mostly ones with Anglo names, says study

When the media covers scientific research, not all scientists are equally likely to be mentioned. A new study finds scientists with Asian or African names were 15% less likely to be named in a story. (Image: shironosov/Getty Images)

When one Chinese national recently petitioned the U.S. Citizenship and Immigration Services to become a permanent resident, he thought his chances were pretty good. As an accomplished biologist, he figured that news articles in top media outlets, including The New York Times , covering his research would demonstrate his "extraordinary ability" in the sciences, as called for by the EB-1A visa .

But when the immigration officers rejected his petition, they noted that his name did not appear anywhere in the news articles. News coverage of a paper he co-authored did not directly demonstrate his major contribution to the work.

As this biologist's close friend, I felt bad for him because I knew how much he had dedicated to the project. He even started the idea as one of his Ph.D. dissertation chapters. But as a scientist who studies topics related to scientific innovation , I understand the immigration officers' perspective: Research is increasingly done through teamwork , so it's hard to know individual contributions if a news article reports only the study findings.

This anecdote made me and my colleagues Misha Teplitskiy and David Jurgens curious about what affects journalists' decisions about which researchers to feature in their news stories.

There's a lot at stake for a scientist whose name is or isn't mentioned in journalistic coverage of their work. News media play a key role in disseminating new scientific findings to the public . The coverage of a particular study brings prestige to its research team and their institutions. The depth and quality of coverage then shapes public perception of who is doing good science . In some cases, as my friend's story suggests, the coverage can affect individual careers.

Do scientists' social identities, such as ethnicity or race, play a role in who gets named?

This question is not straightforward to answer. On the one hand, racial bias may exist, given the profound underrepresentation of minorities in U.S. mainstream media . On the other, science journalism is known for its high standard of objective reporting . We decided to investigate this question in a systematic fashion using large-scale observational data.

The least coverage? Chinese and African names

My colleagues and I analyzed 223,587 news stories from 288 U.S. media outlets, sourced from Altmetric.com, a website that monitors online posts about research papers . The news stories, published from 2011-2019, covered 100,486 scientific papers. For each paper, we focused on authors with the highest chance of being mentioned: the first author, last author and other designated corresponding authors. We calculated how often the authors were mentioned in the news articles reporting their research.

We used an algorithm to infer perceived ethnicity from authors' names . We figured that journalists may rely on such cues in the absence of scientists' self-reported information. We considered authors with Anglo names – like John Brown or Emily Taylor – as the majority group and then compared the average mention rates across nine broad ethnic groups.

Our methodology does not distinguish Black from white names because many African Americans have Anglo names, such as Michael Jackson. But since we focus on perceived identity across nine different groups based on names, the study's design is still meaningful.

We found that for the subset of first, last and corresponding authors on research papers, the overall chance of being credited by name in a news story was 40%. Authors with minority ethnicity names, however, were significantly less likely to be mentioned compared with authors with Anglo names. The disparity was most pronounced for authors with East Asian and African names; they were on average mentioned or quoted about 15% less in U.S. science media relative to those with Anglo names.

This association is consistent even after accounting for factors such as geographical location, corresponding author status, authorship position, affiliation rank, author prestige, research topics, journal impact and story length.

And the disparity held across different types of outlets, including publishers of press releases, general interest news and those with content focused on science and technology.

Pragmatic factors and language choices

Our results don't directly imply media bias. So what's going on?

First and foremost, the underrepresentation of scientists with East Asian and African names may be due to pragmatic challenges faced by U.S.-based journalists in interviewing them. Factors like time zone differences for researchers based overseas and actual or perceived English fluency could be at play as a journalist works under deadline to produce the story.

We isolated these factors by focusing on researchers affiliated with American institutions. Among U.S.-based researchers, pragmatic difficulties should be minimized because they're in the same geographic region as the journalists and they're likely to be proficient in English, at least in writing. In addition, these scientists would presumably be equally likely to respond to journalists' interview requests, given that media attention is increasingly valued by U.S. institutions .

Even when we looked just at U.S. institutions, we found significant disparities in mentions and quotations for non-Anglo-named authors, albeit slightly reduced. In particular, East Asian- and African-named authors experience a 4 to 5 percentage-point drop in mention rates compared with their Anglo-named counterparts. This result suggests that while pragmatic considerations can explain some disparities, they don't account for all of them.

We found that journalists were also more likely to substitute institutional affiliations for scientists with African and East Asian names – for instance, writing about "researchers from the University of Michigan." This institution-substitution effect underscores a potential bias in media representation, where scholars with minority ethnicity names may be perceived as less authoritative or deserving of formal recognition.

Why equity matters in the discourse on science

Part of the depth of science news coverage depends on how thoroughly and accurately researchers are portrayed in stories, including whether scientists are mentioned by name and the extent to which their contributions are highlighted via quotes. As science becomes increasingly globalized, with English as its primary language, our study highlights the importance of equitable representation in shaping public discourse and fostering diversity in the scientific community.

We suspect that disparities are even larger at an earlier point in science dissemination, when journalists are selecting which research papers to report. Understanding these disparities is complicated by decades or even centuries of bias ingrained in the whole science production pipeline, including whose research gets funded , who gets to publish in top journals and who is represented in the scientific workforce itself .

Journalists are picking from a later stage of a process that has a number of inequities built in. Thus, addressing disparities in scientists' media representation is only one way to foster inclusivity and equality in science. But it's a step toward sharing scientific knowledge with the public in a more equitable way.

Hao Peng is a postdoctoral fellow at the Kellogg School of Management, Northwestern University.

This story comes from The Conversation, a nonprofit, independent news organization dedicated to unlocking the knowledge of experts for the public good.



Artificial intelligence can help people feel heard, new USC study finds

New research from the USC Marshall School of Business reveals AI-generated responses can make humans “feel heard” but an underlying bias toward AI devalues its effectiveness.


A new study published in the Proceedings of the National Academy of Sciences (PNAS) found AI-generated messages made recipients feel more “heard” than messages generated by untrained humans, and that AI was better at detecting emotions than these individuals. However, recipients reported feeling less heard when they learned a message came from AI.

As AI becomes more ubiquitous in daily life, understanding its potential and limitations in meeting human psychological needs becomes more pertinent. With dwindling empathetic connections in a fast-paced world, many are finding their human needs for feeling heard and validated increasingly unmet.

The research conducted by Yidan Yin, Nan Jia, and Cheryl J. Wakslak from the USC Marshall School of Business addresses a pivotal question: Can AI, which lacks human consciousness and emotional experience, succeed in making people feel heard and understood?

“In the context of an increasing loneliness epidemic, a large part of our motivation was to see whether AI can actually help people feel heard,” said the paper’s first author, Yidan Yin , a postdoctoral researcher at the Lloyd Greif Center for Entrepreneurial Studies at USC Marshall.

The team’s findings highlight not only the potential of AI to augment human capacity for understanding and communication, but also raise important conceptual questions about the meaning of being heard and practical questions about how best to leverage AI’s strengths to support greater human flourishing.

In an experiment and subsequent follow-up study, “we identified that while AI demonstrates enhanced potential compared to non-trained human responders to provide emotional support, the devaluation of AI responses poses a key challenge for effectively deploying AI’s capabilities,” said Nan Jia , associate professor of strategic management.

The USC Marshall research team investigated people’s feelings of being heard and other related perceptions and emotions after receiving a response from either AI or a human. The survey varied both the actual source of the message and the ostensible source of the message: Participants received messages that were actually generated by an AI or by a human responder, with the information that it was either AI or human generated.

“What we found was that both the actual source of the message and the presumed source of the message played a role,” said Cheryl Wakslak , associate professor of management and organization at USC Marshall. “People felt more heard when they received an AI than a human message, but when they believed a message came from AI this made them feel less heard.”

Yin noted that their research “basically finds a bias against AI. It’s useful, but they don’t like it.”

Perceptions about AI are bound to change, added Wakslak, “Of course these effects may change over time, but one of the interesting things we found was that the two effects we observed were fairly similar in magnitude. Whereas there is a positive effect of getting an AI message, there is a similar degree of response bias when a message is identified as coming from AI, leading the two effects to essentially cancel each other out.”

Individuals further reported an “uncanny valley” response — a sense of unease when made aware that the empathetic response originated from AI, highlighting the complex emotional landscape navigated by AI-human interactions.

The research survey also asked participants about their general openness to AI, which moderated some of the effects, explained Wakslak.

“People who feel more positively toward AI don’t exhibit the response penalty as much and that’s intriguing because over time, will people gain more positive attitudes toward AI?” she posed. “That remains to be seen … but it will be interesting to see how this plays out as people’s familiarity and experience with AI grows.”

AI offers better emotional support

The study highlighted important nuances. Responses generated by AI were associated with increased hope and lessened distress, indicating a positive emotional effect on recipients. AI also demonstrated a more disciplined approach than humans in offering emotional support and refrained from making overwhelming practical suggestions.

Yin explained that, “Ironically, AI was better at using emotional support strategies that have been shown in prior research to be empathetic and validating. Humans may potentially learn from AI because a lot of times when our significant others are complaining about something, we want to provide that validation, but we don’t know how to effectively do so.”

Instead of AI replacing humans, the research points to different advantages of AI and human responses. The advanced technology could become a valuable tool, empowering humans to use AI to help them better understand one another and learn how to respond in ways that provide emotional support and demonstrate understanding and validation.

Overall, the paper’s findings have important implications for the integration of AI into more social contexts. Leveraging AI’s capabilities might provide an inexpensive scalable solution for social support, especially for those who might otherwise lack access to individuals who can provide them with such support. However, as the research team notes, their findings suggest that it is critical to give careful consideration to how AI is presented and perceived in order to maximize its benefits and reduce any negative responses.




New Research Reveals Resumes With Black Names Experience Bias In The Hiring Process


A groundbreaking new study by economists Kline, Rose, and Walters reveals what many have known for decades: name bias is still a pervasive issue in the hiring process. The researchers sent out a series of identical resumes to analyze whether race and gender impacted callback rates of job applications at 97 U.S. employers. The study analyzed distinctly “Black” names and “white” names as well as male and female names, among other demographic differences. The results revealed that white female names received the most callbacks, followed by white male names. Black male names and Black female names were called back the least, respectively.

When assessing gender differences, the researchers did not find prominent differences in callback rates between male and female applicants overall—racial differences were more pronounced. Results did vary by industry and firm, with the automobile industry having more pronounced racial differences in callback rates than other industries. The research also revealed that the smallest estimated racial bias was within food stores. The study mirrored many of the same findings of previous studies on name bias and discrimination in hiring.

In a study from over two decades ago, researchers Bertrand and Mullainathan analyzed the callback rates for identical resumes sent out with either Black names or white names in Boston and Chicago. The results of their study provided evidence of pervasive racial discrimination against Black-sounding names during the hiring process. Name bias isn’t just a United States phenomenon: a Swedish study from 2007 found that job applicants with Swedish-sounding names received more callbacks than applicants with Arabic- or African-sounding names across different occupations.

It is imperative for hiring professionals not only to be aware of these trends but to actively integrate safeguards that mitigate bias in the hiring process. Currently, there is a movement to defund and dismantle DEI, with DEI detractors getting louder each day. Anti-DEI propaganda propels the myth that DEI is used primarily as a means to grant unearned privileges to marginalized groups. The aforementioned studies provide evidence to the contrary: despite the disdain for DEI, biases in the workplace persist, and DEI can be an effective tool to address these disparities.

What specific DEI strategies can be utilized to address name bias in hiring, particularly when it comes to Black-sounding names, which experience some of the most severe penalties during the hiring process? The first step is awareness. Hiring professionals must be educated about name bias so they can create systems to overcome it. Offering DEI training and education specifically for hiring professionals, and continuing to share research, articles, and anecdotes about name bias in the hiring process, can be helpful. We should be less worried about completely eradicating our unconscious biases (which is not realistic) and instead focus on developing systems that make our hiring processes more equitable.


There is some evidence to suggest that, in some circumstances, anonymizing resumes by removing demographic information such as a name, college graduation year, or hobbies can address some of these initial biases in hiring. Organizations and institutions should consider integrating this strategy into their hiring process, especially within industries more susceptible to racial and other types of bias. In addition to anonymizing resumes, it is vital to ensure that there is an objective process for evaluating job candidates. Utilizing a scorecard or rubric, and interrogating and operationally defining your criteria for job-candidate “culture fit”, can be effective methods for baking equity into the hiring process.

Janice Gassam Asare



Protecting against researcher bias in secondary data analysis: challenges and potential solutions

Jessie R. Baldwin

1 Department of Clinical, Educational and Health Psychology, Division of Psychology and Language Sciences, University College London, London, WC1H 0AP UK

2 Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, UK

Jean-Baptiste Pingault

Tabea Schoeler, Hannah M. Sallis

3 MRC Integrative Epidemiology Unit at the University of Bristol, Bristol Medical School, University of Bristol, Bristol, UK

4 School of Psychological Science, University of Bristol, Bristol, UK

5 Centre for Academic Mental Health, Population Health Sciences, University of Bristol, Bristol, UK

Marcus R. Munafò

6 NIHR Biomedical Research Centre, University Hospitals Bristol NHS Foundation Trust and University of Bristol, Bristol, UK

Abstract

Analysis of secondary data sources (such as cohort studies, survey data, and administrative records) has the potential to provide answers to science and society’s most pressing questions. However, researcher biases can lead to questionable research practices in secondary data analysis, which can distort the evidence base. While pre-registration can help to protect against researcher biases, it presents challenges for secondary data analysis. In this article, we describe these challenges and propose novel solutions and alternative approaches. Proposed solutions include approaches to (1) address bias linked to prior knowledge of the data, (2) enable pre-registration of non-hypothesis-driven research, (3) help ensure that pre-registered analyses will be appropriate for the data, and (4) address difficulties arising from reduced analytic flexibility in pre-registration. For each solution, we provide guidance on implementation for researchers and data guardians. The adoption of these practices can help to protect against researcher bias in secondary data analysis and improve the robustness of research based on existing data.

Introduction

Secondary data analysis has the potential to provide answers to science and society’s most pressing questions. An abundance of secondary data exists—cohort studies, surveys, administrative data (e.g., health records, crime records, census data), financial data, and environmental data—that can be analysed by researchers in academia, industry, third-sector organisations, and the government. However, secondary data analysis is vulnerable to questionable research practices (QRPs) which can distort the evidence base. These QRPs include p-hacking (i.e., exploiting analytic flexibility to obtain statistically significant results), selective reporting of statistically significant, novel, or “clean” results, and hypothesising after the results are known (HARK-ing; i.e., presenting unexpected results as if they were predicted) [ 1 ]. Indeed, findings obtained from secondary data analysis are not always replicable [ 2 , 3 ], reproducible [ 4 ], or robust to analytic choices [ 5 , 6 ]. Preventing QRPs in research based on secondary data is therefore critical for scientific and societal progress.

A primary cause of QRPs is common cognitive biases that affect the analysis, reporting, and interpretation of data [ 7 – 10 ]. For example, apophenia (the tendency to see patterns in random data) and confirmation bias (the tendency to focus on evidence that is consistent with one’s beliefs) can lead to particular analytical choices and selective reporting of “publishable” results [ 11 – 13 ]. In addition, hindsight bias (the tendency to view past events as predictable) can lead to HARK-ing, so that observed results appear more compelling.

The scope for these biases to distort research outputs from secondary data analysis is perhaps particularly acute, for two reasons. First, researchers now have increasing access to high-dimensional datasets that offer a multitude of ways to analyse the same data [ 6 ]. Such analytic flexibility can lead to different conclusions depending on the analytical choices made [ 5 , 14 , 15 ]. Second, current incentive structures in science reward researchers for publishing statistically significant, novel, and/or surprising findings [ 16 ]. This combination of opportunity and incentive may lead researchers—consciously or unconsciously—to run multiple analyses and only report the most “publishable” findings.

One way to help protect against the effects of researcher bias is to pre-register research plans [ 17 , 18 ]. This can be achieved by pre-specifying the rationale, hypotheses, methods, and analysis plans, and submitting these to either a third-party registry (e.g., the Open Science Framework [OSF]; https://osf.io/ ), or a journal in the form of a Registered Report [ 19 ]. Because research plans and hypotheses are specified before the results are known, pre-registration reduces the potential for cognitive biases to lead to p-hacking, selective reporting, and HARK-ing [ 20 ]. While pre-registration is not necessarily a panacea for preventing QRPs (Table 1), meta-scientific evidence has found that pre-registered studies and Registered Reports are more likely to report null results [ 21 – 23 ], smaller effect sizes [ 24 ], and be replicated [ 25 ]. Pre-registration is increasingly being adopted in epidemiological research [ 26 , 27 ], and is even required for access to data from certain cohorts (e.g., the Twins Early Development Study [ 28 ]). However, pre-registration (and other open science practices; Table 2) can pose particular challenges to researchers conducting secondary data analysis [ 29 ], motivating the need for alternative approaches and solutions. Here we describe such challenges, before proposing potential solutions to protect against researcher bias in secondary data analysis (summarised in Fig. 1).

Table 1. Limitations in the use of pre-registration to address QRPs

Table 2. Challenges and potential solutions regarding sharing pre-existing data

Fig. 1. Challenges in pre-registering secondary data analysis and potential solutions (according to researcher motivations). Note: In the “Potential solution” column, blue boxes indicate solutions that are researcher-led; green boxes indicate solutions that should be facilitated by data guardians

Challenges of pre-registration for secondary data analysis

Prior knowledge of the data

Researchers conducting secondary data analysis commonly analyse data from the same dataset multiple times throughout their careers. However, prior knowledge of the data increases the risk of bias, as prior expectations about findings could motivate researchers to pursue certain analyses or questions. In the worst-case scenario, a researcher might perform multiple preliminary analyses, and only pursue those which lead to notable results (perhaps posting a pre-registration for these analyses, even though they are effectively post hoc). However, even if the researcher has not conducted specific analyses previously, they may be biased (either consciously or subconsciously) to pursue certain analyses after testing related questions with the same variables, or even after reading past studies on the dataset. As such, pre-registration cannot fully protect against researcher bias when researchers have previously accessed the data.

Research may not be hypothesis-driven

Pre-registration and Registered Reports are tailored towards hypothesis-driven, confirmatory research. For example, the OSF pre-registration template requires researchers to state “specific, concise, and testable hypotheses”, while Registered Reports do not permit purely exploratory research [ 30 ], although a new Exploratory Reports format now exists [ 31 ]. However, much research involving secondary data is not focused on hypothesis testing, but is exploratory, descriptive, or focused on estimation—in other words, examining the magnitude and robustness of an association as precisely as possible, rather than simply testing a point null. Furthermore, without a strong theoretical background, hypotheses will be arbitrary and could lead to unhelpful inferences [ 32 , 33 ], and so should be avoided in novel areas of research.

Pre-registered analyses are not appropriate for the data

With pre-registration, there is always a risk that the data will violate the assumptions of the pre-registered analyses [ 17 ]. For example, a researcher might pre-register a parametric test, only for the data to be non-normally distributed. However, in secondary data analysis, the extent to which the data shape the appropriate analysis can be considerable. First, longitudinal cohort studies are often subject to missing data and attrition. Approaches to deal with missing data (e.g., listwise deletion; multiple imputation) depend on the characteristics of missing data (e.g., the extent and patterns of missingness [ 34 ]), and so pre-specifying approaches to dealing with missingness may be difficult, or extremely complex. Second, certain analytical decisions depend on the nature of the observed data (e.g., the choice of covariates to include in a multiple regression might depend on the collinearity between the measures, or the degree of missingness of different measures that capture the same construct). Third, much secondary data (e.g., electronic health records and other administrative data) were never collected for research purposes, so can present several challenges that are impossible to predict in advance [ 35 ]. These issues can limit a researcher’s ability to pre-register a precise analytic plan prior to accessing secondary data.

Lack of flexibility in data analysis

Concerns have been raised that pre-registration limits flexibility in data analysis, including justifiable exploration [ 36 – 38 ]. For example, by requiring researchers to commit to a pre-registered analysis plan, pre-registration could prevent researchers from exploring novel questions (with a hypothesis-free approach), conducting follow-up analyses to investigate notable findings [ 39 ], or employing newly published methods with advantages over those pre-registered. While this concern is also likely to apply to primary data analysis, it is particularly relevant to certain fields involving secondary data analysis, such as genetic epidemiology, where new methods are rapidly being developed [ 40 ] and follow-up analyses are often required (e.g., in a genome-wide association study, to further investigate the role of a genetic variant associated with a phenotype). However, this concern is perhaps overstated: pre-registration does not preclude unplanned analyses; it simply makes it more transparent that these analyses are post hoc. Nevertheless, another understandable concern is that reduced analytic flexibility could lead to difficulties in publishing papers and accruing citations. For example, pre-registered studies are more likely to report null results [ 22 , 23 ], likely due to reduced analytic flexibility and selective reporting. While this is a positive outcome for research integrity, null results are less likely to be published [ 13 , 41 , 42 ] and cited [ 11 ], which could disadvantage researchers’ careers.

Potential solutions

In this section, we describe potential solutions to address the challenges involved in pre-registering secondary data analysis, including approaches to (1) address bias linked to prior knowledge of the data, (2) enable pre-registration of non-hypothesis-driven research, (3) ensure that pre-planned analyses will be appropriate for the data, and (4) address potential difficulties arising from reduced analytic flexibility.

Challenge: Prior knowledge of the data

Declare prior access to data

To increase transparency about potential biases arising from knowledge of the data, researchers could routinely report all prior data access in a pre-registration [ 29 ]. This would ideally include evidence from an independent gatekeeper (e.g., a data guardian of the study) stating whether data and relevant variables were accessed by each co-author. To facilitate this process, data guardians could set up a central “electronic checkout” system that records which researchers have accessed data, what data were accessed, and when [ 43 ]. The researcher or data guardian could then provide links to the checkout histories for all co-authors in the pre-registration, to verify their prior data access. If it is not feasible to provide such objective evidence, authors could self-certify their prior access to the dataset and, where possible, relevant variables—preferably listing any publications and in-preparation studies based on the dataset [ 29 ]. Of course, self-certification relies on trust that researchers will accurately report prior data access, which could be challenging if the study involves a large number of authors, or authors who have been involved in many studies on the dataset. However, it is likely to be the most feasible option at present, as many datasets do not have available electronic records of data access. For further guidance on self-certifying prior data access when pre-registering secondary data analysis studies on a third-party registry (e.g., the OSF), we recommend referring to the template by Van den Akker et al. [ 29 ].
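To make the checkout idea concrete, here is a minimal sketch of how a data guardian might log data access in R. The function name, log fields, and file format are illustrative assumptions rather than features of any existing system.

```r
# Minimal sketch of an "electronic checkout" log kept by a data guardian.
# All names and fields are illustrative assumptions.
log_data_access <- function(researcher, variables,
                            log_file = "data_checkout_log.csv") {
  entry <- data.frame(
    researcher = researcher,
    variables  = paste(variables, collapse = "; "),
    accessed   = format(Sys.time(), "%Y-%m-%d %H:%M:%S")
  )
  # Append to the log; write a header only if the file does not exist yet
  write.table(entry, log_file, sep = ",", row.names = FALSE,
              col.names = !file.exists(log_file),
              append = file.exists(log_file))
}

# Example: record that a researcher accessed two variables
log_data_access("researcher_01", c("maternal_smoking", "child_outcome"))
```

A pre-registration could then link to, or quote, the relevant rows of such a log for each co-author.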

The extent to which prior access to data renders pre-registration invalid is debatable. On the one hand, even if data have been accessed previously, pre-registration is likely to reduce QRPs by encouraging researchers to commit to a pre-specified analytic strategy. On the other hand, pre-registration does not fully protect against researcher bias where data have already been accessed, and can lend added credibility to study claims, which may be unfounded. Reporting prior data access in a pre-registration is therefore important to make these potential biases transparent, so that readers and reviewers can judge the credibility of the findings accordingly. However, for a more rigorous solution which protects against researcher bias in the context of prior data access, researchers should consider adopting a multiverse approach.

Conduct a multiverse analysis

A multiverse analysis involves identifying all potential analytic choices that could justifiably be made to address a given research question (e.g., different ways to code a variable, combinations of covariates, and types of analytic model), implementing them all, and reporting the results [ 44 ]. Notably, this method differs from the traditional approach in which findings from only one analytic method are reported. It is conceptually similar to a sensitivity analysis, but it is far more comprehensive, as often hundreds or thousands of analytic choices are reported, rather than a handful. By showing the results from all defensible analytic approaches, multiverse analysis reduces scope for selective reporting and provides insight into the robustness of findings against analytical choices (for example, if there is a clear convergence of estimates, irrespective of most analytical choices). For causal questions in observational research, Directed Acyclic Graphs (DAGs) could be used to inform selection of covariates in multiverse approaches [ 45 ] (i.e., to ensure that confounders, rather than mediators or colliders, are controlled for).

Specification curve analysis [ 46 ] is a form of multiverse analysis that has been applied to examine the robustness of epidemiological findings to analytic choices [ 6 , 47 ]. Specification curve analysis involves three steps: (1) identifying all analytic choices – termed “specifications”, (2) displaying the results graphically with magnitude of effect size plotted against analytic choice, and (3) conducting joint inference across all results. When applied to the association between digital technology use and adolescent well-being [ 6 ], specification curve analysis showed that the (small, negative) association diminished after accounting for adequate control variables and recall bias – demonstrating the sensitivity of results to analytic choices.
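As a rough illustration of the enumerate-and-estimate core of these approaches, the sketch below fits every combination of invented covariate sets and outcome codings on simulated data and plots the sorted estimates, i.e., steps (1) and (2) of a specification curve; the joint-inference step (3) is omitted, and all variables are assumptions for illustration only.

```r
# Minimal multiverse / specification curve sketch on simulated data
set.seed(1)
n <- 500
d <- data.frame(x = rnorm(n), c1 = rnorm(n), c2 = rnorm(n))
d$y <- 0.2 * d$x + 0.3 * d$c1 + rnorm(n)

# Step 1: enumerate specifications (covariate sets x outcome codings)
covariate_sets <- list("", "+ c1", "+ c2", "+ c1 + c2")
outcomes <- list(raw = function(y) y, standardised = function(y) scale(y)[, 1])
specs <- expand.grid(covs = seq_along(covariate_sets),
                     out = names(outcomes), stringsAsFactors = FALSE)

# Fit every specification and record the estimate for x
specs$estimate <- mapply(function(ci, oi) {
  dd <- d
  dd$y <- outcomes[[oi]](dd$y)
  f <- as.formula(paste("y ~ x", covariate_sets[[ci]]))
  coef(lm(f, data = dd))["x"]
}, specs$covs, specs$out)

# Step 2: display the estimates ordered by magnitude (the "curve")
plot(sort(specs$estimate), pch = 19,
     xlab = "Specification (sorted)", ylab = "Estimate for x")
abline(h = 0, lty = 2)
```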

Despite the benefits of the multiverse approach in addressing analytic flexibility, it is not without limitations. First, because each analytic choice is treated as equally valid, including less justifiable models could bias the results away from the truth. Second, the choice of specifications can be biased by prior knowledge (e.g., a researcher may choose to omit a covariate to obtain a particular result). Third, multiverse analysis may not entirely prevent selective reporting (e.g., if the full range of results are not reported), although pre-registering multiverse approaches (and specifying analytic choices) could mitigate this. Last, and perhaps most importantly, multiverse analysis is technically challenging (e.g., when there are hundreds or thousands of analytic choices) and can be impractical for complex analyses, very large datasets, or when computational resources are limited. However, this burden can be somewhat reduced by tutorials and packages which are being developed to standardise the procedure and reduce computational time [see 48 , 49 ].

Challenge: Research may not be hypothesis-driven

Pre-register research questions and conditions for interpreting findings

Observational research arguably does not need to have a hypothesis to benefit from pre-registration. For studies that are descriptive or focused on estimation, we recommend pre-registering research questions, analysis plans, and criteria for interpretation. Analytic flexibility will be limited by pre-registering specific research questions and detailed analysis plans, while post hoc interpretation will be limited by pre-specifying criteria for interpretation [ 50 ]. The potential for HARK-ing will also be minimised because readers can compare the published study to the original pre-registration, where a-priori hypotheses were not specified.

Detailed guidance on how to pre-register research questions and analysis plans for secondary data is provided in Van den Akker’s [ 29 ] tutorial. To pre-specify conditions for interpretation, it is important to anticipate – as much as possible – all potential findings, and state how each would be interpreted. For example, suppose that a researcher aims to test a causal relationship between X and Y using a multivariate regression model with longitudinal data. Assuming that all potential confounders have been fully measured and controlled for (albeit a strong assumption) and statistical power is high, three broad sets of results and interpretations could be pre-specified. First, an association between X and Y that is similar in magnitude to the unadjusted association would be consistent with a causal relationship. Second, an association between X and Y that is attenuated after controlling for confounders would suggest that the relationship is partly causal and partly confounded. Third, a minimal, non-statistically significant adjusted association would suggest a lack of evidence for a causal effect of X on Y. Depending on the context of the study, criteria could also be provided on the threshold (or range of thresholds) at which the effect size would justify different interpretations [ 51 ], be considered practically meaningful, or the smallest effect size of interest for equivalence tests [ 52 ]. While researcher biases might still affect the pre-registered criteria for interpreting findings (e.g., toward over-interpreting a small effect size as meaningful), this bias will at least be transparent in the pre-registration.
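A minimal sketch of how such pre-registered interpretation rules could be applied is shown below, on simulated data; the alpha level, the attenuation threshold, and all variable names are illustrative stand-ins for whatever criteria a researcher actually pre-specifies.

```r
# Sketch: compare unadjusted vs adjusted estimates for X against
# pre-registered interpretation rules (thresholds are illustrative)
set.seed(2)
n <- 1000
conf <- rnorm(n)                      # measured confounder
x <- 0.5 * conf + rnorm(n)
y <- 0.3 * conf + 0.1 * x + rnorm(n)

b_unadj <- coef(lm(y ~ x))["x"]
fit_adj <- lm(y ~ x + conf)
b_adj   <- coef(fit_adj)["x"]
p_adj   <- summary(fit_adj)$coefficients["x", "Pr(>|t|)"]

# Pre-registered rules, written before seeing the data:
if (p_adj < 0.05 && abs(b_adj) > 0.8 * abs(b_unadj)) {
  "Consistent with a causal relationship (little attenuation)"
} else if (p_adj < 0.05) {
  "Partly causal, partly confounded (estimate attenuated)"
} else {
  "No evidence for a causal effect of X on Y"
}
```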

Use a holdout sample to delineate exploratory and confirmatory research

Where researchers wish to integrate exploratory research into a pre-registered, confirmatory study, a holdout sample approach can be used [ 18 ]. Creating a holdout sample refers to the process of randomly splitting the dataset into two parts, often referred to as ‘training’ and ‘holdout’ datasets. To delineate exploratory and confirmatory research, researchers can first conduct exploratory data analysis on the training dataset (which should comprise a moderate fraction of the data, e.g., 35% [ 53 ]). Based on the results of the discovery process, researchers can pre-register hypotheses and analysis plans to formally test on the holdout dataset. This process has parallels with cross-validation in machine learning, in which the dataset is split and the model is developed on the training dataset, before being tested on the test dataset. The approach enables a flexible discovery process, before formally testing discoveries in a non-biased way.

When considering whether to use the holdout sample approach, three points should be noted. First, because the training dataset is not reusable, there will be a reduced sample size and loss of power relative to analysing the whole dataset. As such, the holdout sample approach will only be appropriate when the original dataset is large enough to provide sufficient power in the holdout dataset. Second, when the training dataset is used for exploration, subsequent confirmatory analyses on the holdout dataset may be overfitted (due to both datasets being drawn from the same sample), so replication in independent samples is recommended. Third, the holdout dataset should be created by an independent data manager or guardian, to ensure that the researcher does not have knowledge of the full dataset. However, it is straightforward to randomly split a dataset into a holdout and training sample and we provide example R code at: https://github.com/jr-baldwin/Researcher_Bias_Methods/blob/main/Holdout_script.md .
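The authors’ own script is linked above; purely as an independent, minimal sketch, such a split (here 35% training, matching the example fraction) might look as follows, with `full_data` standing in for the real dataset:

```r
# Minimal sketch of a training/holdout split, ideally performed by an
# independent data guardian; `full_data` is a stand-in dataset.
set.seed(42)                                   # fixed seed: reproducible split
full_data <- data.frame(x = rnorm(200), y = rnorm(200))

train_idx <- sample(nrow(full_data), size = round(0.35 * nrow(full_data)))
training_data <- full_data[train_idx, ]        # released for exploration
holdout_data  <- full_data[-train_idx, ]       # withheld until pre-registration
```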

Challenge: Pre-registered analyses are not appropriate for the data

Use blinding to test proposed analyses

One method to help ensure that pre-registered analyses will be appropriate for the data is to trial the analyses on a blinded dataset [ 54 ], before pre-registering. Data blinding involves obscuring the data values or labels prior to data analysis, so that the proposed analyses can be trialled on the data without observing the actual findings. Various types of blinding strategies exist [ 54 ], but one method that is appropriate for epidemiological data is “data scrambling” [ 55 ]. This involves randomly shuffling the data points so that any associations between variables are obscured, whilst the variable distributions (and amounts of missing data) remain the same. We provide a tutorial for how to implement this in R (see https://github.com/jr-baldwin/Researcher_Bias_Methods/blob/main/Data_scrambling_tutorial.md ). Ideally the data scrambling would be done by a data guardian who is independent of the research, to ensure that the main researcher does not access the data prior to pre-registering the analyses. Once the researcher is confident with the analyses, the study can be pre-registered, and the analyses conducted on the unscrambled dataset.
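The linked tutorial is the authors’ resource; as an independent minimal sketch, the essence of data scrambling is to permute each column separately, which preserves every variable’s distribution and missingness while breaking the associations between variables:

```r
# Minimal "data scrambling" sketch: permute each column independently so
# associations are destroyed but each variable's distribution (including
# its missing values) is preserved.
scramble <- function(df) {
  as.data.frame(lapply(df, sample))
}

set.seed(7)
d <- data.frame(exposure = rnorm(100))
d$outcome <- 0.5 * d$exposure + rnorm(100)
d$outcome[1:10] <- NA                          # missingness is kept, just moved

d_blind <- scramble(d)
cor(d$exposure, d$outcome, use = "complete.obs")              # real association
cor(d_blind$exposure, d_blind$outcome, use = "complete.obs")  # ~0 once scrambled
```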

Blinded analysis offers several advantages for ensuring that pre-registered analyses are appropriate, with some limitations. First, blinded analysis allows researchers to directly check the distribution of variables and amounts of missingness, without having to make assumptions about the data that may not be met, or spend time planning contingencies for every possible scenario. Second, blinded analysis prevents researchers from gaining insight into the potential findings prior to pre-registration, because associations between variables are masked. However, because of this, blinded analysis does not enable researchers to check for collinearity, predictors of missing data, or other covariances that may be necessary for model specification. As such, blinded analysis will be most appropriate for researchers who wish to check the data distribution and amounts of missingness before pre-registering.

Trial analyses on a dataset excluding the outcome

Another method to help ensure that pre-registered analyses will be appropriate for the data is to trial analyses on a dataset excluding outcome data. For example, data managers could provide researchers with part of the dataset containing the exposure variable(s) plus any covariates and/or auxiliary variables. The researcher can then trial and refine the analyses ahead of pre-registering, without gaining insight into the main findings (which require the outcome data). This approach is used to mitigate bias in propensity score matching studies [ 26 , 56 ], as researchers use data on the exposure and covariates to create matched groups, prior to accessing any outcome data. Once the exposed and non-exposed groups have been matched effectively, researchers pre-register the protocol ahead of viewing the outcome data. Notably, this approach could also help researchers to identify and address other analytical challenges involving secondary data. For example, it could be used to check multivariable distributional characteristics, test for collinearity between multiple predictor variables, or identify predictors of missing data for multiple imputation.

This approach offers certain benefits for researchers keen to ensure that pre-registered analyses are appropriate for the observed data, with some limitations. Regarding benefits, researchers will be able to examine associations between variables (excluding the outcome), unlike the data scrambling approach described above. This would be helpful for checking certain assumptions (e.g., collinearity or characteristics of missing data such as whether it is missing at random). In addition, the approach is easy to implement, as the dataset can be initially created without the outcome variable, which can then be added after pre-registration, minimising burden on data guardians. Regarding limitations, it is possible that accessing variables in advance could provide some insight into the findings. For example, if a covariate is known to be highly correlated with the outcome, testing the association between the covariate and the exposure could give some indication of the relationship between the exposure and the outcome. To make this potential bias transparent, researchers should report the variables that they already accessed in the pre-registration. Another limitation is that researchers will not be able to identify analytical issues relating to the outcome data in advance of pre-registration. Therefore, this approach will be most appropriate where researchers wish to check various characteristics of the exposure variable(s) and covariates, rather than the outcome. However, a “mixed” approach could be applied in which outcome data is provided in scrambled format, to enable researchers to also assess distributional characteristics of the outcome. This would substantially reduce the number of potential challenges to be considered in pre-registered analytical pipelines.
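To illustrate the kinds of checks this enables, the sketch below (with invented variable names) computes a correlation matrix among predictors and models missingness in a covariate, neither of which requires the outcome:

```r
# Sketch of checks possible on an extract released without the outcome
# (all variable names are illustrative assumptions).
set.seed(3)
n <- 300
no_outcome <- data.frame(exposure = rnorm(n))
no_outcome$cov1 <- 0.8 * no_outcome$exposure + rnorm(n)   # collinear covariate
no_outcome$cov2 <- rnorm(n)
no_outcome$cov2[sample(n, 30)] <- NA                      # some missingness

# Collinearity among predictors (no outcome required)
cor(no_outcome, use = "pairwise.complete.obs")

# Predictors of missingness, to inform a multiple-imputation model
no_outcome$miss_cov2 <- is.na(no_outcome$cov2)
summary(glm(miss_cov2 ~ exposure + cov1, data = no_outcome, family = binomial))
```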

Pre-register a decision tree

If it is not possible to access any of the data prior to pre-registering (e.g., to enable analyses to be trialled on a dataset that is blinded or missing outcome data), researchers could pre-register a decision tree. This defines the sequence of analyses and rules based on characteristics of the observed data [ 17 ]. For example, the decision tree could specify testing a normality assumption, and based on the results, whether to use a parametric or non-parametric test. Ideally, the decision tree should provide a contingency plan for each of the planned analyses, if assumptions are not fulfilled. Of course, it can be challenging and time consuming to anticipate every potential issue with the data and plan contingencies. However, investing time into pre-specifying a decision tree (or a set of contingency plans) could save time should issues arise during data analysis, and can reduce the likelihood of deviating from the pre-registration.
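A minimal example of such a pre-registerable decision rule is sketched below: an illustrative normality check determines whether a parametric or non-parametric group comparison is run.

```r
# Minimal sketch of a pre-registered decision rule: check normality of
# the outcome in each group, then branch to a parametric or
# non-parametric test (data and thresholds are illustrative).
set.seed(4)
group_a <- rexp(40)               # stand-in data; skewed on purpose
group_b <- rexp(40, rate = 0.8)

normal_enough <- shapiro.test(group_a)$p.value > 0.05 &&
                 shapiro.test(group_b)$p.value > 0.05

if (normal_enough) {
  result <- t.test(group_a, group_b)        # parametric branch
} else {
  result <- wilcox.test(group_a, group_b)   # non-parametric branch
}
result
```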

Challenge: Lack of flexibility in data analysis

Transparently report unplanned analyses

Unplanned analyses (such as applying new methods or conducting follow-up tests to investigate an interesting or unexpected finding) are a natural and often important part of the scientific process. Despite common misconceptions, pre-registration does not prevent such unplanned analyses from being included, as long as they are transparently reported as post hoc. If there are methodological deviations, we recommend that researchers (1) clearly state the reasons for using the new method, and (2) if possible, report results from both methods, to ideally show that the change in methods was not due to the results [ 57 ]. This information can either be provided in the manuscript or in an update to the original pre-registration (e.g., on a third-party registry such as the OSF), which can be useful when journal word limits are tight. Similarly, if researchers wish to include additional follow-up analyses to investigate an interesting or unexpected finding, these should be reported but labelled as “exploratory” or “post hoc” in the manuscript.

Ensure a paper’s value does not depend on statistically significant results

Researchers may be concerned that reduced analytic flexibility from pre-registration could increase the likelihood of reporting null results [ 22 , 23 ], which are harder to publish [ 13 , 42 ]. To address this, we recommend taking steps to ensure that the value and success of a study does not depend on a significant p-value. First, methodologically strong research (e.g., with high statistical power, valid and reliable measures, robustness checks, and replication samples) will advance the field, whatever the findings. Second, methods can be applied to allow for the interpretation of statistically non-significant findings (e.g., Bayesian methods [ 58 ] or equivalence tests, which determine whether an observed effect is surprisingly small [ 52 , 59 , 60 ]). This means that the results will be informative whatever they show, in contrast to approaches relying solely on null hypothesis significance testing, where statistically non-significant findings cannot be interpreted as meaningful. Third, researchers can submit the proposed study as a Registered Report, where it will be evaluated before the results are available. This is arguably the strongest way to protect against publication bias, as in-principle study acceptance is granted without any knowledge of the results. In addition, Registered Reports can improve the methodology, as suggestions from expert reviewers can be incorporated into the pre-registered protocol.
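As a sketch of the equivalence-testing idea, the two one-sided tests (TOST) procedure can be run in base R; the smallest effect size of interest used here (0.2 on the raw scale) is an assumed, pre-registered bound rather than a recommendation.

```r
# Minimal TOST (two one-sided tests) equivalence sketch in base R.
# SESOI of +/- 0.2 is an assumed, pre-registered bound.
set.seed(5)
group_a <- rnorm(100, mean = 0.00)
group_b <- rnorm(100, mean = 0.05)   # true difference well inside the bounds
sesoi <- 0.2

# H0 for equivalence: the true difference lies outside [-sesoi, +sesoi]
p_lower <- t.test(group_a, group_b, mu = -sesoi, alternative = "greater")$p.value
p_upper <- t.test(group_a, group_b, mu =  sesoi, alternative = "less")$p.value

# Equivalence is declared if both one-sided tests reject at alpha = 0.05
max(p_lower, p_upper) < 0.05
```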

Conclusion

Under a system that rewards novel and statistically significant findings, it is easy for subconscious human biases to lead to QRPs. However, researchers, along with data guardians, journals, funders, and institutions, have a responsibility to ensure that findings are reproducible and robust. While pre-registration can help to limit analytic flexibility and selective reporting, it involves several challenges for epidemiologists conducting secondary data analysis. The approaches described here aim to address these challenges (Fig. 1), either by improving the efficacy of pre-registration or by providing an alternative approach to addressing analytic flexibility (e.g., a multiverse analysis). The responsibility for adopting these approaches should not fall only on researchers’ shoulders; data guardians also have an important role to play in recording and reporting access to data, providing blinded datasets and holdout samples, and encouraging researchers to pre-register and adopt these solutions as part of their data request. Furthermore, wider stakeholders could incentivise these practices; for example, journals could provide a designated space for researchers to report deviations from the pre-registration, and funders could provide grants to establish best practice at the cohort level (e.g., data checkout systems, blinded datasets). Ease of adoption is key to ensuring wide uptake, and we therefore encourage efforts to evaluate, simplify, and improve these practices. Steps that could be taken to evaluate these practices are presented in Box 1.

More broadly, it is important to emphasise that researcher biases do not operate in isolation, but rather in the context of wider publication bias and a “publish or perish” culture. These incentive structures not only promote QRPs [ 61 ], but also discourage researchers from pre-registering and adopting other time-consuming reproducible methods. Therefore, in addition to targeting bias at the individual researcher level, wider initiatives from journals, funders, and institutions are required to address these institutional biases [ 7 ]. Systemic changes that reward rigorous and reproducible research will help researchers to provide unbiased answers to science and society’s most important questions.

Box 1. Evaluation of approaches

To evaluate, simplify and improve approaches to protect against researcher bias in secondary data analysis, the following steps could be taken.

Co-creation workshops to refine approaches

To obtain feedback on the approaches (including on any practical concerns or feasibility issues), co-creation workshops could be held with researchers, data managers, and wider stakeholders (e.g., journals, funders, and institutions).

Empirical research to evaluate efficacy of approaches

To evaluate the effectiveness of the approaches in preventing researcher bias and/or improving pre-registration, empirical research is needed. For example, to test the extent to which the multiverse analysis can reduce selective reporting, comparisons could be made between effect sizes from multiverse analyses versus effect sizes from meta-analyses (of non-pre-registered studies) addressing the same research question. If smaller effect sizes were found in multiverse analyses, it would suggest that the multiverse approach can reduce selective reporting. In addition, to test whether providing a blinded dataset or dataset missing outcome variables could help researchers develop an appropriate analytical protocol, researchers could be randomly assigned to receive such a dataset (or no dataset), prior to pre-registration. If researchers who received such a dataset had fewer eventual deviations from the pre-registered protocol (in the final study), it would suggest that this approach can help ensure that proposed analyses are appropriate for the data.

Pilot implementation of the measures

To assess the practical feasibility of the approaches, data managers could pilot measures for users of the dataset (e.g., required pre-registration for access to data, provision of datasets that are blinded or missing outcome variables). Feedback could then be collected from researchers and data managers on the experience and ease of use.

Acknowledgements

The authors are grateful to Professor George Davey Smith for his helpful comments on this article.

Author contributions

JRB and MRM developed the idea for the article. The first draft of the manuscript was written by JRB, with support from MRM and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Funding

J.R.B. is funded by a Wellcome Trust Sir Henry Wellcome fellowship (grant 215917/Z/19/Z). J.B.P. is supported by the Medical Research Foundation 2018 Emerging Leaders 1st Prize in Adolescent Mental Health (MRF-160-0002-ELP-PINGA). M.R.M. and H.M.S. work in a unit that receives funding from the University of Bristol and the UK Medical Research Council (MC_UU_00011/5, MC_UU_00011/7), and M.R.M. is also supported by the National Institute for Health Research (NIHR) Biomedical Research Centre at the University Hospitals Bristol National Health Service Foundation Trust and the University of Bristol.

Declarations

The authors declare that they have no conflict of interest.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
