Catalogue of Bias


Reporting biases

A systematic distortion that arises from the selective disclosure or withholding of information by parties involved in the design, conduct, analysis, or dissemination of a study or research findings.


Reporting bias is an umbrella term that covers a range of different types of bias. It has been described as the most significant form of scientific misconduct (Al-Marzouki et al., 2005). Reporting biases have been recognised for hundreds of years, dating back to the 17th century (Dickersin & Chalmers, 2010). Since then, various definitions of reporting bias have been proposed:

  • The Dictionary of Epidemiology defines reporting bias as the “selective revelation or suppression of information (e.g., about past medical history, smoking, sexual experiences) or of study results.”
  • The Cochrane Handbook states it arises “when the dissemination of research findings is influenced by the nature and direction of results.”
  • The James Lind Library states “biased reporting of research occurs when the direction or statistical significance of results influence whether and how research is reported.”

Our definition of reporting bias is a distortion of the information presented from research, due to the selective disclosure or withholding of information by the parties involved, with regard to the topic selected for study and the design, conduct, analysis, or dissemination of study methods, findings, or both. Researchers have previously described seven types of reporting biases: publication bias, time-lag bias, multiple (duplicate) publication bias, location bias, citation bias, language bias and outcome reporting bias (Higgins & Green, 2011). Figure 1 illustrates where reporting biases can occur in the lifecycle of research and provides several examples of reporting biases.

Figure 1. Where reporting biases can occur in the lifecycle of research, with examples.


A narrative review by McGauran and colleagues (2010) found that reporting biases are a widespread phenomenon in the medical literature. They identified reporting biases across 40 indications, comprising around 50 different pharmacological, surgical, diagnostic and preventive interventions; many cases involved the withholding of study data or active attempts by manufacturers to suppress publication.

A systematic review by Jones and colleagues (2015) compared the outcomes of randomised controlled trials specified in registered protocols with those reported in subsequent peer-reviewed journal articles. There were discrepancies between prespecified and reported outcomes in a third of the trials, and 13% of trials introduced a new outcome in the published article that had not been specified in the registered protocol.
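The bookkeeping behind such comparisons is simple set arithmetic over outcome lists. The following Python sketch (outcome names are invented for illustration, not drawn from Jones and colleagues' data) tallies the two kinds of discrepancy counted in such reviews:

```python
# Toy comparison of prespecified versus published outcomes for one trial.
# All outcome names are hypothetical.
registered = {"pain at 12 weeks", "function at 12 weeks", "serious adverse events"}
published = {"pain at 12 weeks", "responder rate at 6 weeks"}

omitted = registered - published      # prespecified but never reported
introduced = published - registered   # reported but never prespecified

print("Omitted outcomes:   ", sorted(omitted))
print("Introduced outcomes:", sorted(introduced))
print("Discrepancy present:", bool(omitted or introduced))
```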

In a cohort study of Cochrane systematic reviews, Saini and colleagues (2014) found 86% of reviews did not report data on the main harm outcome of interest.

Another cohort study found considerable inconsistency in the reporting of adverse events when comparing sponsors' databases with published articles (Scharf & Colevas, 2006). In 14 of the 22 included studies, the number of adverse events in the sponsor's database differed from that in the published article by 20% or more.

When more detailed information on the interventions was analysed for oseltamivir trials, over half (55%) of the previous risk of bias assessments were reclassified from 'low' to 'high' risk of bias (Jefferson et al., 2014).

Trials and systematic reviews are used by clinicians and policymakers to develop evidence-based guidelines and make decisions about treatment or prevention of health problems. When the evidence base available to clinicians, policymakers or patients is incomplete or distorted, healthcare decisions and recommendations are made on biased evidence.

Vioxx (rofecoxib), a COX-2 inhibitor prescribed for osteoarthritis pain, provides an important example of under-reporting and misreporting of data that led to significant patient harm.

The first safety analysis of the largest study of rofecoxib found a 79% greater risk of death or serious cardiovascular event in one treatment group compared with the other (Krumholz et al., 2007). This information was not disclosed by the manufacturer (Merck), and the trial continued. The cardiovascular risk associated with rofecoxib was obscured in several ways.

Several significant conflicts of interest among Merck board members were not disclosed, either while the trial was in progress or when it was published (Krumholz et al., 2007). Merck now faces legal claims from nearly 30,000 people who experienced adverse cardiovascular events while taking rofecoxib.

If benefits are over-reported and harms are under-reported, clinicians, patients and the public gain a false sense of security about the safety of treatments. This results in unnecessary suffering and death (Cowley et al., 1993), perpetuates research waste and misguides future research (Glasziou & Chalmers, 2018).

Preventive steps

Transparency is the most important action to safeguard health research.

Pre-study: The results of prospectively registered trials are significantly more likely to be published than those of unregistered trials (adjusted OR 4.53, 95% CI 1.12 to 18.34; Chan et al., 2017). Prospective registration of all clinical trials should be required, and encouraged for other study designs, by journal editors, regulators, research ethics committees, funders, and sponsors.
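To make the headline statistic concrete: an odds ratio of this kind comes from a 2x2 table of registration status against publication status, with the confidence interval usually derived on the log-odds scale. The sketch below uses invented counts and a crude (unadjusted) Wald interval; note that the Chan et al. estimate quoted above is an adjusted odds ratio from a regression model, not a crude one.

```python
import math

# Hypothetical 2x2 table (invented counts, not Chan et al.'s data):
#                 published  unpublished
# registered          40          10
# unregistered        20          25
a, b, c, d = 40, 10, 20, 25

or_crude = (a * d) / (b * c)                  # crude odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
lo = math.exp(math.log(or_crude) - 1.96 * se_log_or)
hi = math.exp(math.log(or_crude) + 1.96 * se_log_or)

print(f"OR = {or_crude:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# -> OR = 5.00, 95% CI 2.01 to 12.41
```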

During the study: Open science practices, such as making de-identified data and analytical code publicly available through platforms like GitHub or the Open Science Framework, aid reproducibility, prevent duplication, reduce waste, accelerate innovation, help identify errors, and discourage reporting biases.

Post-study: Reporting guidelines such as CONSORT can help researchers improve the reporting of randomised trials (Moher et al., 2010).

Other checklists and tools have been developed to assess the risk of reporting biases in studies, including the Cochrane Risk of Bias Tool, GRADE and ORBIT-II (Page et al., 2018).
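Alongside such checklists, meta-analysts often apply statistical checks for small-study effects; Egger's regression test for funnel-plot asymmetry is one widely used option. Below is a minimal Python sketch using invented effect estimates and standard errors; asymmetry can have causes other than reporting bias, so the test is a screening aid, not proof.

```python
from scipy import stats

# Hypothetical per-trial effect estimates and standard errors
# (invented numbers for illustration).
effects = [0.42, 0.31, 0.55, 0.18, 0.60, 0.12, 0.48, 0.25]
ses = [0.20, 0.12, 0.25, 0.08, 0.30, 0.06, 0.22, 0.10]

z = [e / s for e, s in zip(effects, ses)]   # standardized effects
precision = [1.0 / s for s in ses]

# Egger's test: regress standardized effect on precision; an intercept
# far from zero suggests funnel-plot asymmetry.
res = stats.linregress(precision, z)        # intercept_stderr needs SciPy >= 1.6
t_stat = res.intercept / res.intercept_stderr
p_value = 2 * stats.t.sf(abs(t_stat), df=len(z) - 2)

print(f"Egger intercept = {res.intercept:.2f}, p = {p_value:.3f}")
```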

How to cite: Richards GC, Onakpoya IJ. Reporting biases. In: Catalogue of Bias 2019. www.catalogueofbiases.org/reportingbiases

Related biases

  • Outcome reporting bias
  • Publication bias
  • Selection bias

References

Al-Marzouki et al. The effect of scientific misconduct on the results of clinical trials: a Delphi survey. Contemp Clin Trials 2005;26:331-337.

Dickersin & Chalmers. Recognising, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the World Health Organisation. JLL Bulletin: Commentaries on the history of treatment evaluation, 2010.

Higgins & Green. Definitions of some types of reporting biases. Cochrane Handbook for Systematic Reviews of Interventions, v5.1.0, 2011.

McGauran et al. Reporting bias in medical research – a narrative review. Trials 2010;11:37.

Jones et al. Comparison of registered and published outcomes in randomized controlled trials: a systematic review. BMC Med 2015;13:282.

Saini et al. Selective reporting bias of harm outcomes within studies: findings from a cohort of systematic reviews. BMJ 2014;349.

Scharf & Colevas. Adverse event reporting in publications compared with sponsor database for cancer clinical trials. J Clin Oncol 2006;24(24):3933-3938.

Jefferson et al. Risk of bias in industry-funded oseltamivir trials: comparison of core reports versus full clinical study reports. BMJ Open 2014;4:e005253.

Krumholz et al. What have we learnt from Vioxx? BMJ 2007;334(7585):120-123.

Cowley et al. The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: an example of publication bias. Int J Cardiol 1993;40(2):161-166.

Glasziou & Chalmers. Research waste is still a scandal. BMJ 2018;363:k4645.

Chan et al. Association of trial registration with reporting of primary outcomes in protocols and publications. JAMA 2017;318(17):1709-1711.

Moher et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869.


Reporting Biases (book chapter)

S. Swaroop Vedula, Asbjørn Hróbjartsson & Matthew J. Page. In: Piantadosi S, Meinert CL (eds) Principles and Practice of Clinical Trials, pp 2045–2071. Springer, Cham. First published online: 20 July 2022. https://doi.org/10.1007/978-3-319-52636-2_183

Clinical trials are experiments in human beings. Findings from these experiments, either by themselves or within research syntheses, are often meant to inform evidence-based clinical decision-making. These decisions can be misled when clinical trials are reported in a biased manner. For clinical trials to inform healthcare decisions without bias, their reporting should be complete, timely, transparent, and accessible. Reporting of a clinical trial is biased when it is influenced by the nature and direction of its results. Reporting biases in clinical trials may manifest in different ways, including results not being reported at all, reported in part, reported with delay, or reported in sources of scientific literature that are harder to access. Biased reporting of clinical trials can in turn introduce bias into research syntheses, with the eventual consequence being misinformed healthcare decisions. Clinical trial registration, access to protocols and statistical analysis plans, and guidelines for transparent and complete reporting are critical to prevent reporting biases.


Open access | Published: 13 April 2010

Reporting bias in medical research - a narrative review

Natalie McGauran, Beate Wieseler, Julia Kreis, Yvonne-Beatrice Schüler, Heike Kölsch & Thomas Kaiser

Trials volume 11, Article number: 37 (2010)


Reporting bias represents a major problem in the assessment of health care interventions. Several prominent cases have been described in the literature, for example, in the reporting of trials of antidepressants, Class I anti-arrhythmic drugs, and selective COX-2 inhibitors. The aim of this narrative review is to gain an overview of reporting bias in the medical literature, focussing on publication bias and selective outcome reporting. We explore whether these types of bias have been shown in areas beyond the well-known cases noted above, in order to gain an impression of how widespread the problem is. For this purpose, we screened relevant articles on reporting bias that had previously been obtained by the German Institute for Quality and Efficiency in Health Care in the context of its health technology assessment reports and other research work, together with the reference lists of these articles.

We identified reporting bias in 40 indications comprising around 50 different pharmacological, surgical (e.g. vacuum-assisted closure therapy), diagnostic (e.g. ultrasound), and preventive (e.g. cancer vaccines) interventions. Regarding pharmacological interventions, cases of reporting bias were, for example, identified in the treatment of the following conditions: depression, bipolar disorder, schizophrenia, anxiety disorder, attention-deficit hyperactivity disorder, Alzheimer's disease, pain, migraine, cardiovascular disease, gastric ulcers, irritable bowel syndrome, urinary incontinence, atopic dermatitis, diabetes mellitus type 2, hypercholesterolaemia, thyroid disorders, menopausal symptoms, various types of cancer (e.g. ovarian cancer and melanoma), various types of infections (e.g. HIV, influenza and Hepatitis B), and acute trauma. Many cases involved the withholding of study data by manufacturers and regulatory agencies or the active attempt by manufacturers to suppress publication. The ascertained effects of reporting bias included the overestimation of efficacy and the underestimation of safety risks of interventions.

In conclusion, reporting bias is a widespread phenomenon in the medical literature. Mandatory prospective registration of trials and public access to study data via results databases need to be introduced on a worldwide scale. This will allow for an independent review of research data, help fulfil ethical obligations towards patients, and ensure a basis for fully-informed decision making in the health care system.


The reporting of research findings may depend on the nature and direction of results, which is referred to as "reporting bias" [1, 2]. For example, studies in which interventions are shown to be ineffective are sometimes not published, meaning that only a subset of the relevant evidence on a topic may be available [1, 2]. Various types of reporting bias exist (Table 1), including publication bias and outcome reporting bias, which concern bias from missing outcome data on 2 levels: the study level, i.e. "non-publication due to lack of submission or rejection of study reports", and the outcome level, i.e. "the selective non-reporting of outcomes within published studies" [3].

Reporting bias on a study level

Results of clinical research are largely underreported or reported with delay. Various analyses of research protocols submitted to institutional review boards and research ethics committees in Europe, the United States, and Australia found that, on average, only about half of the protocols had been published, with higher publication rates in Anglo-Saxon countries [4-10].

Similar analyses have been performed of trials submitted to regulatory authorities: a cohort study of trials supporting new drugs approved by the Food and Drug Administration (FDA) identified over 900 trials of 90 new drugs in FDA reviews; only 43% of the trials were published [11]. Wide variations in publication rates have been shown for specific indications [12-16]. The selective submission of clinical trials with positive outcomes to regulatory authorities has also been described [17]. Even if trials are published, the time lapse until publication may be substantial [8, 18, 19].

There is no simple classification of a clinical trial into "published" or "unpublished", as varying degrees of publication exist. These range from full-text publications in peer-reviewed journals that are easily identifiable through a search in bibliographic databases, to study information entered in trial registries, so-called grey literature (e.g. abstracts and working papers), and data on file in drug companies and regulatory agencies, which may or may not be provided to health technology assessment (HTA) agencies or other researchers after being requested. If such data are transmitted, they may then be fully published or not (e.g. the German Institute for Quality and Efficiency in Health Care [Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, IQWiG] publishes all data used in its assessment reports [20], whereas the UK National Institute for Clinical Excellence [NICE] may accept "commercial in confidence" data [21]).

Even if studies are presented at meetings, this does not necessarily mean subsequent full publication: an analysis of nearly 30,000 meeting abstracts from various disciplines found a publication rate of 63% for randomized or controlled clinical trials [22].

Reporting bias on an outcome level

Selective reporting within a study may involve (a) selective reporting of analyses or (b) selective reporting of outcomes. This may include, for example, the reporting of (a) per-protocol (PP) versus intention-to-treat (ITT) analyses or adjusted versus unadjusted analyses, and (b) outcomes from different time points or statistically significant versus non-significant outcomes [3, 23].

Various reviews have found extensive selective reporting in study publications [3, 14, 24-28]. For example, comparisons of publications with study protocols have shown that primary outcomes had been newly introduced, omitted, or changed in about 40% to 60% of cases [3, 24]. Selective reporting particularly concerns the underreporting of adverse events [12, 29-32]. For example, an analysis of 192 randomized drug trials in various indications showed that only 46% of publications stated the frequency of specific reasons for treatment discontinuation due to toxicity [29]. Outcomes are not only selectively reported; negative results are also reported in a positive manner, and conclusions are often not supported by the results data [16, 26, 33-35]. For instance, a comparison of study characteristics reported in FDA reviews of New Drug Applications (NDAs) with those reported in publications found that 9 of 99 conclusions had been changed in the publications, all in favour of the new drug [26].

Factors associated with reporting bias

Characteristics of published studies

The fact that studies with positive or favourable results are more likely to be published than those with negative or unfavourable results was already addressed in the 1950s [36], and has since been widely confirmed [3, 6-8, 14, 37-40]. Studies with positive or favourable results have been associated with various other factors, such as faster publication [8, 18, 19, 37], publication in higher-impact-factor journals [7, 41], a greater number of publications [7] (including covert duplicate publications [42]), more frequent citation [43-45], and more likely publication in English [46].

Several other factors have been linked to successful publication, for example, methodological quality [47], study type [47], sample size [5, 7, 48], multicentre status [5, 6, 41], and non-commercial funding [5, 6, 49, 50]. However, for some factors, these associations are inconsistent [6, 37].

Submission and rejection of studies

One of the main reasons for the non-publication of negative studies seems to be the non-submission of manuscripts by investigators, not the rejection of manuscripts by medical journals. A follow-up of studies approved by US institutional review boards showed that only 6 of 124 unpublished studies had actually been rejected for publication [6]. A prospective cohort study of 745 manuscripts submitted to JAMA showed no statistically significant difference in publication rates between studies with positive and those with negative results [51], which has been confirmed by further analyses of other journals [47, 52]. Author surveys have shown that the most common reasons for not submitting papers were negative results and a lack of interest, time, or other resources [39-41, 53].

The role of the pharmaceutical industry

An association has been shown between industry sponsorship or industry affiliation of authors and positive research outcomes and conclusions, both in publications of primary studies and in systematic reviews [49, 54-63]. For example, in a systematic review of the scope and impact of financial conflicts of interest in biomedical research, an aggregation of the results of 8 analyses of the relation between industry sponsorship and outcomes showed a statistically significant association between industry sponsorship and pro-industry conclusions [55]. A comparison of the methodological quality and conclusions in Cochrane reviews with those in industry-supported meta-analyses found that the latter were less transparent, less critical of methodological limitations of the included trials, and drew more favourable conclusions [57]. In addition, publication constraints and active attempts to prevent publication have been identified in industry-sponsored research [55, 64-68]. Other aspects of industry involvement, such as design bias, are beyond the scope of this paper.

Rationale, aim and procedure

IQWiG produces HTA reports of drug and non-drug interventions for the decision-making body of the statutory health care funds, the Federal Joint Committee. The process of report production includes requesting information on published and unpublished studies from manufacturers; unfortunately, compliance by manufacturers is inconsistent, as recently shown in the attempted concealment of studies on antidepressants [69]. Reporting bias in antidepressant research has been shown before [16, 70]; other well-known cases include Class I anti-arrhythmic drugs [71, 72] and selective COX-2 inhibitors [73, 74].

The aim of this narrative review was to gain an overview of reporting bias in the medical literature, focussing on publication bias and selective outcome reporting. We wished to explore whether this type of bias has been shown in areas beyond the well-known cases noted above, in order to obtain an impression of how widespread this problem is. The review was based on the screening of full-text publications on reporting bias that had either been obtained by the Institute in the context of its HTA reports and other research work or were identified by screening the reference lists of these publications. The retrieved examples were organized according to indications and interventions. We also discuss the effects of reporting bias, as well as the measures that have been implemented to solve this problem.

The term "reporting bias" traditionally refers to the reporting of clinical trials and other types of studies. If one extends the term beyond experimental settings, for example, to the withholding of information on any beneficial medical innovation, then an early example of reporting bias was noted by Rosenberg in his article "Secrecy in medical research", which describes the invention of the obstetrical forceps. This device was developed by the Chamberlen brothers in Europe in the 17th century; however, it was kept secret for commercial reasons for three generations and, as a result, many women and neonates died during childbirth [75]. In the context of our paper, we also considered this extended definition of reporting bias.

We identified reporting bias in 40 indications comprising around 50 different interventions. Examples were found in various sources, e.g. journal articles of published versus unpublished data, reviews of reporting bias, editorials, letters to the editor, newspaper reports, expert and government reports, books, and online sources. The following text summarizes the information presented in these examples. More details and references to the background literature are included in Additional file 1: Table S2.

Mental and behavioural disorders

Reporting bias is common in psychiatric research (see below). This also includes industry-sponsorship bias [76-82].

Turner et al compared FDA reviews of antidepressant trials including over 12,000 patients with the matching publications and found that 37 of the 38 trials viewed as positive by the FDA were published [16]. Of the 36 trials with negative or questionable results according to the FDA, 22 were unpublished, and 11 of the 14 published studies conveyed a positive outcome. According to the publications, 94% of the trials had positive results, in contrast to the proportion reported by the FDA (51%). The overall increase in effect size in the published trials was 32%. In a meta-analysis of data from antidepressant trials submitted to the FDA, Kirsch et al requested data on 6 antidepressants from the FDA under the Freedom of Information Act. However, the FDA did not disclose relevant data from 9 of 47 trials, all of which failed to show a statistically significant benefit over placebo. Data from 4 of these trials were available on the GlaxoSmithKline (GSK) website. In total, the missing data represented 38% of patients in sertraline trials and 23% of patients in citalopram trials. The analysis of trials investigating the 4 remaining antidepressants showed that drug-placebo differences in antidepressant efficacy were relatively small, even for severely depressed patients [83].
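The direction of the distortion Turner et al quantified can be reproduced with a toy simulation. In the Python sketch below (all parameters are invented), every trial estimates the same true effect, but nominally significant results are always published while the rest are published only occasionally; averaging the published trials then overstates the effect:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.20   # assumed true standardized effect (invented)
SE = 0.15            # common standard error per trial (invented)
N_TRIALS = 200

effects = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_TRIALS)]

# Crude publication rule: nominally significant trials (z > 1.96) are
# always published; the rest appear with probability 0.2.
published = [e for e in effects if e / SE > 1.96 or random.random() < 0.2]

print(f"All trials:     mean effect = {statistics.mean(effects):.3f}")
print(f"Published only: mean effect = {statistics.mean(published):.3f} "
      f"({len(published)}/{N_TRIALS} published)")
```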

Selective serotonin reuptake inhibitors (SSRIs)

One of the biggest controversies surrounding unpublished data was the withholding of efficacy and safety data from SSRI trials. In a lawsuit launched by the Attorney General of the State of New York, it was alleged that GSK had published positive information about the paediatric use of paroxetine in major depressive disorder (MDD), but had concealed negative safety and efficacy data [84]. The company had conducted at least 5 trials on the off-label use of paroxetine in children and adolescents but published only one, which showed mixed results for efficacy. The results of the other trials, which did not demonstrate efficacy and suggested a possible increased risk of suicidality, were suppressed [84]. As part of a legal settlement, GSK agreed to establish an online clinical trials registry containing results summaries for all GSK-sponsored studies conducted after a set date [85, 86].

Whittington et al performed a systematic review of published versus unpublished data on SSRIs in childhood depression. While published data indicated a favourable risk-benefit profile for some SSRIs, the inclusion of unpublished data indicated a potentially unfavourable risk-benefit profile for all SSRIs investigated except fluoxetine [70].

Newer antidepressants

IQWiG published the preliminary results of an HTA report on reboxetine, a selective norepinephrine reuptake inhibitor, and other antidepressants. At least 4600 patients had participated in 16 reboxetine trials, but the majority of data were unpublished. Despite a request for information, the manufacturer Pfizer refused to provide these data. Only data on about 1600 patients were analysable, and IQWiG concluded that, due to the risk of publication bias, no statement on the benefit or harm of reboxetine could be made [69, 87]. The preliminary HTA report mentioned above also included an assessment of mirtazapine, a noradrenergic and specific serotonergic antidepressant. Four potentially relevant trials were identified in addition to the 27 trials included in the assessment, but the manufacturer Essex Pharma did not provide the study reports. Regarding the other trials, the manufacturer did not send the complete study reports, so the full analyses were not available. IQWiG concluded that the results of the assessment of mirtazapine may have been biased by unpublished data [69, 87]. After the behaviour of Pfizer and Essex Pharma had been widely publicized, the companies provided the majority of study reports for the final HTA report. The preliminary report's conclusion on the effects of mirtazapine was not affected by the additional data. For reboxetine, the analysis of the published and unpublished data changed the conclusion from "no statement possible" to "no benefit proven" [88].

Bipolar disorder

Lamotrigine

A review by Nassir Ghaemi et al of data on lamotrigine in bipolar disorder provided on the GSK website showed that data from negative trials were available on the website, but that the studies had not been published in detail, or publications emphasized positive secondary outcomes instead of negative primary ones. Outside of the primary area of efficacy (prophylaxis of mood episodes), the drug showed very limited efficacy in indications such as acute bipolar depression, for which clinicians were supporting its use [35].

Gabapentin

Gabapentin, a GABA analogue, was approved by the FDA in 1993 for a certain type of epilepsy, and in 2002 for postherpetic neuralgia. As of February 1996, 83% of gabapentin use was for epilepsy and 17% for off-label indications (see the expert report by Abramson [89]). As the result of a comprehensive marketing campaign by Pfizer, the number of patients in the US taking gabapentin rose from about 430,000 to nearly 6 million between 1996 and 2001; this increase was solely due to off-label use for indications including bipolar disorder. As of September 2001, 93.5% of gabapentin use was for off-label indications [89]. In a further expert report, Dickersin noted "extensive evidence of reporting bias" [34], which she further analysed in a recent publication with Vedula et al [90]. Concerning the trials of gabapentin for bipolar disorder, 2 of the 3 trials (all published) were negative for the primary outcome. However, these publications showed "extensive spin and misrepresentation of data" [34].

Schizophrenia

The Washington Post reported that a trial of quetiapine, an atypical antipsychotic, was "silenced" in 1997, the same year the drug was approved by the FDA to treat schizophrenia. The study ("Study 15") was not published. Patients taking quetiapine had shown high rates of treatment discontinuation and had experienced significant weight gain. However, data presented by the manufacturer AstraZeneca in 1999 at European and US meetings actually indicated that the drug helped psychotic patients lose weight [91].

Panic disorder

Turner described an example of reporting bias in the treatment of panic disorder: according to a review article, 3 "well designed studies" had apparently shown that the controlled-release formulation of paroxetine was effective in patients with this condition. However, according to the corresponding FDA statistical review, only one study was strongly positive; the second was non-significant for the primary outcome (and marginally significant for a secondary outcome), and the third was clearly negative [92].

Further examples of reporting bias in research on mental and behavioural disorders are included in Additional file 1: Table S2.

Disorders of the nervous system

Alzheimer's disease

Internal company analyses and information provided by the manufacturer Merck & Co to the FDA on rofecoxib, a selective COX-2 inhibitor, were released during litigation procedures. The documents referred to trials investigating the effects of rofecoxib on the occurrence or progression of Alzheimer's disease. Psaty and Kronmal performed a review of these documents and 2 trial publications and showed that, although presenting mortality data, the publications had not included analyses or statistical tests of these data, and both had concluded that, regarding safety, rofecoxib was "well tolerated". In contrast, in April 2001, Merck's internal ITT analyses of pooled data from these 2 trials showed a significant increase in total mortality. However, this information was neither disclosed to the FDA nor published in a timely fashion [74]. Rofecoxib was taken off the market by Merck in 2004 [93], amid allegations that the company had been aware of the safety risks since 2000 [73].

Pain

In their article "An untold story?", Lenzer and Brownlee reported the case of valdecoxib, another selective COX-2 inhibitor withdrawn from the market due to cardiovascular concerns [94, 95]. In 2001, the manufacturer Pfizer had applied for approval in 4 indications, including acute pain. The application for acute pain was rejected, and some of the information about the corresponding trials was removed from the FDA website for confidentiality reasons. Further examples of reporting bias in research on pain are presented in Additional file 1: Table S2.

Migraine

According to the expert report by Dickersin, all 3 trials of gabapentin for migraine showed negative results for the primary outcome. Substantial reporting bias was present: one trial was fully published (seemingly with a redefined primary outcome showing positive results in a subgroup of patients), one was unpublished, and preliminary (positive) results were presented for the third [34].

Disorders of the circulatory system

Coronary heart disease (bleeding prophylaxis during bypass surgery)

In his article on observational studies of drug safety, Hiatt reported the case of aprotinin, an antifibrinolytic drug formerly marketed to reduce bleeding during heart bypass graft surgery. In 2006, data from 2 published observational studies raised serious concerns about the drug's safety [96]. The FDA subsequently convened an expert meeting at which the safety data presented by the manufacturer Bayer did not reveal any increased risk of fatal or nonfatal cardiovascular events. However, it turned out that Bayer had not presented additional observational data which, according to an FDA review, indicated that aprotinin may be associated with an increased risk of death and other serious adverse events. In November 2007, Bayer suspended the worldwide marketing of aprotinin after requests and advice from various drug regulatory authorities [97].

Prevention of arrhythmia

Class I anti-arrhythmic drugs

In a clinical trial conducted in 1980, 9 of 49 patients with suspected acute myocardial infarction treated with a class Ic anti-arrhythmic drug (lorcainide) died, versus only one patient in the placebo group; the investigators interpreted this finding as an "effect of chance" [71]. The development of lorcainide was discontinued for commercial reasons, and the results of the trial were not published until 1993. The investigators then stated that, had the trial been published earlier, it "might have provided an early warning of trouble ahead" [71]. Instead, during the 1980s, class I drugs were widely used, even though concerns about their lack of effect were published as early as 1983 [98, 99]. Further reviews and trials confirmed this suspicion, as well as an increase in mortality [100-102]. In his book "Deadly Medicine", Moore described the consequences as "America's worst drug disaster", which had "produced a death toll larger than the United States' combat losses in wars such as Korea and Vietnam" [72]. Further examples of reporting bias in research on disorders of the circulatory system are presented in Additional file 1: Table S2.

Disorders of the digestive system

Irritable bowel syndrome

Barbehenn et al compared a published trial of alosetron, a 5-HT3 antagonist, in women with irritable bowel syndrome with data obtained from the FDA [103]. They noted that, according to the graphics in the publication, which presented relative differences in pain and discomfort scores, the drug seemed effective. However, when the absolute data from the FDA review were plotted, the data points were almost superimposable. After discussions with the FDA about potential serious side effects, the drug was withdrawn from the market by the manufacturer in 2000, but reapproved with restrictions in 2002 [104]. A further example of reporting bias in research on disorders of the digestive system is presented in Additional file 1: Table S2.

Disorders of the genitourinary system/Perinatal medicine

Urinary incontinence.

Lenzer and Brownlee also reported cases of suicide in a trial investigating the selective serotonin and noradrenalin reuptake inhibitor duloxetine for a new indication, urinary incontinence in women. However, the FDA refused to release data on these cases, citing trade secrecy laws. These laws "permit companies to withhold all information, even deaths, about drugs that do not win approval for a new indication, even when the drug is already on the market for other indications" [ 94 ]. Two examples of reporting bias in perinatal research are presented in Additional file 1 : Table S2.

Disorders of the musculoskeletal system

Osteoarthritis and rheumatoid arthritis.

In 2000, a trial comparing the upper gastrointestinal toxicity of rofecoxib, a selective COX-2 inhibitor, with that of naproxen in over 8000 patients with rheumatoid arthritis reported that rofecoxib was associated with significantly fewer clinically important upper gastrointestinal events. The significantly lower myocardial infarction rate in the naproxen group was attributed to a cardioprotective effect of naproxen (VIGOR trial, [ 105 ]). Concerns about the risk of selective COX-2-inhibitor-related cardiovascular events were raised as early as 2001 [ 106 ], and in 2002, an analysis including previously unpublished data from FDA reports of the VIGOR trial showed a statistically significant increase in serious cardiovascular thrombotic events in patients using rofecoxib [ 107 ].

In their article on access to pharmaceutical data at the FDA, Lurie and Zieve presented the example of the selective COX-2 inhibitor celecoxib: in the journal publication of a trial investigating gastrointestinal toxicity with celecoxib versus other pain medications, the study authors concluded that the drug was associated with a lower incidence of gastrointestinal ulcers after 6 months of therapy [ 108 , 109 ]. However, they failed to disclose that at the time of publication they had already received data for the full study period (12 months), which showed no advantage over the comparator drugs for the above outcome [ 109 ].

Disorders of the skin

Atopic dermatitis, evening primrose oil.

In his editorial "Evening primrose oil for atopic dermatitis - Time to say goodnight", Williams reported that he and his colleague, who had performed an individual patient meta-analysis of evening primrose oil for atopic dermatitis commissioned by the UK Department of Health, were not given permission to publish their report, which included 10 previously unpublished studies. After submission of the report to the Department of Health, Searle, the company then responsible for product marketing, required the authors and referees to sign a written statement that the contents of the report had not been leaked. Other research had not shown convincing evidence of a benefit, and in 2002 the UK Medicines Control Agency withdrew marketing authorisation [ 66 ].

Endocrine and metabolic disorders

Diabetes mellitus type 2.

Rosiglitazone

The US cardiologist Steven Nissen commented on safety issues surrounding rosiglitazone, a thiazolidinedione used to treat type 2 diabetes. After the drug's approval, the manufacturer GSK informed the FDA in August 2005 that it had performed a meta-analysis of 42 randomized clinical trials of rosiglitazone, which suggested a 31% increase in the risk of ischaemic cardiovascular complications. GSK posted this finding on its website. However, neither GSK nor the FDA widely disseminated these findings to the scientific community and the public [ 110 ]. The safety concerns were supported by a much-debated meta-analysis performed by Nissen and Wolski, who found that treatment with rosiglitazone was associated with a significantly increased risk of myocardial infarction and a borderline-significant increase in the risk of death from cardiovascular causes [ 111 ]. More examples of reporting bias in diabetes research are presented in Additional file 1 : Table S2.

Hypercholesterolaemia

Ezetimibe and simvastatin.

In his article "Controversies surround heart drug study" Mitka described a trial that compared the 2 anticholesterol drugs ezetimibe and simvastatin versus simvastatin alone in patients with heterozygous familial hypercholesterolaemia [ 112 ]. No statistically significant difference between treatment groups was found for the primary outcome (mean change in the carotid intima-media thickness) after 2 years [ 113 ]. The trial, which was sponsored by Merck & Co. and Schering-Plough, was concluded in April 2006. A delay of almost 2 years in the reporting of results followed amidst allegations that the manufacturers had attempted to change the study endpoints prior to the publication of results [ 112 ]. A further case of reporting bias in research on ezetimibe is included in Additional file 1 : Table S2.

Cerivastatin

Psaty et al conducted a review of the published literature on the statin cerivastatin and also analysed internal company documents that became available during litigation [ 114 ]. In the published literature, cerivastatin was associated with a substantially higher risk of rhabdomyolysis than other statins, particularly in cerivastatin-gemfibrozil combination therapy. Cerivastatin was launched in the US in 1998 by Bayer, and within 3 to 4 months internal documents indicated that there had been multiple cases of cerivastatin-gemfibrozil interactions. However, it took more than 18 months until a contraindication against the concomitant use of the 2 drugs was added to the package insert. The unpublished data available in 1999 also suggested an association between high-dose cerivastatin monotherapy and rhabdomyolysis. In 1999/2000, the company analysed FDA adverse event reporting system data, which suggested that, compared with atorvastatin, cerivastatin monotherapy substantially increased the risk of rhabdomyolysis. However, these findings were neither disseminated nor published. Cerivastatin was removed from the market in August 2001 [ 114 ]. In the same month, the German Ministry of Health accused Bayer of withholding vital information from its federal drug agency [ 115 ].

Thyroid disorders

Levothyroxine.

The Wall Street Journal reported the suppression of the results of a trial comparing the bioavailability of generic and brand-name levothyroxine products in the treatment of hypothyroidism; the investigators had concluded that the products were bioequivalent and in most cases interchangeable [ 116 , 117 ]. The trial was completed in 1990; over the next 7 years, the manufacturer of the brand-name product Synthroid, Boots Pharmaceuticals, successfully delayed publication [ 65 ]. The manuscript was finally published in 1997.

Menopausal symptoms

A study investigating tibolone, a synthetic steroid, in breast-cancer patients with climacteric complaints was terminated prematurely after it was shown that the drug significantly increased the risk of cancer recurrence [ 118 ]. According to the German TV programme Frontal 21, the manufacturer (Schering-Plough, formerly NV Organon) informed regulatory authorities and ethics committees, as well as study centres and participants, of this finding. However, the study results were not published until 1.5 years later [ 119 ].

Oncology is another area in which reporting bias is common [ 40 , 50 , 54 , 120 – 127 ]. A review of over 2000 oncology trials registered in ClinicalTrials.gov showed that less than 20% were available in PubMed, with substantial differences between trials sponsored by clinical trial networks and those sponsored by industry regarding both publication rates (59% vs. 6%) and the proportion of trials with positive results (50% vs. 75%) [ 50 ].

Ovarian cancer

Combination chemotherapy.

In one of the earliest publications measuring the effects of reporting bias, Simes compared published oncology trials with trials identified in cancer registries that had investigated the survival impact of initial alkylating agent (AA) therapy versus combination chemotherapy (CC) in advanced ovarian cancer. A meta-analysis of the published trials showed a significant survival advantage for CC; however, no such advantage was shown in the meta-analysis of the registered trials [ 121 ].

Multiple myeloma

The above study also investigated the survival impact of AA/prednisone versus CC in multiple myeloma. The meta-analysis of published trials demonstrated a significant survival advantage for CC. A survival benefit was also shown in the registered trials; however, its estimated magnitude was smaller [ 121 ]. A further example of reporting bias in cancer research is presented in Additional file 1 : Table S2.
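
To make the mechanism behind these comparisons concrete, the following minimal sketch (with invented effect sizes and standard errors, not data from Simes' study) pools trials with a standard inverse-variance fixed-effect model twice: once restricted to "published" trials and once over all "registered" trials. All numbers and names in the code are illustrative assumptions.

```python
import math

# (log odds ratio for death, standard error, published?) -- invented
# numbers for illustration only, not data from Simes' study [121]
trials = [
    (-0.45, 0.20, True),   # favourable result, published
    (-0.30, 0.25, True),   # favourable result, published
    (-0.05, 0.22, False),  # near-null result, registered but unpublished
    ( 0.10, 0.30, False),  # unfavourable result, registered but unpublished
]

def pooled(rows):
    """Inverse-variance fixed-effect pooled log odds ratio and its SE."""
    weights = [1.0 / se ** 2 for _, se, _ in rows]
    est = sum(w * y for w, (y, _, _) in zip(weights, rows)) / sum(weights)
    return est, math.sqrt(1.0 / sum(weights))

pub_est, pub_se = pooled([t for t in trials if t[2]])   # published only
all_est, all_se = pooled(trials)                        # all registered trials

print(f"published only: pooled OR = {math.exp(pub_est):.2f} (SE {pub_se:.2f})")
print(f"all registered: pooled OR = {math.exp(all_est):.2f} (SE {all_se:.2f})")
```

When favourable results are preferentially published, the published-only pooled estimate is systematically more favourable than the estimate from all registered trials, which is the pattern Simes observed.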

Disorders of the blood

Thalassaemia major, iron-chelation agents.

In his editorial "Thyroid storm", Rennie, among other things, discussed events surrounding a US researcher who had been involved in a trial investigating the effects of an oral iron-chelation agent in patients with thalassaemia major. She had initially published an optimistic article on the effects of this agent. However, further research showed a lack of effectiveness and a potential safety risk. She had signed a confidentiality agreement but, because of her concerns, decided to break confidentiality and report her results at a meeting; the manufacturer unsuccessfully attempted to block her presentation [ 128 ].

Bacterial, fungal, and viral infections

Oseltamivir.

The BMJ and Channel 4 News reported on the difficulties in obtaining data for an updated Cochrane review on neuraminidase inhibitors in influenza [ 129 ]. A previous analysis of oseltamivir, which was used in the prior Cochrane review [ 130 ], was based on 10 industry-sponsored trials of which only 2 had been published in peer-reviewed journals [ 131 ]. The manufacturer Roche initially declined to provide the necessary data to reproduce the analysis and then only provided a selection of files [ 129 ]. The Cochrane authors (Jefferson et al) subsequently concluded that "Evidence on the effects of oseltamivir in complications from lower respiratory tract infections, reported in our 2006 Cochrane review, may be unreliable" [ 132 ]. Roche has since agreed to provide public access to study summaries and password-protected access to the full study reports [ 129 ].

Anti-HIV agents

Ioannidis et al identified several examples of publication bias in trials investigating medications against HIV. At least 13 trials of 6 antiviral agents, including at least 3779 patients in total, had remained unpublished for more than 3 years after their presentation at a meeting or their completion. At least 9 of these trials had negative preliminary or final results. For example, 2 large negative isoprinosine trials remained unpublished, whilst a positive trial had been published in a high-impact journal [ 33 ]. Further examples of reporting bias in research on infections are presented in Additional file 1 : Table S2.

Acute trauma

Acute spinal cord injury, high-dose steroids.

Lenzer and Brownlee described the concerns of neurosurgeons regarding the use of high-dose steroids in patients with acute spinal cord injury. They noted that one neurosurgeon believed that several thousand patients had died as a result of this intervention; 2 surveys showed that many other neurosurgeons shared his concerns. The single available study, which had been funded by the NIH, was potentially flawed, and several researchers had unsuccessfully lobbied for the release of the underlying data [ 94 ].

Human albumin infusion

In the UK Health Committee's 2004-2005 report on the influence of the pharmaceutical industry, Chalmers mentioned a systematic review of human albumin solution, which is used in the treatment of shock, e.g. in patients with burns. The results showed no evidence that albumin was helpful and suggested that the intervention may actually be harmful. Although the UK Medicines Control Agency subsequently made slight modifications to the labelling, it kept confidential the evidence upon which the drug had been re-licensed in 1993 [ 133 , 134 ].

Vaccinations

HIV-1 vaccine.

McCarthy reported the case of an HIV-1 vaccine study that was terminated early when no difference in efficacy between the vaccine and placebo was found. After the lead investigators refused to include a post-hoc analysis, arguing that it had not been part of the study protocol and that invalid statistical methods had been used, the manufacturer, Immune Response, filed an (unsuccessful) claim seeking to prevent publication. After publication, the manufacturer filed a claim against the study's lead investigators and their universities, asking for US $7-10 million in damages [ 135 ].

Cancer vaccines

Rosenberg provided various examples of how researchers and companies withheld information on cancer vaccines for competitive reasons; for example, researchers were asked to keep confidential information that might have prevented cancer patients from receiving ineffective or even harmful doses of a new agent [ 75 ].

Other indications

Nocturnal leg cramps.

Man-Son-Hing et al performed a meta-analysis of trials investigating quinine for the treatment of nocturnal leg cramps, including unpublished individual patient data (IPD) obtained from the FDA. They showed that pooling only the published studies overestimated efficacy by more than 100%; in other words, the pooled effect estimate from the published studies alone was more than twice the estimate based on all available data [ 136 ]. Further examples of reporting bias in other indications are presented in Additional file 1 : Table S2.

Further research areas

Reporting bias has also been shown in other research areas, such as genetics [ 137 , 138 ], effects of passive smoking [ 139 , 140 ] and nicotine [ 141 , 142 ], and effects of air pollution [ 143 ].

The numerous examples identified show that reporting bias affects not only previously highlighted therapies such as antidepressants, pain medication, and cancer drugs, but a wide range of indications and interventions. Many cases involved the withholding of study data by manufacturers and regulatory agencies, or active attempts by manufacturers to suppress publication, which resulted either in substantial delays in publication (time-lag bias) or in no publication at all.

Limitations of the review

The review does not provide a complete overview of reporting bias in clinical research. Although our efforts to identify relevant literature went beyond those usually applied in narrative reviews, the review is non-systematic, and we emphasized this feature in the title. A substantial amount of relevant literature was available in-house, and further relevant literature was obtained by screening reference lists. We abandoned our initial plan to conduct a systematic review to identify cases of reporting bias when we noticed that many cases were not identifiable by screening the titles and abstracts of citations from bibliographic databases, but were "hidden" in the discussion sections of journal articles or mentioned in other sources such as newspapers, books, government reports or websites. As a search of bibliographic databases and the Internet using keywords related to reporting bias produces thousands of potentially relevant hits, we would have had to obtain and read an excessive number of full texts to ensure that we had not missed any examples. This was not feasible due to resource limitations. However, within the framework of a previous publication [ 144 ] we had conducted a literature search in PubMed, and some of the citations retrieved formed the basis of our literature pool for the current review. In spite of this non-systematic approach, we were able to identify dozens of cases of reporting bias in numerous indications.

Another potential limitation of the review is the validity of the sources describing cases of reporting bias. Although the majority of examples were identified in peer-reviewed journals, several cases were based on information from other sources such as newspaper articles and websites. However, we regard these sources as valuable too, as they provide a broader overview of reporting bias beyond the well-known examples and offer a starting point for more systematic research on the additional cases identified.

Effects of reporting bias

Published evidence tends to overestimate efficacy and underestimate safety risks, and the extent of this misestimation is often unknown. The few identified comparisons quantifying the overestimation of treatment effects in fully published versus unpublished or not fully published data showed wide variations in their results. Comparisons of pooled published data versus pooled published and unpublished FDA data showed treatment effects that were greater by 11% to 69% for individual antidepressants, by 32% for the class of antidepressants [ 16 ], and by over 100% for an agent to treat nocturnal leg cramps [ 136 ]. In addition, published studies have shown a 9% to 15% greater treatment effect than grey literature studies [ 145 , 146 ]. Thus, the conclusions of systematic reviews and meta-analyses based on published evidence alone may be misleading [ 5 , 7 , 38 ]. This is a serious concern, as these documents are increasingly used to support decision making in the health care system. Reporting bias may consequently result in inappropriate health care decisions by policy makers and clinicians, which harm patients, waste resources, and misguide future research [ 4 , 5 , 34 ].
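
As a rough illustration of why published evidence tends to overestimate efficacy, the following minimal simulation (a sketch with assumed parameters, not an analysis from any of the cited studies) "publishes" only those trials whose results reach statistical significance and compares the pooled estimates:

```python
import random
import statistics

random.seed(1)

# Assumed parameters for illustration -- not taken from the review
TRUE_EFFECT = 0.10   # true standardized treatment effect
SE = 0.08            # standard error of each trial's estimate
N_TRIALS = 500       # number of trials conducted

# Each trial's estimate scatters around the true effect
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_TRIALS)]

# Selective publication: only trials with a significant positive result
# (z > 1.96) make it into the literature
published = [e for e in estimates if e / SE > 1.96]

all_mean = statistics.mean(estimates)
pub_mean = statistics.mean(published)
print(f"true effect:      {TRUE_EFFECT:.3f}")
print(f"mean, all trials: {all_mean:.3f}")
print(f"mean, published:  {pub_mean:.3f} "
      f"(~{100 * (pub_mean - all_mean) / all_mean:.0f}% overestimate)")
```

With these assumed numbers, the published-only estimate roughly doubles the true effect, the same order of magnitude as the quinine example above; the point of the sketch is the mechanism, not the specific percentages.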

Trial registration and public access to study data

There is an ethical obligation to publish research findings [ 120 , 147 – 150 ]. For example, patients who participate in clinical trials do so in the belief that they are contributing to medical progress, which will only be the case if these trials are published. Deliberate non-reporting or selective reporting represents unethical behaviour and scientific misconduct [ 34 , 147 ]. Public access to study data may also help identify safety problems at an earlier stage; in the past, such problems were not always detected by regulatory authorities [ 151 – 153 ]. Two measures can help solve the issue of reporting bias: firstly, the mandatory and prospective registration of clinical trials, and secondly, the mandatory publication of full study results in results databases after study completion.

Non-industry initiatives

One of the first searchable computerized international registries of clinical trials was introduced in the United States in 1967; since then, several national and international trial registries have been created [ 154 ], such as the US government's trial registry and results database ClinicalTrials.gov (see Tse et al for an update on this registry [ 155 , 156 ]). The various controversies surrounding reporting bias, particularly the non-reporting of safety data, accelerated the movement both for trial registration and for the establishment of results databases. Numerous researchers, organizations, and regulatory and governmental authorities started various initiatives to achieve these goals [ 148 , 157 – 165 ].

In 2004, the International Committee of Medical Journal Editors (ICMJE) announced that it would make registration of clinical trials in a public registry a condition of consideration for publication [ 158 ]; this statement has since been updated [ 166 , 167 ].

In 2006, the WHO established the International Clinical Trials Registry Platform (ICTRP) in an initiative to bring national trial registries together in a global network providing a single point of access to registered trials [ 157 ]. However, to date no consensus has been found between the parties involved concerning which characteristics must be made publicly available at registration [ 168 ].

Section 801 of the US FDA Amendments Act 2007 (FDAAA, [ 169 ]) requires the registration at inception of all clinical trials involving a drug, biological product, or device regulated by the FDA. Trials must be registered on ClinicalTrials.gov, and a defined set of results must be posted in the same registry within 12 months of study completion. Phase I drug trials and early feasibility device trials are exempt. Non-compliance is subject to monetary fines [ 163 , 170 ].

In 2004, the European Agency for the Evaluation of Medicinal Products (now the European Medicines Agency) launched the European clinical trials database EudraCT (eudract.emea.europa.eu) to provide national authorities with a common set of information on clinical trials conducted in the EU. The database was initially supposed to be available only to the responsible authorities of the member states, the European Commission, and the European Medicines Agency [ 171 ]. In 2006, the regulation on medicinal products for paediatric use was published, which required that information about European paediatric clinical trials of investigational medicinal products be made publicly available on EudraCT [ 172 , 173 ], and in February 2009, the European Commission published a guideline including the list of data fields to be made public [ 174 ]. On the same date, a similar list was published for all trials [ 175 ]. However, the legal obligation to publish information on trials in adults is not fully clear, and it is also unclear when all relevant information from EudraCT will be made publicly accessible.

With the introduction of the above-mentioned legislation, regulatory agencies are on the one hand helping to solve the problem of reporting bias; on the other hand, they are also part of the problem: several of the examples identified refer to the non-publication or active withholding of study data by regulatory agencies [ 83 , 94 , 109 , 133 ]. This is partly due to existing confidentiality regulations such as Exemption 4 of the US Freedom of Information Act [ 176 ]. To solve the problems resulting from this situation, current legislation has to be changed to allow regulatory agencies to publish comprehensive information on study methods and results. In his essay "A taxpayer-funded clinical trials registry and results database", Turner called for increased access to FDA information sources, which would at least enable the assessment of drugs marketed in the USA [ 92 ]. Although the FDA posts selected reviews of new drug applications (NDAs) on its website after the approval process, following the Electronic Freedom of Information Act [ 177 ], the availability of these reviews is limited [ 92 ]. Moreover, according to the FDAAA, the results of older trials of approved drugs, or of drugs that were never approved, need not be disclosed [ 170 ], which is why a retrospective registry and results database is needed [ 178 ].

Industry initiatives

In 2002, the member companies of the US Pharmaceutical Research and Manufacturers of America (PhRMA) committed themselves to the registration of all hypothesis-testing clinical trials at initiation and to the timely disclosure of summary results, regardless of outcome [ 179 , 180 ]. PhRMA also launched the clinical study results database ClinicalStudyResults.org in 2004. In 2005, a similar commitment was made by several pharmaceutical industry associations [ 181 ], which has since been updated [ 182 ]. Following the legal settlement in the paroxetine case, GSK established a trial registry on its website gsk-clinicalstudyregister.com, and other large companies have followed. In 2008, the German Association of Research-Based Pharmaceutical Companies (VFA) published a position paper on the issue of publication bias, claiming that, because of the voluntary self-commitment of the pharmaceutical industry and the introduction of legislation on the reporting of study data, publication bias had become a "historical" topic [ 183 ]. However, even after the update of the position paper in January 2009 [ 184 ], further attempts by drug companies to withhold study data have occurred in Germany alone [ 69 ], which shows that voluntary self-commitment is insufficient.

Conclusions

Reporting bias is widespread in the medical literature and has harmed patients in the past. Mandatory prospective registration of trials and public access to study data via results databases need to be introduced worldwide. This would help fulfil ethical obligations towards patients by enabling proactive publication and independent review of clinical trial data, and would ensure a basis for fully informed decision making in the health care system. Otherwise, clinical decision making based on the "best evidence" will remain an illusion.

References

Green S, Higgins S, editors: Glossary. Cochrane Handbook for Systematic Reviews of Interventions 4.2.5. Last update May 2005 [accessed 22 Feb 2010], http://www.cochrane.org/resources/handbook/

Sterne J, Egger M, Moher D: Addressing reporting biases. Cochrane handbook for systematic reviews of interventions. Edited by: Higgins JPT, Green S. 2008, Chichester: Wiley, 297-334.

Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, Decullier E, Easterbrook PJ, Von Elm E, Gamble C, Ghersi D, Ioannidis JP, Simes J, Williamson PR: Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE. 2008, 3: e3081-10.1371/journal.pone.0003081.

Blumle A, Antes G, Schumacher M, Just H, Von Elm E: Clinical research projects at a German medical faculty: follow-up from ethical approval to publication and citation by others. J Med Ethics. 2008, 34: e20-10.1136/jme.2008.024521.

Von Elm E, Rollin A, Blumle A, Huwiler K, Witschi M, Egger M: Publication and non-publication of clinical trials: longitudinal study of applications submitted to a research ethics committee. Swiss Med Wkly. 2008, 138: 197-203.

Dickersin K, Min YI, Meinert CL: Factors influencing publication of research results: follow-up of applications submitted to two institutional review boards. JAMA. 1992, 267: 374-378. 10.1001/jama.267.3.374.

Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR: Publication bias in clinical research. Lancet. 1991, 337: 867-872. 10.1016/0140-6736(91)90201-Y.

Stern JM, Simes RJ: Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ. 1997, 315: 640-645.

Pich J, Carne X, Arnaiz JA, Gomez B, Trilla A, Rodes J: Role of a research ethics committee in follow-up and publication of results. Lancet. 2003, 361: 1015-1016. 10.1016/S0140-6736(03)12799-7.

Decullier E, Lheritier V, Chapuis F: Fate of biomedical research protocols and publication bias in France: retrospective cohort study. BMJ. 2005, 331: 19-24. 10.1136/bmj.38488.385995.8F.

Lee K, Bacchetti P, Sim I: Publication of clinical trials supporting successful new drug applications: a literature analysis. PLoS Med. 2008, 5: e191-10.1371/journal.pmed.0050191.

Hemminki E: Study of information submitted by drug companies to licensing authorities. Br Med J. 1980, 280: 833-836. 10.1136/bmj.280.6217.833.

MacLean CH, Morton SC, Ofman JJ, Roth EA, Shekelle PG: How useful are unpublished data from the Food and Drug Administration in meta-analysis?. J Clin Epidemiol. 2003, 56: 44-51. 10.1016/S0895-4356(02)00520-6.

Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B: Evidence b(i)ased medicine: selective reporting from studies sponsored by pharmaceutical industry; review of studies in new drug applications. BMJ. 2003, 326: 1171-1173. 10.1136/bmj.326.7400.1171.

Benjamin DK, Smith PB, Murphy MD, Roberts R, Mathis L, Avant D, Califf RM, Li JS: Peer-reviewed publication of clinical trials completed for pediatric exclusivity. JAMA. 2006, 296: 1266-1273. 10.1001/jama.296.10.1266.

Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R: Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008, 358: 252-260. 10.1056/NEJMsa065779.

Bardy AH: Bias in reporting clinical trials. Br J Clin Pharmacol. 1998, 46: 147-150. 10.1046/j.1365-2125.1998.00759.x.

Ioannidis JP: Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA. 1998, 279: 281-286. 10.1001/jama.279.4.281.

Hopewell S, Clarke M, Stewart L, Tierney J: Time to publication for results of clinical trials. Cochrane Database Syst Rev. 2007, 2: MR000011.

Institute for Quality and Efficiency in Health Care: General methods: version 3.0. Last update 27 May 2008 [accessed 22 Feb 2010], http://www.iqwig.de/download/IQWiG_General_methods_V-3-0.pdf

National Institute for Health and Clinical Excellence: Guide to the methods of technology appraisal. London. 2008

Scherer RW, Langenberg P, Von Elm E: Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2007, 2: MR000005.

Altman D: Outcome reporting bias in meta-analyses. Last update 2007 [accessed 24 Feb 2010], http://www.chalmersresearch.com/bmg/docs/t2p1.pdf

Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG: Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004, 291: 2457-2465. 10.1001/jama.291.20.2457.

Chan AW, Altman DG: Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ. 2005, 330: 753-10.1136/bmj.38356.424606.8F.

Rising K, Bacchetti P, Bero L: Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation. PLoS Med. 2008, 5: e217-10.1371/journal.pmed.0050217.

Chan AW, Krleza-Jeric K, Schmid I, Altman DG: Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ. 2004, 171: 735-740.

Al-Marzouki S, Roberts I, Evans S, Marshall T: Selective reporting in clinical trials: analysis of trial protocols accepted by The Lancet. Lancet. 2008, 372: 201-10.1016/S0140-6736(08)61060-0.

Ioannidis JP, Lau J: Completeness of safety reporting in randomized trials: an evaluation of 7 medical areas. JAMA. 2001, 285: 437-443. 10.1001/jama.285.4.437.

Hazell L, Shakir SA: Under-reporting of adverse drug reactions: a systematic review. Drug Saf. 2006, 29: 385-396. 10.2165/00002018-200629050-00003.

Bonhoeffer J, Zumbrunn B, Heininger U: Reporting of vaccine safety data in publications: systematic review. Pharmacoepidemiol Drug Saf. 2005, 14: 101-106. 10.1002/pds.979.

Loke YK, Derry S: Reporting of adverse drug reactions in randomised controlled trials: a systematic survey. BMC Clin Pharmacol. 2001, 1: 3-10.1186/1472-6904-1-3.

Ioannidis JP, Cappelleri JC, Sacks HS, Lau J: The relationship between study design, results, and reporting of randomized clinical trials of HIV infection. Control Clin Trials. 1997, 18: 431-444. 10.1016/S0197-2456(97)00097-4.

Dickersin K: Reporting and other biases in studies of Neurontin for migraine, psychiatric/bipolar disorders, nociceptive pain, and neuropathic pain. Last update 10 Aug 2008 [accessed 26 Feb 2010], http://dida.library.ucsf.edu/pdf/oxx18r10

Nassir Ghaemi S, Shirzadi AA, Filkowski M: Publication bias and the pharmaceutical industry: the case of lamotrigine in bipolar disorder. Medscape J Med. 2008, 10: 211-

Sterling T: Publication decisions and their possible effects on inferences drawn from tests of significances. J Am Stat Assoc. 1959, 54: 30-34. 10.2307/2282137.

Hopewell S, Loudon K, Clarke MJ, Oxman AD, Dickersin K: Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database Syst Rev. 2009, 1: MR000006.

Song F, Eastwood AJ, Gilbody S, Duley L, Sutton AJ: Publication and related biases. Health Technol Assess. 2000, 4: 1-115.

Dickersin K, Chan S, Chalmers TC, Sacks HS, Smith H: Publication bias and clinical trials. Control Clin Trials. 1987, 8: 343-353. 10.1016/0197-2456(87)90155-3.

Krzyzanowska MK, Pintilie M, Tannock IF: Factors associated with failure to publish large randomized trials presented at an oncology meeting. JAMA. 2003, 290: 495-501. 10.1001/jama.290.4.495.

Timmer A, Hilsden RJ, Cole J, Hailey D, Sutherland LR: Publication bias in gastroenterological research: a retrospective cohort study based on abstracts submitted to a scientific meeting. BMC Med Res Methodol. 2002, 2: 7-10.1186/1471-2288-2-7.

Tramer MR, Reynolds DJ, Moore RA, McQuay HJ: Impact of covert duplicate publication on meta-analysis: a case study. BMJ. 1997, 315: 635-640.

Gotzsche PC: Reference bias in reports of drug trials. Br Med J (Clin Res Ed). 1987, 295: 654-656. 10.1136/bmj.295.6599.654.

Kjaergard LL, Gluud C: Citation bias of hepato-biliary randomized clinical trials. J Clin Epidemiol. 2002, 55: 407-410. 10.1016/S0895-4356(01)00513-3.

Ravnskov U: Quotation bias in reviews of the diet-heart idea. J Clin Epidemiol. 1995, 48: 713-719. 10.1016/0895-4356(94)00222-C.

Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G: Language bias in randomised controlled trials published in English and German. Lancet. 1997, 350: 326-329. 10.1016/S0140-6736(97)02419-7.

Lee KP, Boyd EA, Holroyd-Leduc JM, Bacchetti P, Bero LA: Predictors of publication: characteristics of submitted manuscripts associated with acceptance at major biomedical journals. Med J Aust. 2006, 184: 621-626.

Callaham ML, Wears RL, Weber EJ, Barton C, Young G: Positive-outcome bias and other limitations in the outcome of research abstracts submitted to a scientific meeting. JAMA. 1998, 280: 254-257. 10.1001/jama.280.3.254.

Lexchin J, Bero LA, Djulbegovic B, Clark O: Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ. 2003, 326: 1167-1170. 10.1136/bmj.326.7400.1167.

Ramsey S, Scoggins J: Commentary: practicing on the tip of an information iceberg? Evidence of underpublication of registered clinical trials in oncology. Oncologist. 2008, 13: 925-929. 10.1634/theoncologist.2008-0133.

Olson CM, Rennie D, Cook D, Dickersin K, Flanagin A, Hogan JW, Zhu Q, Reiling J, Pace B: Publication bias in editorial decision making. JAMA. 2002, 287: 2825-2828. 10.1001/jama.287.21.2825.

Okike K, Kocher MS, Mehlman CT, Heckman JD, Bhandari M: Publication bias in orthopaedic research: an analysis of scientific factors associated with publication in the Journal of Bone and Joint Surgery (American Volume). J Bone Joint Surg Am. 2008, 90: 595-601. 10.2106/JBJS.G.00279.

Dickersin K, Min YI: NIH clinical trials and publication bias. Online J Curr Clin Trials. 1993, Doc No 50: [4967 words; 53 paragraphs].

Hartmann M, Knoth H, Schulz D, Knoth S: Industry-sponsored economic studies in oncology vs studies sponsored by nonprofit organisations. Br J Cancer. 2003, 89: 1405-1408. 10.1038/sj.bjc.6601308.

Bekelman JE, Li Y, Gross CP: Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA. 2003, 289: 454-465. 10.1001/jama.289.4.454.

Sismondo S: Pharmaceutical company funding and its consequences: a qualitative systematic review. Contemp Clin Trials. 2008, 29: 109-113. 10.1016/j.cct.2007.08.001.

Jorgensen AW, Hilden J, Gotzsche PC: Cochrane reviews compared with industry supported meta-analyses and other meta-analyses of the same drugs: systematic review. BMJ. 2006, 333: 782-10.1136/bmj.38973.444699.0B.

Liss H: Publication bias in the pulmonary/allergy literature: effect of pharmaceutical company sponsorship. Isr Med Assoc J. 2006, 8: 451-454.

Ridker PM, Torres J: Reported outcomes in major cardiovascular clinical trials funded by for-profit and not-for-profit organizations: 2000-2005. JAMA. 2006, 295: 2270-2274. 10.1001/jama.295.19.2270.

Als-Nielsen B, Chen W, Gluud C, Kjaergard LL: Association of funding and conclusions in randomized drug trials: a reflection of treatment effect or adverse events?. JAMA. 2003, 290: 921-928. 10.1001/jama.290.7.921.

Perlis CS, Harwood M, Perlis RH: Extent and impact of industry sponsorship conflicts of interest in dermatology research. J Am Acad Dermatol. 2005, 52: 967-971. 10.1016/j.jaad.2005.01.020.

Bhandari M, Busse JW, Jackowski D, Montori VM, Schünemann H, Sprague S, Mears D, Schemitsch EH, Heels-Ansdell D, Devereaux PJ: Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. CMAJ. 2004, 170: 477-480.

Kjaergard LL, Als-Nielsen B: Association between competing interests and authors' conclusions: epidemiological study of randomised clinical trials published in the BMJ. BMJ. 2002, 325: 249-10.1136/bmj.325.7358.249.

Lauritsen K, Havelund T, Laursen LS, Rask-Madsen J: Withholding unfavourable results in drug company sponsored clinical trials. Lancet. 1987, 1: 1091-10.1016/S0140-6736(87)90515-0.

Wise J: Research suppressed for seven years by drug company. BMJ. 1997, 314: 1145-

Williams HC: Evening primrose oil for atopic dermatitis. BMJ. 2003, 327: 1358-1359. 10.1136/bmj.327.7428.1358.

Henry DA, Kerridge IH, Hill SR, McNeill PM, Doran E, Newby DA, Henderson KM, Maguire J, Stokes BJ, Macdonald GJ, Day RO: Medical specialists and pharmaceutical industry-sponsored research: a survey of the Australian experience. Med J Aust. 2005, 182: 557-560.

Gotzsche PC, Hrobjartsson A, Johansen HK, Haahr MT, Altman DG, Chan AW: Constraints on publication rights in industry-initiated clinical trials. JAMA. 2006, 295: 1645-1646. 10.1001/jama.295.14.1645.

Stafford N: German agency refuses to rule on drug's benefits until Pfizer discloses all trial results. BMJ. 2009, 338: b2521-10.1136/bmj.b2521.

Whittington CJ, Kendall T, Fonagy P, Cottrell D, Cotgrove A, Boddington E: Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data. Lancet. 2004, 363: 1341-1345. 10.1016/S0140-6736(04)16043-1.

Cowley AJ, Skene A, Stainer K, Hampton JR: The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: an example of publication bias. Int J Cardiol. 1993, 40: 161-166. 10.1016/0167-5273(93)90279-P.

Moore TJ: Deadly medicine: why tens of thousands of heart patients died in America's worst drug disaster. 1995, New York: Simon & Schuster

Mathews A, Martinez B: E-mails suggest Merck knew Vioxx's dangers at early stage. Wall Street Journal. 2004, A1-

Psaty BM, Kronmal RA: Reporting mortality findings in trials of rofecoxib for Alzheimer disease or cognitive impairment: a case study based on documents from rofecoxib litigation. JAMA. 2008, 299: 1813-1817. 10.1001/jama.299.15.1813.

Rosenberg SA: Secrecy in medical research. N Engl J Med. 1996, 334: 392-394. 10.1056/NEJM199602083340610.

Baker CB, Johnsrud MT, Crismon ML, Rosenheck RA, Woods SW: Quantitative analysis of sponsorship bias in economic studies of antidepressants. Br J Psychiatry. 2003, 183: 498-506. 10.1192/bjp.183.6.498.

Moncrieff J: Clozapine v. conventional antipsychotic drugs for treatment-resistant schizophrenia: a re-examination. Br J Psychiatry. 2003, 183: 161-166. 10.1192/bjp.183.2.161.

Montgomery JH, Byerly M, Carmody T, Li B, Miller DR, Varghese F, Holland R: An analysis of the effect of funding source in randomized clinical trials of second generation antipsychotics for the treatment of schizophrenia. Control Clin Trials. 2004, 25: 598-612. 10.1016/j.cct.2004.09.002.

Procyshyn RM, Chau A, Fortin P, Jenkins W: Prevalence and outcomes of pharmaceutical industry-sponsored clinical trials involving clozapine, risperidone, or olanzapine. Can J Psychiatry. 2004, 49: 601-606.

Perlis RH, Perlis CS, Wu Y, Hwang C, Joseph M, Nierenberg AA: Industry sponsorship and financial conflict of interest in the reporting of clinical trials in psychiatry. Am J Psychiatry. 2005, 162: 1957-1960. 10.1176/appi.ajp.162.10.1957.

Heres S, Davis J, Maino K, Jetzinger E, Kissling W, Leucht S: Why olanzapine beats risperidone, risperidone beats quetiapine, and quetiapine beats olanzapine: an exploratory analysis of head-to-head comparison studies of second-generation antipsychotics. Am J Psychiatry. 2006, 163: 185-194. 10.1176/appi.ajp.163.2.185.

Kelly RE, Cohen LJ, Semple RJ, Bialer P, Lau A, Bodenheimer A, Neustadter E: Relationship between drug company funding and outcomes of clinical psychiatric research. Psychol Med. 2006, 36: 1647-1656. 10.1017/S0033291706008567.

Kirsch I, Deacon BJ, Huedo-Medina TB, Scoboria A, Moore TJ, Johnson BT: Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Med. 2008, 5: e45-10.1371/journal.pmed.0050045.

Office of the Attorney General: Major pharmaceutical firm concealed drug information. Last update 02 Jun 2004 [accessed 24 Feb 2010], http://www.oag.state.ny.us/media_center/2004/jun/jun2b_04.html

Office of the Attorney General: Settlement sets new standard for release of drug information. Last update 26 Aug 2004 [accessed 26 Feb 2010], http://www.oag.state.ny.us/media_center/2004/aug/aug26a_04.html

Gibson L: GlaxoSmithKline to publish clinical trials after US lawsuit. BMJ. 2004, 328: 1513-10.1136/bmj.328.7455.1513-a.

Institute for Quality and Efficiency in Health Care: Bupropion, mirtazapine and reboxetine in the treatment of depression: executive summary of preliminary report; commission no A05-20C. Last update 29 May 2009 [accessed 26 Feb 2010], http://www.iqwig.de/download/A05-20C_Executive_summary_Bupropion_mirtazapine_and_reboxetine_in_the_treatment_of_depression.pdf

Institute for Quality and Efficiency in Health Care: Antidepressants: benefit of reboxetine not proven. Last update 24 Nov 2009 [accessed 26 Feb 2010], http://www.iqwig.de/antidepressants-benefit-of-reboxetine-not-proven.981.en.html

Abramson J: Expert report. Last update 11 Aug 2008 [accessed 26 Feb 2010], http://dida.library.ucsf.edu/pdf/oxx18v10

Vedula SS, Bero L, Scherer RW, Dickersin K: Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med. 2009, 361: 1963-1971. 10.1056/NEJMsa0906126.

Vedantam S: A silenced drug study creates an uproar. Washington Post. 2009, A01-

Turner EH: A taxpayer-funded clinical trials registry and results database. PLoS Med. 2004, 1: e60-10.1371/journal.pmed.0010060.

Singh D: Merck withdraws arthritis drug worldwide. BMJ. 2004, 329: 816-10.1136/bmj.329.7470.816-a.

Lenzer J, Brownlee S: An untold story?. BMJ. 2008, 336: 532-534. 10.1136/bmj.39504.662685.0F.

Waknine Y: Bextra withdrawn from market. Medscape Today [Online]. 2005, http://www.medscape.com/viewarticle/502642

Hiatt WR: Observational studies of drug safety--aprotinin and the absence of transparency. N Engl J Med. 2006, 355: 2171-2173. 10.1056/NEJMp068252.

Tuffs A: Bayer withdraws heart surgery drug. BMJ. 2007, 335: 1015-10.1136/bmj.39395.644826.DB.

Furberg CD: Effect of antiarrhythmic drugs on mortality after myocardial infarction. Am J Cardiol. 1983, 52: 32C-36C. 10.1016/0002-9149(83)90629-X.

Antes G: Tödliche Medizin. Unpublizierte Studien - harmlos? [Fatal medicine. Unpublished studies - harmless?]. MMW Fortschr Med. 2006, 148: 8-

Hine LK, Laird N, Hewitt P, Chalmers TC: Meta-analytic evidence against prophylactic use of lidocaine in acute myocardial infarction. Arch Intern Med. 1989, 149: 2694-2698. 10.1001/archinte.149.12.2694.

MacMahon S, Collins R, Peto R, Koster RW, Yusuf S: Effects of prophylactic lidocaine in suspected acute myocardial infarction: an overview of results from the randomized, controlled trials. JAMA. 1988, 260: 1910-1916. 10.1001/jama.260.13.1910.

Preliminary report: effect of encainide and flecainide on mortality in a randomized trial of arrhythmia suppression after myocardial infarction. The Cardiac Arrhythmia Suppression Trial (CAST) Investigators. N Engl J Med. 1989, 321: 406-412.

Barbehenn E, Lurie P, Wolfe SM: Alosetron for irritable bowel syndrome. Lancet. 2000, 356: 2009-2010. 10.1016/S0140-6736(05)72978-0.

Moynihan R: Alosetron: a case study in regulatory capture, or a victory for patients' rights?. BMJ. 2002, 325: 592-595. 10.1136/bmj.325.7364.592.

Bombardier C, Laine L, Reicin A, Shapiro D, Burgos-Vargas R, Davis B, Day R, Ferraz MB, Hawkey CJ, Hochberg MC, Kvien TK, Schnitzer TJ, VIGOR Study Group: Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. N Engl J Med. 2000, 343: 1520-1528. 10.1056/NEJM200011233432103.

Mukherjee D, Nissen SE, Topol EJ: Risk of cardiovascular events associated with selective COX-2 inhibitors. JAMA. 2001, 286: 954-959. 10.1001/jama.286.8.954.

McCormack JP, Rangno R: Digging for data from the COX-2 trials. CMAJ. 2002, 166: 1649-1650.

Silverstein FE, Faich G, Goldstein JL, Simon LS, Pincus T, Whelton A, Makuch R, Eisen G, Agrawal NM, Stenson WF, Burr AM: Gastrointestinal toxicity with celecoxib vs nonsteroidal anti-inflammatory drugs for osteoarthritis and rheumatoid arthritis: the CLASS study: a randomized controlled trial. JAMA. 2000, 284: 1247-1255. 10.1001/jama.284.10.1247.

Lurie P, Zieve A: Sometimes the silence can be like the thunder: access to pharmaceutical data at the FDA. Law Contemp Probl. 2008, 69: 85-97.

Nissen S, Califf R: A conversation about rosiglitazone. Medscape Diabetes & Endocrinology [Online]. 2007, http://www.medscape.com/viewarticle/561666

Nissen SE, Wolski K: Effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular causes. N Engl J Med. 2007, 356: 2457-2471. 10.1056/NEJMoa072761.

Mitka M: Controversies surround heart drug study: questions about Vytorin and trial sponsors' conduct. JAMA. 2008, 299: 885-887. 10.1001/jama.299.8.885.

Kastelein JJ, Akdim F, Stroes ES, Zwinderman AH, Bots ML, Stalenhoef AF, Visseren FL, Sijbrands EJ, Trip MD, Stein EA, Duivenvoorden R, Veltri EP, Marais AD, de Groot E, ENHANCE Investigators: Simvastatin with or without ezetimibe in familial hypercholesterolemia. N Engl J Med. 2008, 358: 1431-1443. 10.1056/NEJMoa0800742.

Psaty BM, Furberg CD, Ray WA, Weiss NS: Potential for conflict of interest in the evaluation of suspected adverse drug reactions: use of cerivastatin and risk of rhabdomyolysis. JAMA. 2004, 292: 2622-2631. 10.1001/jama.292.21.2622.

Tuffs A: Bayer faces potential fine over cholesterol lowering drug. BMJ. 2001, 323: 415-10.1136/bmj.323.7310.415.

King RT: Bitter pill: how a drug firm paid for university study, then undermined it. Wall Street Journal. 1996, 1: A13-

Dong BJ, Hauck WW, Gambertoglio JG, Gee L, White JR, Bubp JL, Greenspan FS: Bioequivalence of generic and brand-name levothyroxine products in the treatment of hypothyroidism. JAMA. 1997, 277: 1205-1213. 10.1001/jama.277.15.1205.

Kenemans P, Bundred NJ, Foidart JM, Kubista E, von Schoultz B, Sismondi P, Vassilopoulou-Sellin R, Yip CH, Egberts J, Mol-Arts M, Mulder R, van Os S, Beckmann MW, LIBERATE Study Group: Safety and efficacy of tibolone in breast-cancer patients with vasomotor symptoms: a double-blind, randomised, non-inferiority trial. Lancet Oncol. 2009, 10: 135-146. 10.1016/S1470-2045(08)70341-3.

Lippegaus O, Prokscha S, Thimme C: Verharmloste Gefahren. Krebs durch Hormonbehandlung [Trivialised dangers. Cancer caused by hormone therapy]. Last update 2009 [accessed 26 Feb 2010], http://frontal21.zdf.de/ZDFde/inhalt/11/0,1872,7593675,00.html

Doroshow JH: Commentary: publishing cancer clinical trial results: a scientific and ethical imperative. Oncologist. 2008, 13: 930-932. 10.1634/theoncologist.2008-0168.

Simes RJ: Publication bias: the case for an international registry of clinical trials. J Clin Oncol. 1986, 4: 1529-1541.

Takeda A, Loveman E, Harris P, Hartwell D, Welch K: Time to full publication of studies of anti-cancer medicines for breast cancer and the potential for publication bias: a short systematic review. Health Technol Assess. 2008, 12: iii, ix-x, 1-46.

Peppercorn J, Blood E, Winer E, Partridge A: Association between pharmaceutical involvement and outcomes in breast cancer clinical trials. Cancer. 2007, 109: 1239-1246. 10.1002/cncr.22528.

Kyzas PA, Loizou KT, Ioannidis JP: Selective reporting biases in cancer prognostic factor studies. J Natl Cancer Inst. 2005, 97: 1043-1055.

Begg CB, Pocock SJ, Freedman L, Zelen M: State of the art in comparative cancer clinical trials. Cancer. 1987, 60: 2811-2815. 10.1002/1097-0142(19871201)60:11<2811::AID-CNCR2820601136>3.0.CO;2-P.

Kyzas PA, Denaxa-Kyza D, Ioannidis JP: Almost all articles on cancer prognostic markers report statistically significant results. Eur J Cancer. 2007, 43: 2559-2579.

Manheimer E, Anderson D: Survey of public information about ongoing clinical trials funded by industry: evaluation of completeness and accessibility. BMJ. 2002, 325: 528-531. 10.1136/bmj.325.7363.528.

Rennie D: Thyroid storm. JAMA. 1997, 277: 1238-1243. 10.1001/jama.277.15.1238.

Godlee F, Clarke M: Why don't we have all the evidence on oseltamivir?. BMJ. 2009, 339: b5351-10.1136/bmj.b5351.

Jefferson TO, Demicheli V, Di Pietrantonj C, Jones M, Rivetti D: Neuraminidase inhibitors for preventing and treating influenza in healthy adults. Cochrane Database Syst Rev. 2006, 3: CD001265-

Kaiser L, Wat C, Mills T, Mahoney P, Ward P, Hayden F: Impact of oseltamivir treatment on influenza-related lower respiratory tract complications and hospitalizations. Arch Intern Med. 2003, 163: 1667-1672. 10.1001/archinte.163.14.1667.

Jefferson T, Jones M, Doshi P, Del Mar C: Neuraminidase inhibitors for preventing and treating influenza in healthy adults: systematic review and meta-analysis. BMJ. 2009, 339: b5106-10.1136/bmj.b5106.

House of Commons, Health Committee: The influence of the pharmaceutical industry; formal minutes, oral and written evidence. Report of session 2004-05, vol 4. 2005, London: Stationery Office, 2.

Cochrane Injuries Group Albumin Reviewers: Human albumin administration in critically ill patients: systematic review of randomised controlled trials. BMJ. 1998, 317: 235-240.

McCarthy M: Company sought to block paper's publication. Lancet. 2000, 356: 1659-10.1016/S0140-6736(00)03166-4.

Man-Son-Hing M, Wells G, Lau A: Quinine for nocturnal leg cramps: a meta-analysis including unpublished data. J Gen Intern Med. 1998, 13: 600-606. 10.1046/j.1525-1497.1998.00182.x.

Marshall E: Is data-hoarding slowing the assault on pathogens?. Science. 1997, 275: 777-780. 10.1126/science.275.5301.777.

Campbell EG, Clarridge BR, Gokhale M, Birenbaum L, Hilgartner S, Holtzman NA, Blumenthal D: Data withholding in academic genetics: evidence from a national survey. JAMA. 2002, 287: 473-480. 10.1001/jama.287.4.473.

Misakian AL, Bero LA: Publication bias and research on passive smoking: comparison of published and unpublished studies. JAMA. 1998, 280: 250-253. 10.1001/jama.280.3.250.

Barnes DE, Bero LA: Why review articles on the health effects of passive smoking reach different conclusions. JAMA. 1998, 279: 1566-1570. 10.1001/jama.279.19.1566.

Hilts PJ: Philip Morris blocked paper showing addiction, panel finds. New York Times. 1994, A7-

Hilts PJ: Scientists say Philip Morris withheld nicotine findings. New York Times. 1994, A1-A7.

Anderson HR, Atkinson RW, Peacock JL, Sweeting MJ, Marston L: Ambient particulate matter and health effects: publication bias in studies of short-term associations. Epidemiology. 2005, 16: 155-163. 10.1097/01.ede.0000152528.22746.0f.

Peinemann F, McGauran N, Sauerland S, Lange S: Negative pressure wound therapy: potential publication bias caused by lack of access to unpublished study results data. BMC Med Res Methodol. 2008, 8: 4-10.1186/1471-2288-8-4.

McAuley L, Pham B, Tugwell P, Moher D: Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses?. Lancet. 2000, 356: 1228-1231. 10.1016/S0140-6736(00)02786-0.

Hopewell S, McDonald S, Clarke M, Egger M: Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Syst Rev. 2007, 2: MR000010.

Chalmers I: Underreporting research is scientific misconduct. JAMA. 1990, 263: 1405-1408. 10.1001/jama.263.10.1405.

World Medical Association: Declaration of Helsinki: ethical principles for medical research involving human subjects. Last update Oct 2008 [accessed 26 Feb 2010], http://www.wma.net/en/30publications/10policies/b3/index.html

Pearn J: Publication: an ethical imperative. BMJ. 1995, 310: 1313-1315.

The Nuremberg code. Trials of war criminals before the Nuremberg Military Tribunals under Control Council Law No. 10. 1949, Washington, D.C.: US Government Printing Office, 2: 181-182.

Healy D: Did regulators fail over selective serotonin reuptake inhibitors?. BMJ. 2006, 333: 92-95. 10.1136/bmj.333.7558.92.

Topol EJ: Failing the public health: rofecoxib, Merck, and the FDA. N Engl J Med. 2004, 351: 1707-1709. 10.1056/NEJMp048286.

Rennie D: When evidence isn't: trials, drug companies and the FDA. J Law Policy. 2007, 15: 991-1012.

Dickersin K, Rennie D: Registering clinical trials. JAMA. 2003, 290: 516-523. 10.1001/jama.290.4.516.

Tse T, Williams RJ, Zarin DA: Update on Registration of Clinical Trials in ClinicalTrials.gov. Chest. 2009, 136: 304-305. 10.1378/chest.09-1219.

Tse T, Williams RJ, Zarin DA: Reporting "basic results" in ClinicalTrials.gov. Chest. 2009, 136: 295-303. 10.1378/chest.08-3022.

WHO clinical trials initiative to protect the public. Bull World Health Organ. 2006, 84: 10-11.


Acknowledgements

The authors thank Dirk Eyding, Daniel Fleer, Elke Hausner, Regine Potthast, Andrea Steinzen, and Siw Waffenschmidt for helping to screen reference lists and Verena Wekemann for formatting citations.

Funding source

This work was supported by the German Institute for Quality and Efficiency in Health Care. All authors are employees of the Institute.

Author information

Authors and affiliations

Institute for Quality and Efficiency in Health Care, Dillenburger Str 27, 51105, Cologne, Germany

Natalie McGauran, Beate Wieseler, Julia Kreis, Yvonne-Beatrice Schüler, Heike Kölsch & Thomas Kaiser


Corresponding author

Correspondence to Natalie McGauran.

Additional information

Competing interests

Non-financial competing interests: All authors are employees of the German Institute for Quality and Efficiency in Health Care. In order to produce unbiased HTA reports, the Institute depends on access to all of the relevant data on the topic under investigation. We therefore support the mandatory worldwide establishment of trial registries and study results databases.

Authors' contributions

NM and BW had the idea for the manuscript. NM, HK, YBS, and JK screened reference lists. JK and YBS reviewed titles and abstracts of potentially relevant citations identified in the screening process. NM extracted relevant examples from the full-text publications. BW and TK checked the extracted examples. NM drafted the first version of the manuscript. The remaining authors contributed important intellectual content to the final version. All authors approved the final version.

Electronic supplementary material


Additional file 1: Table S2: Examples of reporting bias in the medical literature. Extracts from 50 publications presenting examples of reporting bias. (DOC 634 KB)

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

McGauran, N., Wieseler, B., Kreis, J. et al. Reporting bias in medical research - a narrative review. Trials 11, 37 (2010). https://doi.org/10.1186/1745-6215-11-37


Received: 07 November 2009

Accepted: 13 April 2010

Published: 13 April 2010

DOI: https://doi.org/10.1186/1745-6215-11-37


Reporting Bias: Definition, Types, Examples & Mitigation

By: busayo.longe

Reporting bias occurs when only certain observations or results are reported or published. Reporting bias can greatly impact the accuracy of published findings, and it is important to consider it when conducting research. In this article, we will discuss reporting bias, its types, and examples.

What is Reporting Bias?

Reporting bias occurs when the results of a study are skewed due to the way they are reported. It happens when researchers or scientists choose to report only certain data, even though other data exist that would have influenced their findings.

For example, if you conducted a study on the effects of eating chocolate on mice and reported only the experiments in which the chocolate-fed mice did well, your results would be skewed because they would not represent everything you observed. Reporting bias can also occur when the data are manipulated before being reported, as in the case of cherry-picking or data dredging.

This can result in unreliable or biased results being reported by an organization or individual. Reporting bias is often confused with selection bias, which occurs when participants in a study are chosen based on their ability to influence the outcome, or for other reasons that create an inaccurate picture of reality; the two can occur together and compound each other.


Another form of reporting bias occurs when researchers do not report all the results of their studies. They may leave out information because they don’t think it’s important or because they want to make their findings seem more impressive than they are.

This is why meta-analysis is so useful for probing reporting bias. A meta-analysis combines multiple studies on the same topic conducted by different researchers, which can help establish whether a particular finding holds up across the full body of evidence.
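
To make this concrete, here is a minimal sketch, in Python with invented numbers, of one way meta-analysts probe for reporting bias: checking whether small studies report systematically larger effects than large ones (funnel-plot asymmetry). The effect sizes and standard errors are hypothetical, not from any real trials.

```python
# Minimal sketch: an Egger-style asymmetry check across pooled studies.
# All effect sizes and standard errors are invented for illustration.
import numpy as np

effects = np.array([0.42, 0.38, 0.30, 0.25, 0.12, 0.08])  # hypothetical log odds ratios
se = np.array([0.30, 0.26, 0.21, 0.17, 0.09, 0.05])       # their standard errors

# Regress standardized effects (z) on precision (1/SE); an intercept far
# from zero suggests small studies behave differently from large ones,
# one signature of selective reporting.
z = effects / se
precision = 1.0 / se
slope, intercept = np.polyfit(precision, z, 1)
print(f"Egger-style intercept: {intercept:.2f} (near 0 = little asymmetry)")
```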


Types of Reporting Bias

1. Outcome reporting bias

Outcome reporting bias occurs when the outcomes a study reports are chosen on the basis of the results rather than the original plan: an outcome that was never prespecified is reported because it favors the hypothesis being tested, or a prespecified outcome is quietly dropped. This can be due to either a conscious or unconscious decision made by the researcher.

For example, suppose you set out to test whether eating more vegetables lowers blood pressure, found no effect on blood pressure, but noticed along the way that the vegetable eaters reported better mood. If you published the mood result as though it were the planned finding and left out the null blood pressure result, you would have fallen victim to outcome reporting bias, because studying mood was never your stated intention.
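
Since outcome reporting bias is about mismatch with the original plan, one simple guard is to diff the reported outcomes against the registered ones. A toy sketch, with hypothetical outcome names echoing the example above:

```python
# Minimal sketch: flag discrepancies between registered and reported outcomes.
# Outcome names are hypothetical.
registered = {"blood pressure at 12 weeks", "adverse events"}
reported = {"self-reported mood score", "adverse events"}

print("Prespecified but never reported:", registered - reported)
print("Reported but never prespecified:", reported - registered)
```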

2. Publication bias

Publication bias is another form of reporting bias, in which journals and other outlets preferentially print positive results from the studies submitted to them. Readers are left with an inflated picture of the evidence and may overestimate how well an intervention works. This also drives the “file drawer” phenomenon, in which negative results are never written up or submitted because researchers expect them to be rejected or to add nothing to their reputation or career advancement.
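
The size of the file drawer can be roughly gauged with Rosenthal’s fail-safe N, a heuristic estimate of how many unpublished null studies it would take to wash out a pooled significant result. A minimal sketch with invented z-values:

```python
# Minimal sketch: Rosenthal's fail-safe N, with invented per-study z-values.
from scipy.stats import norm

z_scores = [2.1, 2.5, 1.8, 2.9]   # hypothetical z-values from published studies
k = len(z_scores)
z_alpha = norm.ppf(0.95)          # critical z for one-tailed alpha = 0.05

n_fs = sum(z_scores) ** 2 / z_alpha ** 2 - k
print(f"About {n_fs:.0f} unpublished null studies would erase the pooled result")
```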


3. Knowledge reporting bias

Knowledge reporting bias refers to the fact that researchers may not report all their knowledge about a topic or experiment because they feel it isn’t important enough or doesn’t fit their hypothesis.

Here’s an example: two researchers are studying whether people feel healthier if they eat vegetables every day versus once or twice per week. One researcher finds no difference between the two eating patterns, but the other finds a health improvement with daily vegetables. If the first researcher decides not to report the null finding, readers are prevented from knowing about that possibility.

4. Multiple (duplicate) publication bias

Multiple publication bias occurs when the results of a single study are published several times, for example in overlapping reports or in separate re-analyses of the same data. This can lead to false conclusions about the effectiveness of a treatment or program, because the duplicated results get counted more than once and skew the evidence towards positive outcomes.

5. Time lag bias

Time lag bias occurs when the speed at which a study is published depends on its results: trials with positive or striking findings tend to appear quickly, while null or negative findings are delayed, sometimes by years. Until the slower studies appear, the available literature overstates the case for the intervention.

6. Citation bias

Citation bias occurs when other authors cite a study selectively depending on its results, with positive or supportive findings cited far more often than negative ones. Heavily cited results become easier to find and look better established than they are, further skewing the apparent balance of evidence.


Examples of Reporting Bias

For example, if you were studying how many people with blue eyes wear glasses and reported that only 40% of them do, the figure could be misleading because you did not include all the relevant people in your study, or because the people who responded differ systematically from those who did not; those who feel self-conscious about their glasses, say, may be less willing to take part.

Reporting bias can also happen when someone running a survey or experiment asks leading questions. For example, instead of asking “Do you like eating bananas?” they might say “Bananas are delicious, don’t you think?”, or instead of asking “Have you ever eaten bananas?” they might ask “Wouldn’t it be nice if we could eat bananas every day?” The neutral wording does not push respondents toward an answer, but people who say yes to the leading versions may do so only because they feel they have to agree with the pollster’s statement.


If you ask people to rate their performance as a manager on a scale from 1 to 5, and then you ask them about their coworkers’ performance, they might tell you that all their coworkers are doing great (because they don’t want to look bad). This is an example of reporting bias because it skews the results of your study by making it seem like everyone is performing well when in reality, it may be that some people are doing poorly.

Effects and Implications of Reporting Bias

Reporting bias can lead to false conclusions being drawn from experiments and may even lead to harm for patients or subjects involved in a study. For example, if a researcher does not report all of their data, it could lead them to think that their treatment works better than it does.

This could result in them prescribing the treatment to patients who don’t need it, or send other researchers who try to replicate the experiment down the wrong path. And when what gets reported depends on the result, the published record skews positive: if people are more likely to report positive events than negative ones, the negative events end up under-reported or missing entirely.

Reporting bias can have serious implications for your survey and your business. If you’re trying to figure out whether your product or service is effective at solving problems for customers, reporting bias can make it hard to get an accurate read on how well it works across different demographics and situations.


How to Prevent or Manage Reporting Bias

  • Be honest with yourself about whether your results are statistically significant and robust enough to be considered valid. If they are not, don’t pretend they are.
  • You can also run more experiments before publishing your findings, so that more evidence is available for readers who want further proof of what you found.
  • Consider using statistics or data to back up your claims whenever possible.
  • You can also manage it by collecting and reporting data from additional sources to provide an alternative view of the data. This can be done by asking colleagues who have not been involved in the project to review the results for plausibility, or by running a second analysis that uses more conservative assumptions.

Reporting bias is a phenomenon in which the reporting of a study is skewed by the researchers’ expectations or preferences about what they want to find, and it can be caused by many factors. Practicing transparency in your research will help you better manage reporting bias.



Research Bias 101: What You Need To Know

By: Derek Jansen (MBA) | Expert Reviewed By: Dr Eunice Rautenbach | September 2022

If you’re new to academic research, research bias (also sometimes called researcher bias) is one of the many things you need to understand, because left unchecked it can compromise your study and ruin its credibility.

In this post, we’ll unpack the thorny topic of research bias. We’ll explain what it is, look at some common types of research bias and share some tips to help you minimise the potential sources of bias in your research.

Overview: Research Bias 101

  • What is research bias (or researcher bias)?
  • Bias #1 – Selection bias
  • Bias #2 – Analysis bias
  • Bias #3 – Procedural (admin) bias

So, what is research bias?

Well, simply put, research bias is when the researcher – that’s you – intentionally or unintentionally skews the process of a systematic inquiry, which then of course skews the outcomes of the study. In other words, research bias is what happens when you affect the results of your research by influencing how you arrive at them.

For example, if you planned to research the effects of remote working arrangements across all levels of an organisation, but your sample consisted mostly of management-level respondents, you’d run into a form of research bias. In this case, excluding input from lower-level staff (in other words, not getting input from all levels of staff) means that the results of the study would be ‘biased’ in favour of a certain perspective – that of management.

Of course, if your research aims and research questions were only interested in the perspectives of managers, this sampling approach wouldn’t be a problem – but that’s not the case here, as there’s a misalignment between the research aims and the sample.

Now, it’s important to remember that research bias isn’t always deliberate or intended. Quite often, it’s just the result of a poorly designed study, or practical challenges in terms of getting a well-rounded, suitable sample. While perfect objectivity is the ideal, some level of bias is generally unavoidable when you’re undertaking a study. That said, as a savvy researcher, it’s your job to reduce potential sources of research bias as much as possible.

To minimize potential bias, you first need to know what to look for. So, next up, we’ll unpack three common types of research bias we see at Grad Coach when reviewing students’ projects. These include selection bias, analysis bias, and procedural bias. Keep in mind that there are many different forms of bias that can creep into your research, so don’t take this as a comprehensive list – it’s just a useful starting point.


Bias #1 – Selection Bias

First up, we have selection bias. The example we looked at earlier (about only surveying management as opposed to all levels of employees) is a prime example of this type of research bias. In other words, selection bias occurs when your study’s design automatically excludes a relevant group from the research process and, therefore, negatively impacts the quality of the results.

With selection bias, the results of your study will be biased towards the group that it includes or favours, meaning that you’re likely to arrive at prejudiced results. For example, research into government policies that only includes participants who voted for a specific party is going to produce skewed results, as the views of those who voted for other parties will be excluded.

Selection bias commonly occurs in quantitative research, as the sampling strategy adopted can have a major impact on the statistical results. That said, selection bias does of course also come up in qualitative research, as there’s still plenty of room for skewed samples. So, it’s important to pay close attention to the makeup of your sample and make sure that you adopt a sampling strategy that aligns with your research aims. Of course, you’ll seldom achieve a perfect sample, and that’s okay. But you need to be aware of how your sample may be skewed and factor this into your thinking when you analyse the resultant data.


Bias #2 – Analysis Bias

Next up, we have analysis bias. Analysis bias occurs when the analysis itself emphasises or discounts certain data points, so as to favour a particular result (often the researcher’s own expected result or hypothesis). In other words, analysis bias happens when you prioritise the presentation of data that supports a certain idea or hypothesis, rather than presenting all the data indiscriminately.

For example, if your study was looking into consumer perceptions of a specific product, you might present more analysis of data that reflects positive sentiment toward the product, and give less real estate to the analysis that reflects negative sentiment. In other words, you’d cherry-pick the data that suits your desired outcomes and as a result, you’d create a bias in terms of the information conveyed by the study.

Although this kind of bias is common in quantitative research, it can just as easily occur in qualitative studies, given the amount of interpretive power the researcher has. This may not be intentional or even noticed by the researcher, given the inherent subjectivity in qualitative research. As humans, we naturally search for and interpret information in a way that confirms or supports our prior beliefs or values (in psychology, this is called “confirmation bias”). So, don’t make the mistake of thinking that analysis bias is always intentional, or that you don’t need to worry about it because you’re an honest researcher – it can creep up on anyone.

To reduce the risk of analysis bias, a good starting point is to determine your data analysis strategy in as much detail as possible, before you collect your data. In other words, decide, in advance, how you’ll prepare the data, which analysis method you’ll use, and be aware of how different analysis methods can favour different types of data. Also, take the time to reflect on your own pre-conceived notions and expectations regarding the analysis outcomes (in other words, what you expect to find in the data), so that you’re fully aware of the potential influence you may have on the analysis – and therefore, hopefully, can minimize it.
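
One lightweight way to hold yourself to this is to freeze your analysis choices in writing (or in code) before any data arrive. A minimal sketch, with hypothetical field names:

```python
# Minimal sketch: a prespecified analysis plan, written down before data
# collection, that later analysis code reads from instead of improvising.
# All names and choices here are hypothetical.
ANALYSIS_PLAN = {
    "primary_outcome": "satisfaction_score",
    "statistical_test": "two-sided independent t-test",
    "alpha": 0.05,
    "prespecified_subgroups": [],  # none declared, so none are confirmatory
}

def is_confirmatory(outcome: str, plan: dict = ANALYSIS_PLAN) -> bool:
    # Anything outside the plan must be reported as exploratory.
    return outcome == plan["primary_outcome"]

print(is_confirmatory("satisfaction_score"))  # True
print(is_confirmatory("staff_turnover"))      # False: exploratory at best
```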


Bias #3 – Procedural Bias

Last but definitely not least, we have procedural bias, which is also sometimes referred to as administration bias. Procedural bias is easy to overlook, so it’s important to understand what it is and how to avoid it. This type of bias occurs when the administration of the study, especially the data collection aspect, has an impact on either who responds or how they respond.

A practical example of procedural bias would be when participants in a study are required to provide information under some form of constraint. For example, participants might be given insufficient time to complete a survey, resulting in incomplete or hastily-filled out forms that don’t necessarily reflect how they really feel. This can happen really easily, if, for example, you innocently ask your participants to fill out a survey during their lunch break.

Another form of procedural bias can happen when you improperly incentivise participation in a study. For example, offering a reward for completing a survey or interview might incline participants to provide false or inaccurate information just to get through the process as fast as possible and collect their reward. It could also potentially attract a particular type of respondent (a freebie seeker), resulting in a skewed sample that doesn’t really reflect your demographic of interest.

The format of your data collection method can also potentially contribute to procedural bias. If, for example, you decide to host your survey or interviews online, this could unintentionally exclude people who are not particularly tech-savvy, don’t have a suitable device or just don’t have a reliable internet connection. On the flip side, some people might find in-person interviews a bit intimidating (compared to online ones, at least), or they might find the physical environment in which they’re interviewed to be uncomfortable or awkward (maybe the boss is peering into the meeting room, for example). Either way, these factors all result in less useful data.

Although procedural bias is more common in qualitative research, it can come up in any form of fieldwork where you’re actively collecting data from study participants. So, it’s important to consider how your data is being collected and how this might impact respondents. Simply put, you need to take the respondent’s viewpoint and think about the challenges they might face, no matter how small or trivial these might seem. So, it’s always a good idea to have an informal discussion with a handful of potential respondents before you start collecting data and ask for their input regarding your proposed plan upfront.


Let’s Recap

Ok, so let’s do a quick recap. Research bias refers to any instance where the researcher, or the research design, negatively influences the quality of a study’s results, whether intentionally or not.

The three common types of research bias we looked at are:

  • Selection bias – where a skewed sample leads to skewed results
  • Analysis bias – where the analysis method and/or approach leads to biased results; and
  • Procedural bias – where the administration of the study, especially the data collection aspect, has an impact on who responds and how they respond.

As I mentioned, there are many other forms of research bias, but we can only cover a handful here. So, be sure to familiarise yourself with as many potential sources of bias as possible to minimise the risk of research bias in your study.



Bias in research and why it’s important to control for reliable clinical trial results

What is bias in research?

Research bias definition

In research, bias refers to a systematic error or deviation from the truth, either in the data, results, or conclusions of a study. Bias can originate in any stage of the research process, and is usually related to flaws, incorrect assumptions, or limitations in the study’s design, conduct, analysis, or interpretation/reporting. Different types of bias in research can distort findings and influence the outcomes and validity of a study or lead to inaccurate conclusions. Bias can be entirely unintentional, but can also be intentional, arising from malicious/unethical acts.

All research is subject to bias. Clinical research involving human participants is no exception, but there are a few types of bias that are particularly relevant to clinical trials.

In this article, we will explore the main types of bias in clinical research, why they present a risk to research validity and patient safety, and how to prevent and address bias to ensure scientific integrity in clinical research studies.

Why is bias an issue in research?

Bias has a significant potential to cause issues in research because it compromises the reliability and validity of study findings. When biased data or conclusions are then used in decision-making processes related to healthcare directives and policy, it could lead to the implementation of inappropriate practices or guidelines that can undermine patient care or even pose serious risks to patient health and safety. Thus, minimizing bias in clinical research is important for developing treatments that are safe and effective for patients. It’s the responsibility of clinical researchers to understand the typical sources of bias in research and to take steps to prevent bias, or at the very least aim to limit its impacts and acknowledge any bias when reporting the findings of the study.

What types of research bias affect clinical trials?

The main types of bias in clinical research include selection bias, performance bias, attrition bias, detection bias, and reporting bias. There are other less-common types of research bias that may be relevant to certain clinical research study designs. We describe these types of research bias in the following section, and then provide examples of such types of bias and ideas for preventing it or minimizing its impacts on study outcomes.

10 types of bias in clinical trials and how to address them

1. Selection bias in research

Selection bias is one of the most important types of bias that can skew study outcomes, and occurs when the selection of participants for a study – and/or their assignment to groups – is not sufficiently random. Selection bias can arise from enrolling a sample that is not representative of the broader population, and thus which does not capture sufficient variability in confounding factors. Patients in such a sample may be predisposed to a certain outcome or treatment response due to underlying similarities.

One of the factors that plays into establishing a study’s eligibility criteria is capturing enough variability among participants in order to achieve a somewhat representative sample. This needs to be balanced with limiting extreme variability in confounding variables, in order for the study to have sufficient power to identify a treatment effect. Finding this balance can be tricky.

To prevent selection bias, eligibility criteria should be defined in a way that minimizes risk to at-risk populations and eliminates exaggerated variability in confounding factors, while still capturing a sufficient level of diversity in the enrolled sample. Randomization is another important part of minimizing selection bias, as it aims to minimize systematic differences between study arms. Randomization should be carried out strictly according to a randomization protocol that is appropriate for the study, and with proper allocation concealment in place so that upcoming assignments cannot be foreseen.
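
As one illustration, here is a minimal sketch of permuted-block randomization with a pre-generated allocation list; the two arms and block size of 4 are assumptions for the example, not a prescription:

```python
# Minimal sketch: permuted-block randomization, two arms, block size 4.
# The list is generated once, up front, and kept concealed from recruiting
# staff so knowledge of upcoming assignments cannot influence enrolment.
import random

def allocation_list(n_blocks: int, block_size: int = 4) -> list:
    arms = ["treatment", "control"]
    sequence = []
    for _ in range(n_blocks):
        block = arms * (block_size // 2)  # equal numbers of each arm per block
        random.shuffle(block)             # random order within the block
        sequence.extend(block)
    return sequence

print(allocation_list(n_blocks=2))  # e.g. ['control', 'treatment', 'treatment', ...]
```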

2. Attrition bias

Attrition bias occurs when there is a high level of patient drop-out, or attrition, wherein the patients who drop out are systematically different from those who remain in the study. The resulting sample differs from the original sample in a systematic way, potentially introducing bias.

Attrition bias can be hard to prevent, as patients may leave a study for numerous and sometimes unpredictable reasons. When a high attrition rate is expected for a study, one potential fix is to over-enroll. Designing patient-centric studies can also make them more attractive and manageable for patients, encouraging engagement and discouraging dropout.


3. Allocation bias

If the allocation of participants into treatment and control groups is not performed entirely randomly or is influenced by factors other than chance, it can lead to bias in the results. Allocation bias is a form of selection bias, and is prevented by strictly following a randomization protocol that is suitable for the study design at hand, and ensuring that researchers are appropriately blinded during the process, if necessary.

4. Reporting bias

Reporting bias has to do with selective reporting of outcomes or findings from a study. This type of bias is especially prone to having a malicious origin, i.e., dishonest or unethical conduct by researchers in order to reach a certain conclusion about the safety or efficacy of an intervention. However, it can also result from errors in data analysis or accidentally overlooking certain data or possible influencing factors. Reporting bias leads to incomplete or biased dissemination of results, which can have consequences that ripple out through scientific publications and policy-making decisions, potentially impacting patient safety.

Reporting bias is prevented by ensuring ethical conduct and research practices, and by validating data and being careful and attentive to detail during data analysis. Transparency is important for scientific integrity, and that means publishing findings even when they are not what was expected.

5. Performance bias

Performance bias arises when there are differences between the care provided to participants in different study arms which then influences the outcomes being measured.

Utilizing proper blinding for both study participants and researchers (i.e., a double-blind study) can help to limit the potential for differential care decisions or certain patient behaviors based on knowledge about study group assignment.

6. Detection bias

When there are systematic differences in how outcomes are assessed or detected between groups, it can introduce bias into the study findings. One way to avoid detection bias is to blind the analysts who assess the outcomes of the study, as in a triple-blinded study. This works to ensure that any differences are actually due to differences in the intervention received, and not due to knowledge about the assignments affecting the way that the results are interpreted.


Types of bias in clinical trials (continued)

The above are the most common sources of bias in clinical research. The list continues below with other types of research bias that may be relevant in certain studies, but they are generally less common/widespread. It’s also important to note that the descriptions may overlap to some degree. Selection bias could be related to attrition or to allocation, for example, and allocation bias could be referred to as a type of selection bias in a given study, depending on nuances in how it’s characterized. What’s important is to be able to recognize the different potential sources of bias and work to mitigate their impacts on study results.

7. Measurement bias

Measurement bias, sometimes called classification bias, describes errors or inaccuracies in the way data is collected, measured, or recorded during a study, which can distort the relationships between variables used to determine study outcomes. Measurement bias can be prevented by developing robust SOPs for data management, making sure they are complied with faithfully, and ensuring proper validation of source data before database lock.
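
As a small illustration of source-data validation, here is a sketch that flags out-of-range values for query before database lock; the field name and plausibility limits are hypothetical:

```python
# Minimal sketch: range checks on collected values before database lock.
# Field names and plausibility limits are hypothetical.
records = [
    {"id": "P001", "systolic_bp": 128},
    {"id": "P002", "systolic_bp": 12},   # likely a transcription error
    {"id": "P003", "systolic_bp": 141},
]

flagged = [r for r in records if not 60 <= r["systolic_bp"] <= 260]
print("Values to query before lock:", flagged)
```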

8. Recall bias

Participants' ability to recall past events accurately can fade over time, and can also be influenced by factors such as personal beliefs, emotional states, or other circumstances affecting their behavior during data collection interviews. Recall bias is particularly relevant to retrospective studies and in those employing subjective self-reporting measures that are not collected in the present.

Solutions such as electronic patient-reported outcomes (ePRO) can help prevent recall bias by allowing patients to report remotely on their present state, without having to recall past events or health parameters in scheduled study visits.

9. Confounding bias

Confounding bias refers to the situation wherein a correlation between a variable and an outcome is mistaken for a causative relationship, or when a relationship between variables is confounded (blurred) by another variable that was not taken into consideration. In other words, a confounding factor is playing a role in the observed outcome. It is important to identify potential confounding variables (such as patient characteristics and habits; the influence of smoking in a lung cancer study, for example, cannot be overlooked), control for them to the extent possible, and specifically check for confounding relationships during the analysis of study data in order to prevent confounding bias.
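
A toy numerical sketch of why this matters: crude (pooled) response rates can suggest a treatment effect that largely disappears once you stratify by the confounder. All counts below are invented.

```python
# Minimal sketch: crude vs confounder-stratified response rates. Counts are
# invented; smoking is the confounder, unevenly distributed across arms.
data = {  # (responders, total) keyed by (arm, smoking status)
    ("treatment", "smoker"): (10, 40),
    ("treatment", "non-smoker"): (45, 60),
    ("control", "smoker"): (12, 60),
    ("control", "non-smoker"): (30, 40),
}

for arm in ("treatment", "control"):
    r = sum(data[(arm, s)][0] for s in ("smoker", "non-smoker"))
    n = sum(data[(arm, s)][1] for s in ("smoker", "non-smoker"))
    strata = {s: round(data[(arm, s)][0] / data[(arm, s)][1], 2)
              for s in ("smoker", "non-smoker")}
    print(arm, "crude:", round(r / n, 2), "| within strata:", strata)

# Crude rates differ (0.55 vs 0.42), yet within each smoking stratum the
# gap shrinks or vanishes: much of the apparent effect was confounding.
```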

10. Observer bias

Observer bias occurs when investigators’ or researchers' expectations, assumptions, or knowledge about group assignments influence their interpretation of study outcomes. Blinding procedures are employed to prevent observer bias arising from investigators or study physicians making differential care decisions for participants in different groups, or from interpreting results differently for the different study arms.

Clinical research bias examples

Let’s take a look at three hypothetical case studies to demonstrate examples of bias in clinical research.

An exciting new drug has been developed for depression, and researchers want to know how effective it is. The study sponsor receives approval and begins enrolling patients based on the eligibility criteria. Upon completion, the data are analyzed and there is a 50% higher rate of success in the treatment arm compared to the control arm, which took the comparator standard-of-care drug. However, it’s later found that almost all individuals in the control arm were smokers, while there were few smokers in the treatment arm. A selection bias was thus introduced unintentionally, as researchers overlooked this confounding variable. It’s possible that smoking directly impeded the control drug from exerting a beneficial effect, but it is not possible to retrospectively unravel the influence of this factor, which also represents a potential confounding bias.

A clinician at a trial site has not been blinded to participant allocations. When participant A comes in for a study visit complaining of headaches, the clinician decides to prescribe an NSAID (non-steroidal anti-inflammatory drug) outside of the study protocol, knowing that participant A has been assigned to the placebo group, and that the supplemental NSAID will help alleviate the headaches and will not interact with the study drug. Participant A then responds to a study questionnaire, revealing that they did not have headaches all week. This act of differential care by the physician represents performance bias, and may skew study results by hiding a side effect experienced by participant A in the placebo group.

A new study is enrolling participants for a promising new cancer drug. During the informed consent procedure, the principal investigator is touched by a patient’s difficult story, and decides that the individual deserves to be in the treatment group. The PI then modifies the randomization procedure to ensure this patient is assigned to the treatment group and thus receives treatment. This exemplifies allocation bias, a type of selection bias wherein the assumption of completely random allocation is broken. While it is not guaranteed to skew study results, it has the potential to introduce bias, and this action goes against ethical and impartial practices in research.

How to avoid bias in research

Preventing bias in clinical research largely begins with a well thought-out study protocol that is designed from the bottom up to prevent the introduction of bias. However, careful adherence to the study protocol, SOPs, and regulations and ethical guidelines throughout the trial is also important for limiting further sources of bias.

  • Thoughtful study design

Time should be taken to develop robust, patient-centric study protocols that have clear objectives, cleverly selected inclusion criteria, an appropriate randomization procedure, blinding protocols, and standardized data collection methods and data handling protocols. Most bias is prevented in the initial design stage of the trial.

  • Sample size calculation

A sufficient number of participants should be included in the study to achieve enough statistical power to answer the research hypothesis and maximize the generalizability of findings. This should be balanced with a consideration of the extra resources required per additional patient enrolled, along with the ethical consideration of exposing participants to any risk inherent to the study.
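
For illustration, here is a minimal sketch of a standard two-proportion sample size calculation; the response rates, alpha, and power are assumptions chosen for the example:

```python
# Minimal sketch: per-arm sample size for comparing two response rates,
# using the usual normal-approximation formula. Inputs are illustrative.
from math import ceil
from scipy.stats import norm

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_a = norm.ppf(1 - alpha / 2)            # two-sided significance
    z_b = norm.ppf(power)                    # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_arm(0.40, 0.55))  # about 171 participants per arm
```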

  • Inclusion and exclusion criteria

The inclusion criteria and exclusion criteria should be set in a way that supports enough variability in the study population to mimic that of the broader population, while avoiding overly extreme variability in confounding characteristics that could make it impossible to reveal a drug effect.

  • Blinding procedures

If there is any reasonable potential for bias to be introduced in the case that either researchers or patients should become aware of group allocations, blinding measures should be implemented. Most studies are either single-blind (participants blinded) or double-blind (participants and researchers blinded), but triple-blinding (also blinding the analysts) can also be used when there is potential for bias during analysis of study data. Unblinded studies, also called open-label trials , are less common in controlled trials, instead being mostly reserved for single-arm trials, wherein there is no grouping at all and thus blinding is unnecessary.

  • Randomization techniques

Employing a randomization method that is appropriate for the study helps to distribute potential confounding variables evenly among study groups.

  • Transparency in reporting

As part of ethical conduct in clinical trials, researchers and sponsors should prioritize being transparent in their reporting of study methodologies, data analysis, and interpretation of results, along with limitations of the study and details about any potential conflicts of interest. Transparency is important for upholding scientific integrity and public trust in clinical research, but also helps to reveal potential biases that may have been missed. The idea behind transparency in research is to accept that mistakes are natural, and that it’s better to reveal and acknowledge them in order to rectify research findings and ensure that misleading results are not used to inform healthcare guidelines or policy, which could pose a threat to public health.

  • Independent review

Clinical research protocols are reviewed by an IRB to ensure they meet ethical guidelines. Part of the independent review process also focuses on identifying any methodological flaws or potential sources of biases that could skew results or lead to inaccurate findings.

There are many potential sources of bias in research, and bias in clinical research carries potential risks to the safety and well-being of patients and the general public. Minimizing clinical research bias is a major priority in study design and conduct, and it’s important for trial sponsors and clinical researchers to understand potential sources of bias in order to recognize and prevent them. When bias cannot be avoided, it should at the very least be mitigated to the extent possible and acknowledged during the reporting of findings, which will support the generation of reliable evidence to support transparently informed decision-making for patient care and healthcare policies.


Causes of reporting bias: a theoretical framework

Affiliations

  • 1 Department of Public Health and Primary Care, Leiden University Medical Center, Hippocratespad 21, Gebouw 3, Leiden, 2300 RC Leiden, The Netherlands.
  • 2 Department of Primary and Community Care, Radboud university medical center, Geert Grooteplein Noord 21, 6500 HB Nijmegen, The Netherlands.
  • 3 ACHIEVE Centre for Applied Research, Amsterdam University of Applied Sciences, Tafelbergweg 51, Amsterdam, 1105 BD Amsterdam, The Netherlands.
  • 4 Department of Cardiology, Amsterdam University Medical Center (location Meibergdreef), University of Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam, The Netherlands.
  • 5 Apotheek Boekel, Kerkstraat 35, Boekel, 5427 BB, The Netherlands.
  • 6 Department of Epidemiology and Biostatistics, Amsterdam University Medical Centers, location VUmc, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands.
  • 7 Department of Philosophy, Faculty of Humanities, Vrije Universiteit Amsterdam, De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands.
  • PMID: 31497290
  • PMCID: PMC6713068
  • DOI: 10.12688/f1000research.18310.2

Reporting of research findings is often selective. This threatens the validity of the published body of knowledge if the decision to report depends on the nature of the results. The evidence derived from studies on causes and mechanisms underlying selective reporting may help to avoid or reduce reporting bias. Such research should be guided by a theoretical framework of possible causal pathways that lead to reporting bias. We build upon a classification of determinants of selective reporting that we recently developed in a systematic review of the topic. The resulting theoretical framework features four clusters of causes. There are two clusters of necessary causes: (A) motivations (e.g. a preference for particular findings) and (B) means (e.g. a flexible study design). These two combined represent a sufficient cause for reporting bias to occur. The framework also features two clusters of component causes: (C) conflicts and balancing of interests referring to the individual or the team, and (D) pressures from science and society. The component causes may modify the effect of the necessary causes or may lead to reporting bias mediated through the necessary causes. Our theoretical framework is meant to inspire further research and to create awareness among researchers and end-users of research about reporting bias and its causes.

Keywords: Causality; publication bias; questionable research practice; reporting bias; research design; selective reporting.



Reporting Bias: Strategies for More Transparent Research

Reporting bias occurs when specific outcomes or results are selectively reported. Learn how to detect & prevent it to ensure fairness.

When important research is shared with the public, reporting bias can distort its message. The way results are selected and presented shapes how the public and professionals perceive them, sometimes steering healthcare and policy decisions in the wrong direction.

In this blog, we will help you understand reporting bias, discuss its overall impact, and provide tips for recognizing and reducing it, so you can tackle bias in the information you rely on.

What is Reporting Bias?

Reporting bias occurs when the outcomes of a study, experiment, or other research are presented in a manner that does not accurately represent the underlying data.

Imagine an artist who has many bright colors to choose from but paints only in different shades of blue. The final painting looks nice but doesn’t show the complete range of colors the artist could have used.

This limited color choice is similar to reporting bias in research, where research information gets distorted because some details are selectively revealed or kept secret, whether on purpose or by accident. It’s like picking specific colors from a palette, creating a picture that might not accurately reflect reality.

Reporting bias comes in different forms, each with its own features and effects. Some types of reporting bias are:

  • Publication bias.
  • Time-lag bias.
  • Multiple (duplicate) publication bias.
  • Location bias.
  • Citation bias.
  • Language bias.
  • Outcome reporting bias.

Each type of bias can distort the overall understanding of research results.

Types of Reporting Bias

Different types of reporting bias affect how accurate and reliable research results are. It’s important to know about these types to evaluate study findings carefully.

01. Publication Bias

Publication bias happens when research studies with positive outcomes are more likely to be published than those with neutral or negative results.

Journals may preferentially accept studies showing significant effects, creating an incomplete picture of the overall evidence. This bias can impact meta-analyses and systematic reviews, as they depend heavily on the published literature.

The impact of publication bias is significant. It affects systematic reviews, medical research, and healthcare choices. It’s similar to a magnifying glass that amplifies positive results while downplaying the visibility of negative or inconclusive outcomes.

This distortion not only skews scientific discussions but also confuses medical professionals, policymakers, and patients, resulting in less-than-optimal decisions that could potentially harm patient care and health.

02. Selective Outcome Reporting Bias

Selective reporting of outcomes involves consciously reporting specific results over others within a study. 

Researchers may focus on results that show statistically significant findings and minimize or leave out less positive findings. This can affect how people perceive the effectiveness of an intervention or treatment.

Imagine a photographer taking a hundred photos but deciding only to show ten that tell a specific story, keeping the others hidden. This is essentially what selective outcome reporting means.

This kind of selective reporting takes many forms and has far-reaching effects. It can involve reporting data only for certain subgroups, presenting adjusted analyses in place of unadjusted ones, or handling missing data in whichever way flatters the results.

These practices can give a misleading view of study results, and it’s a widespread issue, affecting almost half of all studies. This significantly affects how study or research findings are communicated.

Think of selective outcome reporting as a filter for raw data: it shows only a chosen perspective and hides the rest, sometimes including the primary outcome. This biased view can distort clinical decision-making and introduce inaccuracies into the body of evidence.
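To make the mechanism concrete, here is a minimal simulation sketch (our illustration, not drawn from any cited study; Python with NumPy and SciPy assumed). It shows that when a trial measures ten outcomes with no true treatment effect and only the most favourable one is reported, the apparent false-positive rate climbs from the nominal 5% to roughly 40%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials, n_outcomes, n_per_arm = 1000, 10, 50

selected_p = []
for _ in range(n_trials):
    # Ten independent outcomes, identical arms: every null hypothesis is true.
    p_values = [
        stats.ttest_ind(rng.normal(size=n_per_arm),
                        rng.normal(size=n_per_arm)).pvalue
        for _ in range(n_outcomes)
    ]
    # Selective outcome reporting: only the most favourable result is shown.
    selected_p.append(min(p_values))

biased_rate = np.mean(np.array(selected_p) < 0.05)
print("Nominal false-positive rate with one prespecified outcome: 5%")
print(f"Apparent rate when only the best of {n_outcomes} outcomes is reported: "
      f"{biased_rate:.0%}")
```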

03. Time-Lag Bias

Time-lag bias happens when some research findings are published later than others, creating an incomplete and outdated view of the evidence. Studies with positive results tend to be published faster than those with negative results, so decisions get made before all the relevant information is available.

Imagine a race where the fastest runners get a head start while the slower ones are delayed. This is what time-lag bias looks like: studies with positive results are published more quickly than those with negative results, leading to an overestimation of treatment effects in the field.

At any point in time, time-lag bias skews the medical literature toward positive or statistically significant results, distorting the overall representation of research findings and how effective interventions are perceived.

The Impact of Reporting Bias on Healthcare Decisions

Reporting bias in healthcare decisions goes beyond research, affecting different players in the healthcare system. This distortion starts in research and affects policymakers, medical professionals, and patient care.

It influences how treatments are assessed for risks and benefits. This can result in decisions that may not be in the best interest of patients.

  • Distorted Risk-Benefit Ratio: Reporting bias may affect treatment risks and benefits. It impacts efficacy and safety assessments.
  • Misleading Medical Professionals: Healthcare workers may be misled by biased reporting. It causes suboptimal decision-making with insufficient or selective data.
  • Patient Outcomes: Reporting bias impacts patient treatment. It may lead to unsuitable therapies and poor health effects.
  • Systemic Inefficiencies: Reporting bias affects data accuracy and completeness, causing healthcare system inefficiencies. Overall, decision-making is affected.
  • Compromised Risk Mitigation: Misinterpretation of data caused by reporting bias can hamper risk minimization, making it harder to identify and manage treatment-related harms.
  • Inequality in Healthcare: Reporting bias causes healthcare inequality. It affects therapy effects and may cause care discrepancies between patient groups.
  • Biased Policymaking: Reporting bias may impact policymaking. Resource allocation, research priorities, and healthcare policy can be compromised.

Examples of Reporting Bias in Medical Research

To fully understand how reporting bias affects things, it’s important to look at real-life examples that highlight the problem. These examples act like a mirror, showing you what reporting bias looks like in medical research and how it can seriously affect healthcare choices and the well-being of patients.

Non-Publication of Trials

One common issue is that studies showing no effect often go unpublished, or certain measured outcomes are excluded from the publication. This selective sharing or withholding of information is an example of reporting bias and gives a misleading picture of what the study found.

  • Situation: Studies that show no or negative results often don’t get published, or researchers leave out specific results.
  • Result: Hiding or not sharing certain information can make the study’s findings seem one-sided and biased, giving a distorted picture.

Reporting Bias in Clinical Trials

Let’s look at another example that shows how sharing only positive results in clinical trials can be a problem. When researchers only report the good outcomes, it gives a one-sided view of how well a treatment works and its potential risks. This kind of bias can distort how doctors make decisions and introduce errors into the overall evidence.

  • Situation : Only favorable results are shared, creating a one-sided view of how well a treatment works.
  • Effect: It affects how we understand the pros and cons scientifically, possibly confusing medical choices and adding errors to the overall evidence.

Study on Delays in Publishing HIV Treatment Results

In a study of HIV treatments, negative trial results took much longer to be published than positive ones. This delay in sharing negative findings is a clear case of time-lag bias and shows how it can mislead healthcare decisions.

  • Situation: Research on HIV treatments indicates a significant gap in the release of findings from negative trials when compared to positive trials.
  • Consequence: This highlights time-lag bias, illustrating how the delayed release of results from negative trials can affect the timing of healthcare choices and may jeopardize patient outcomes.

Prevention and Mitigation Strategies

You can use methods like trial registration, open science practices, and reporting guidelines to prevent and minimize reporting bias. Each of these plays a crucial role in making research more transparent, improving the quality of reporting, and lowering the risks of bias.

Implementing these strategies comes with its own set of challenges. Some of the obstacles to successful implementation include:

  • Research culture.
  • Reporting biases.
  • Statistical and methodological issues.
  • Variation in bias introduced by different records.

However, when these strategies are used together, they can create a research environment that is more open and fair.

Trial Registration

Trial registration is the initial move to reduce reporting bias. It’s an important step that makes clinical trials more transparent. If you think of research like a game, trial registration is like setting clear rules before starting. It makes sure everything is transparent, boosts reporting quality, and lowers the chances of biased selection.

The process of trial registration involves a few key steps:

  • Registering the clinical trial, usually through a public registry such as ClinicalTrials.gov.
  • Handling any review comments from the registry.
  • Following human-subject or ethics review rules and any local or national regulations.

Although it may seem simple, this process is crucial for making clinical research findings more transparent and credible: once outcomes are prespecified in a registry, anyone can later compare them against what the published article reports, as in the sketch below.
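As an illustration of the check that registration makes possible (the outcome names below are invented, not taken from any real trial), comparing a registry entry against the published article reduces to a set comparison:

```python
# Outcomes prespecified in the registry entry (hypothetical).
registered = {"all-cause mortality", "hospitalisation", "serious adverse events"}
# Outcomes reported in the published article (hypothetical).
reported = {"all-cause mortality", "quality of life"}

omitted = registered - reported   # prespecified but never reported
added = reported - registered     # reported but never prespecified

if omitted or added:
    print("Possible outcome reporting bias:")
    for outcome in sorted(omitted):
        print(f"  omitted prespecified outcome: {outcome}")
    for outcome in sorted(added):
        print(f"  outcome added post hoc: {outcome}")
```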

The World Health Organization (WHO) and the International Committee of Medical Journal Editors (ICMJE) are the international custodians of trial registration for international clinical trials. They ensure adherence to global standards for transparency and ethical reporting in medical research.

Open Science Practices

Next is open science practices, which are principles and actions to make scientific research accessible to everyone.

Imagine a research ecosystem where data is freely shared, study plans are openly available, and research findings are published in open-access journals. This is what open science practices aim for. It helps reduce reporting bias by increasing transparency and integrity in research.

Open science practices include: 

  • Study pre-registration
  • Open data sharing
  • Open-access publishing

Much like a transparent curtain, these practices let you look into how scientific research is done, making it easier to check the original analysis and develop new ideas.

Researchers, journals, and funding agencies can enhance transparency and fairness in research by adopting open science practices. These approaches help reduce reporting bias and promote an open and honest culture in the scientific community, creating a more equitable research environment.

Reporting Guidelines

The final step is following established reporting guidelines, which help researchers report their findings clearly and without bias. These guidelines serve as a roadmap for reporting results thoroughly and transparently.

There are key reporting guidelines for medical research, including:

  • PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses): Concentrates on clear and comprehensive reporting of systematic reviews and meta-analyses.
  • CONSORT (Consolidated Standards of Reporting Trials): Created for clinical trials, it offers guidelines to enhance the transparency and quality of trial reports.
  • STROBE (Strengthening the Reporting of Observational Studies in Epidemiology): Focuses on improving transparency and quality in reporting observational studies.
  • MOOSE (Meta-analysis Of Observational Studies in Epidemiology): Specifically designed for meta-analyses of observational studies, ensuring transparent and rigorous reporting.
  • STARD (Standards for Reporting Diagnostic Accuracy): Targets studies evaluating the accuracy of diagnostic tests, providing guidelines for transparent reporting.

These guidelines provide an organized framework for sharing research results, ensuring that findings are communicated accurately and impartially.

The Role of Stakeholders in Combating Reporting Bias

Reporting bias doesn’t happen on its own. Many factors affect it, and addressing it requires everyone involved in research to work together. Various stakeholders play roles and have responsibilities in dealing with reporting bias, including:

  • Researchers.
  • Medical journal editors.
  • Research ethics committees.
  • Funding agencies.

01. Researchers

Researchers are the frontline warriors in the fight against reporting bias: how they report their results directly determines whether, and to what degree, it arises.

They are responsible for ensuring their findings are reported transparently, accurately, and impartially. They can use several strategies to achieve this, such as:

  • Using multiple people to code the data.
  • Allowing participants to review the results.
  • Verifying with additional data sources.
  • Considering alternative explanations.

The responsibility of researchers doesn’t end there. They also need to actively mitigate the risks associated with publication bias. Here are some ways they can do this:

  • Find and include unpublished outcomes and studies.
  • Compare the results of both published and unpublished research.
  • Conduct sensitivity analyses.
  • Use registered reports.
  • Apply strong research methods.
  • Make sure the research process is transparent.

Ultimately, researchers need to ensure clear and honest reporting of their discoveries. They can achieve this by:

  • Registering their studies in advance.
  • Providing truthful and unbiased information.
  • Utilizing reporting guidelines checklists during writing and peer-review.

By following these steps, researchers can help ensure transparent and unbiased reporting of their research findings.

02. Medical Journal Editors

Medical journal editors control what gets published. They use methods such as double-blind review, require trial registration to guarantee transparent reporting of studies, and make sure that results are fully and accurately disclosed.

Medical journal editors act as gatekeepers to prevent reporting bias. They:

  • Evaluate manuscripts based on the strength of the study design, not just the outcomes.
  • Support efforts like COMPare and Registered Reports to minimize reporting bias.
  • Implement submission and review policies to promote transparent reporting.
  • Protect the rights of study participants.
  • Embrace initiatives and guidelines for openness.

Following these principles ensures that only well-conducted and transparent studies get published.

Medical journal editorial policies have evolved to address reporting bias better, covering aspects such as research culture, reporting biases, and statistical and methodological concerns. With these thorough policies, editors can reduce reporting bias and promote fair and transparent research.

03. Research Ethics Committees

Research ethics committees act as supervisors, ensuring that studies are presented transparently and without bias. They enforce ethical rules to protect the rights of participants, maintaining the honesty of scholarly work and encouraging clear reporting in research.

They use their special position to:

  • Identify and address bias.
  • Provide guidance.
  • Facilitate discussions.
  • Ensure a comprehensive review of the ethical aspects of research projects.

Research ethics committees also require publications to include enough information for others to replicate and review the research, ensuring that scientific findings are reported thoroughly and transparently.

In addition, research ethics committees have the authority to stop studies that violate established standards. By ensuring the safety and rights of research participants, they promote academic honesty and transparency in the reporting of research findings.

04. Funding Agencies

Funding agencies can support transparent research practices and play a crucial role in combating reporting bias. They can do this by:

  • Focusing on methods to reduce bias.
  • Providing funds to less-supported research areas.
  • Encouraging collaboration among researchers to address bias.
  • Taking proactive steps to consider historically marginalized groups when distributing research funds.

By taking these steps, and by requiring the researchers they fund to embrace transparency, funding agencies can significantly reduce reporting bias.

For example, the National Institutes of Health (NIH) enforces policies that mandate recipients to provide accurate, thorough, and timely reports for their supported research projects. This ensures comprehensive and transparent reporting of scientific findings.

However, funding agencies are also responsible for ensuring that their financial support does not lead to biased studies favoring their products or interests. This highlights the critical role of agencies in ensuring fair and unbiased research results that are not influenced by sponsor-related reporting bias.

Future Directions for Addressing Reporting Bias

What does the future hold for reporting bias? Technology and policy change might be the keys to making research more transparent and reducing bias in reporting.

Artificial intelligence (AI) and machine learning offer hope for finding and reducing reporting bias, through algorithms that can:

  • Find and reduce bias.
  • Analyze big sets of data for patterns.
  • Identify potential sources of bias.
  • Provide recommendations for unbiased reporting.

Technological Solutions

Technology has the power to change how we tackle reporting bias. Artificial intelligence (AI) and machine learning can make a significant impact by analyzing large amounts of data and identifying patterns, and these advances have the potential to decrease reporting bias substantially.

Blockchain technology is another promising solution. It can enhance the transparency of research reporting by verifying the authenticity of data sources, processing methods, and the data itself. Integrating technology and research could mark a new era in combating reporting bias.

Policy Changes

While technology has great potential, it’s important to combine it with practical policy changes to tackle reporting bias fully. Policies that penalize selective reporting can deter researchers from withholding information, keeping the evaluation of treatment risks and benefits honest. This ensures accurate information for medical professionals and policymakers.

Additionally, policy adjustments that emphasize the reproducibility of research can greatly decrease reporting bias.

Recent policy suggestions to address reporting bias in scientific research focus on the following:

  • Streamlining review criteria.
  • Modifying grant review processes to prioritize scientific quality.
  • Preventing political interference or inappropriate influence in research design, proposal, conduct, and reporting.

Implementing these policy adjustments can create a future where research is carried out and reported clearly and unbiasedly.

Utilizing QuestionPro Research in Detecting and Preventing Reporting Bias

QuestionPro is a comprehensive survey and research platform. It can play a significant role in detecting and preventing reporting bias. Using QuestionPro’s features can help researchers improve the reliability of their studies, identify possible reporting bias, and take steps to prevent it. Here’s how you can use QuestionPro for these purposes:

Utilizing QuestionPro for the Detection of Reporting Bias

  • Real-Time Monitoring: QuestionPro’s real-time monitoring feature can help you monitor how participants respond in real-time. If you notice sudden or unexpected patterns, it could suggest potential reporting bias.
  • Advanced Analytics: QuestionPro’s advanced analytics tools can help you examine how responses are grouped, find any unusual data points, and check the data distribution to spot potential biases.
  • Comparative Analysis: QuestionPro allows you to conduct a comparative analysis to see how different demographic groups or survey conditions affect the results. Discrepancies in answers could indicate potential biases linked to participant characteristics.
  • Response Time Analysis: QuestionPro allows you to examine participant response times. Unusually rapid or delayed responses may indicate careless or overdeliberated answers, providing insight into potential bias (see the flagging sketch after this list).
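As a sketch of what response-time screening can look like in general (the timings below are invented, and this is not QuestionPro’s own algorithm), robust statistics such as the median absolute deviation can flag implausibly fast or slow completions:

```python
import numpy as np

# Hypothetical survey completion times, in seconds.
response_times = np.array([42, 55, 48, 7, 61, 50, 390, 46, 52, 44])

median = np.median(response_times)
mad = np.median(np.abs(response_times - median))
# The factor 1.4826 makes the MAD comparable to a standard deviation
# for normally distributed data.
robust_z = (response_times - median) / (1.4826 * mad)

for i in np.where(np.abs(robust_z) > 3.5)[0]:
    print(f"Respondent {i}: {response_times[i]}s (robust z = {robust_z[i]:+.1f})")
```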

Utilizing QuestionPro for the Prevention of Reporting Bias

  • Anonymous Responses: QuestionPro allows anonymous responses in surveys. This can encourage participants to provide honest feedback, reducing the potential for social desirability bias.
  • Randomization and Rotation: You can use QuestionPro’s randomization and rotation features when designing surveys. This helps minimize order effects, reducing the impact of question sequencing on participant responses.
  • Predefined Answer Options: You can provide predefined answer options in QuestionPro surveys to standardize responses. This minimizes the chances of selective reporting and ensures consistency in participant responses.
  • Diverse Question Formats: QuestionPro allows you to use various question formats, including multiple-choice, open-ended, and scaled questions. A diverse set of question types can enhance the depth and accuracy of responses.
  • Response Validation Checks: QuestionPro allows you to implement response validation checks in your surveys to ensure data accuracy and reliability. Checks can identify inconsistent or biased responses during data collection.
  • Participant Screening: You can implement participant screening questions to ensure the survey sample is representative and diverse. This helps prevent bias introduced by an unrepresentative participant pool.

By using QuestionPro, researchers can use these strategies to enhance their ability to detect and prevent reporting bias, ensuring the reliability and validity of the collected survey data.

Reporting bias poses a significant challenge to research outcomes and healthcare decisions by shaping how evidence is perceived. Understanding its nature makes it possible to implement effective countermeasures.

Some important ways to address the problem include registering trials, practicing open science, and following reporting guidelines. These actions increase transparency, improve reporting quality, and create a fairer research environment.

It’s important to involve everyone with a stake in this, as sharing responsibilities can significantly reduce reporting issues and promote fairness and clarity. Using technology and having good rules can help create a future where research is done and reported without bias, making the scientific world more reliable and trustworthy.

The QuestionPro survey and research platform helps you detect and prevent reporting bias effectively. With advanced analytics, real-time monitoring, and diverse question formats, QuestionPro offers a robust set of tools to improve the quality of your survey data.

Ready to experience the difference? Take advantage of our free trial today.


Frequently Asked Questions

01. What is reporting bias in a systematic review?

Reporting bias in a systematic review refers to the selective publication or selective inclusion of research findings based on their nature, such as modifying review outcomes or choosing studies to highlight particular findings.

02. What is an example of selective reporting bias?

Selective reporting bias happens when, for example, a news outlet gives a particular political candidate preferential treatment, highlighting the candidate’s positive qualities and accomplishments while downplaying negative information or controversies. This can lead to a distorted view of the candidates.

03. How can stakeholders contribute to reducing reporting bias?

Stakeholders can contribute to reducing reporting bias by ensuring accurate and unbiased reporting of study results and promoting transparency through strategies like double-blind review and trial registration.


Tools for assessing risk of reporting biases in studies and syntheses of studies: a systematic review

BMJ Open, Volume 8, Issue 3

  • Matthew J Page 1,2
  • Joanne E McKenzie 1
  • Julian P T Higgins 2
  • 1 School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
  • 2 Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
  • Correspondence to Dr Matthew J Page; matthew.page{at}monash.edu

Background Several scales, checklists and domain-based tools for assessing risk of reporting biases exist, but it is unclear how much they vary in content and guidance. We conducted a systematic review of the content and measurement properties of such tools.

Methods We searched for potentially relevant articles in Ovid MEDLINE, Ovid Embase, Ovid PsycINFO and Google Scholar from inception to February 2017. One author screened all titles, abstracts and full text articles, and collected data on tool characteristics.

Results We identified 18 tools that include an assessment of the risk of reporting bias. Tools varied in regard to the type of reporting bias assessed (eg, bias due to selective publication, bias due to selective non-reporting), and the level of assessment (eg, for the study as a whole, a particular result within a study or a particular synthesis of studies). Various criteria are used across tools to designate a synthesis as being at ‘high’ risk of bias due to selective publication (eg, evidence of funnel plot asymmetry, use of non-comprehensive searches). However, the relative weight assigned to each criterion in the overall judgement is unclear for most of these tools. Tools for assessing risk of bias due to selective non-reporting guide users to assess a study, or an outcome within a study, as ‘high’ risk of bias if no results are reported for an outcome. However, assessing the corresponding risk of bias in a synthesis that is missing the non-reported outcomes is outside the scope of most of these tools. Inter-rater agreement estimates were available for five tools.

Conclusion There are several limitations of existing tools for assessing risk of reporting biases, in terms of their scope, guidance for reaching risk of bias judgements and measurement properties. Development and evaluation of a new, comprehensive tool could help overcome present limitations.

  • publication bias
  • bias (epidemiology)
  • review literature as topic

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

https://doi.org/10.1136/bmjopen-2017-019703


Strengths and limitations of this study

Tools for assessing risk of reporting biases, and studies evaluating their measurement properties, were identified by searching several relevant databases using a search string developed in conjunction with an information specialist.

Detailed information on the content and measurement properties of existing tools was collected, providing readers with pertinent information to help decide which tools to use in evidence syntheses.

Screening of articles and data collection were performed by one author only, so it is possible that some relevant articles were missed, or that errors in data collection were made.

The search of grey literature was not comprehensive, so it is possible that there are other tools for assessing risk of reporting biases, and unpublished studies evaluating measurement properties, that were omitted from this review.

Background 

The credibility of evidence syntheses can be compromised by reporting biases, which arise when dissemination of research findings is influenced by the nature of the results. 1 For example, there may be bias due to selective publication, where a study is only published if the findings are considered interesting (also known as publication bias). 2 In addition, bias due to selective non-reporting may occur, where findings (eg, estimates of intervention efficacy or an association between exposure and outcome) that are statistically non-significant are not reported or are partially reported in a paper (eg, stating only that ‘P>0.05’). 3 Alternatively, there may be bias in selection of the reported result, where authors perform multiple analyses for a particular outcome/association, yet only report the result which yielded the most favourable effect estimate. 4 Evidence from cohorts of clinical trials followed from inception suggests that biased dissemination is common. Specifically, on average, half of all trials are not published, 1 5 trials with statistically significant results are twice as likely to be published, 5 and a third of trials have outcomes that are omitted, added or modified between protocol and publication. 6
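A minimal simulation sketch (ours, not the review’s; Python with NumPy and SciPy assumed) illustrates the consequence of that two-fold publication advantage: when significant trials are always published but non-significant ones only half the time, the average effect in the published literature overstates the true effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n_per_arm, n_trials = 0.1, 40, 2000

effects, published = [], []
for _ in range(n_trials):
    treat = rng.normal(true_effect, 1.0, n_per_arm)
    ctrl = rng.normal(0.0, 1.0, n_per_arm)
    effects.append(treat.mean() - ctrl.mean())
    p = stats.ttest_ind(treat, ctrl).pvalue
    # Significant trials are always published; the rest, half the time.
    published.append(p < 0.05 or rng.random() < 0.5)

effects, published = np.array(effects), np.array(published)
print(f"True effect:               {true_effect:.2f}")
print(f"Mean estimate, all trials: {effects.mean():.2f}")
print(f"Mean estimate, published:  {effects[published].mean():.2f}")  # inflated
```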

Audits of systematic review conduct suggest that most systematic reviewers do not assess risk of reporting biases. 7–10 For example, in a cross-sectional study of 300 systematic reviews indexed in MEDLINE in February 2014, 7 the risk of bias due to selective publication was not considered in 56% of reviews. A common reason for not doing so was that the small number of included studies, or inability to perform a meta-analysis, precluded the use of funnel plots. Only 19% of reviews included a search of a trial registry to identify completed but unpublished trials or prespecified but non-reported outcomes, and only 7% included a search of another source of data disseminated outside of journal articles. The risk of bias due to selective non-reporting in the included studies was assessed in only 24% of reviews. 7 Another study showed that authors of Cochrane reviews routinely record whether any outcomes that were measured were not reported in the included trials, yet rarely consider if such non-reporting could have biased the results of a synthesis. 11

Previous researchers have summarised the characteristics of tools designed to assess various sources of bias in randomised trials, 12–14 non-randomised studies of interventions (NRSI), 14 15 diagnostic test accuracy studies 16 and systematic reviews. 14 17 Others have summarised the performance of statistical methods developed to detect or adjust for reporting biases. 18–20 However, no prior review has focused specifically on tools (ie, structured instruments such as scales, checklists or domain-based tools) for assessing the risk of reporting biases. A particular challenge when assessing risk of reporting biases is that existing tools vary in their level of assessment. For example, tools for assessing risk of bias due to selective publication direct assessments at the level of the synthesis, whereas tools for assessing risk of bias due to selective non-reporting within studies can direct assessments at the level of the individual study, at the level of the synthesis or at both levels. It is unclear how many tools are available to assess different types of reporting bias, and what level they direct assessments at. It is also unclear whether criteria for reaching risk of bias judgements are consistent across existing tools. Therefore, the aim of this research was to conduct a systematic review of the content and measurement properties of such tools.

Methods for this systematic review were prespecified in a protocol which was uploaded to the Open Science Framework in February 2017 ( https://osf.io/9ea22/ ).

Eligibility criteria

Papers were included if the authors described a tool that was designed for use by individuals performing evidence syntheses to assess risk of reporting biases in the included studies or in their synthesis of studies. Tools could assess any type of reporting bias, including bias due to selective publication, bias due to selective non-reporting or bias in selection of the reported result. Tools could assess the risk of reporting biases in any type of study (eg, randomised trial of intervention, diagnostic test accuracy study, observational study estimating prevalence of an exposure) and in any type of result (eg, estimate of intervention efficacy or harm, estimate of diagnostic accuracy, association between exposure and outcome). Eligible tools could take any form, including scales, checklists and domain-based tools. To be considered a scale, each item had to have a numeric score attached to it, so that an overall summary score could be calculated. 12 To be considered a checklist, the tool had to include multiple questions, but the developers’ intention was not to attach a numerical score to each response, or to calculate an overall score. 13 Domain-based tools were those that required users to judge risk of bias or quality within specific domains, and to record the information on which each judgement was based. 21

Tools with a broad scope, for example, to assess multiple sources of bias or the overall quality of the body of evidence, were eligible if one of the items covered risk of reporting bias. Multidimensional tools with a statistical component were also eligible (eg, those that require users to respond to a set of questions about the comprehensiveness of the search, as well as to perform statistical tests for funnel plot asymmetry). In addition, any studies that evaluated the measurement properties of existing tools (eg, construct validity, inter-rater agreement, time taken to complete assessments) were eligible for inclusion. Papers were eligible regardless of the date or format of publication, but were limited to those written in English.

The following were ineligible:

articles or book chapters providing guidance on how to address reporting biases, but which do not include a structured tool that can be applied by users (eg, the 2011 Cochrane Handbook chapter on reporting biases 22 );

tools developed or modified for use in one particular systematic review;

tools designed to appraise published systematic reviews, such as the Risk Of Bias In Systematic reviews (ROBIS) tool 23 or A MeaSurement Tool to Assess systematic Reviews (AMSTAR) 24 ;

articles that focus on the development or evaluation of statistical methods to detect or adjust for reporting biases, as these have been reviewed elsewhere. 18–20

Search methods

On 9 February 2017, one author (MJP) searched for potentially relevant records in Ovid MEDLINE (January 1946 to February 2017), Ovid Embase (January 1980 to February 2017) and Ovid PsycINFO (January 1806 to February 2017). The search strategies included terms relating to reporting bias which were combined with a search string used previously by Whiting et al to identify risk of bias/quality assessment tools 17 (see full Boolean search strategies in online supplementary table S1 ).

Supplementary file 1

To capture any tools not published by formal academic publishers, we searched Google Scholar using the phrase ‘reporting bias tool OR risk of bias’. One author (MJP) screened the titles of the first 300 records, as recommended by Haddaway et al . 25 To capture any papers that may have been missed by all searches, one author (MJP) screened the references of included articles. In April 2017, the same author emailed the list of included tools to 15 individuals with expertise in reporting biases and risk of bias assessment, and asked if they were aware of any other tools we had not identified.

Study selection and data collection

One author (MJP) screened all titles and abstracts retrieved by the searches. The same author screened any full-text articles retrieved. One author (MJP) collected data from included papers using a standardised data-collection form. The following data on included tools were collected:

type of tool (scale, checklist or domain-based tool);

types of reporting bias addressed by the tool;

level of assessment (ie, whether users direct assessments at the synthesis or at the individual studies included in the synthesis);

whether the tool is designed for general use (generic) or targets specific study designs or topic areas (specific);

items included in the tool;

how items within the tool are rated;

methods used to develop the tool (eg, Delphi study, expert consensus meeting);

availability of guidance to assist with completion of the tool (eg, guidance manual).

The following data from studies evaluating measurement properties of an included tool were collected:

tool evaluated

measurement properties evaluated (eg, inter-rater agreement)

number of syntheses/studies evaluated

publication year of syntheses/studies evaluated

areas of healthcare addressed by syntheses/studies evaluated

number of assessors

estimate (and precision) of psychometric statistics (eg, weighted kappa; κ).

Data analysis

We summarised the characteristics of included tools in tables. We calculated the median (IQR) number of items across all tools, and tabulated the frequency of different criteria used in tools to denote a judgement of ‘high’ risk of reporting bias. We summarised estimates of psychometric statistics, such as weighted κ to estimate inter-rater agreement, 26 by reporting the range of values across studies. For studies reporting weighted κ, we categorised agreement according to the system proposed by Landis and Koch, 27 as poor (0.00), slight (0.01–0.20), fair (0.21–0.40), moderate (0.41–0.60), substantial (0.61–0.80) or almost perfect (0.81–1.00).
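As a concrete illustration of this part of the analysis (the ratings below are invented; scikit-learn’s cohen_kappa_score supports linear and quadratic weighting), a weighted κ can be computed and mapped onto the Landis and Koch categories as follows:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical risk-of-bias judgements from two assessors for ten studies:
# 0 = low, 1 = unclear, 2 = high risk of reporting bias.
rater_a = [0, 1, 2, 2, 0, 1, 1, 2, 0, 2]
rater_b = [0, 1, 2, 1, 0, 1, 2, 2, 1, 2]

kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")

def landis_koch(k: float) -> str:
    """Map a kappa value onto the Landis and Koch agreement categories."""
    if k <= 0.0:
        return "poor"
    for upper, label in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                         (0.80, "substantial"), (1.00, "almost perfect")]:
        if k <= upper:
            return label
    return "almost perfect"

print(f"Linear weighted kappa = {kappa:.2f} ({landis_koch(kappa)} agreement)")
```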

Results

In total, 5554 records were identified from the searches, of which we retrieved 165 for full-text screening ( figure 1 ). The inclusion criteria were met by 42 reports summarising 18 tools ( table 1 ) and 17 studies evaluating the measurement properties of tools. 3 4 21 28–66 A list of excluded papers is presented in online supplementary table S2 . No additional tools were identified by the 15 experts contacted.

Supplementary file 2

Table 1. List of included tools

Figure 1. Flow diagram of identification, screening and inclusion of studies. a Records identified from Ovid MEDLINE, Ovid Embase, Ovid PsycINFO and Google Scholar. b Records identified from screening references of included articles. SR, systematic review.

General characteristics of included tools

Nearly all of the included tools (16/18; 89%) were domain-based, where users judge risk of bias or quality within specific domains ( table 2 ; individual characteristics of each tool are presented in online supplementary table S3 ). All tools were designed for generic rather than specific use. Five tools focused solely on the risk of reporting biases 3 28 29 47 48 ; the remainder addressed reporting biases and other sources of bias/methodological quality (eg, problems with randomisation, lack of blinding). Half of the tools (9/18; 50%) addressed only one type of reporting bias (eg, bias due to selective non-reporting only). Tools varied in regard to the study design that they assessed (ie, randomised trial, non-randomised study of an intervention, laboratory animal experiment). The publication year of the tools ranged from 1998 to 2016 (the earliest was the Downs-Black tool, 31 a 27-item tool assessing multiple sources of bias, one of which focuses on risk of bias in the selection of the reported result).

Supplementary file 3

Table 2. Summary of general characteristics of included tools

Assessments for half of the tools (9/18; 50%) are directed at an individual study (eg, tool is used to assess whether any outcomes in a study were not reported). In 5/18 (28%) tools, assessments are directed at a specific outcome or result within a study (eg, tool is used to assess whether a particular outcome in a study , such as pain, was not reported). In a few tools (4/18; 22%), assessments are directed at a specific synthesis (eg, tool is used to assess whether a particular synthesis , such as a meta-analysis of studies examining pain as an outcome, is missing unpublished studies).

The content of the included tools was informed by various sources of data. The most common included a literature review of items used in existing tools or a literature review of empirical evidence of bias (9/18; 50%), ideas generated at an expert consensus meeting (8/18; 44%) and pilot feedback on a preliminary version of the tool (7/18; 39%). The most common type of guidance available for the tools was a brief annotation per item/response option (9/18; 50%). A detailed guidance manual is available for four (22%) tools.

Tool content

Four tools include items for assessing risk of bias due to both selective publication and selective non-reporting. 29 33 45 49 One of these tools (the AHRQ tool for evaluating the risk of reporting bias 29 ) directs users to assess a particular synthesis, where a single risk of bias judgement is made based on information about unpublished studies and under-reported outcomes. In the other three tools (the GRADE framework, and two others which are based on GRADE), 33 45 49 the different sources of reporting bias are assessed in separate domains (bias due to selective non-reporting is considered in a ‘study limitations (risk of bias)’ domain, while bias due to selective publication is considered in a ‘publication bias’ domain).

Five tools 21 28 43 44 47 guide users to assess risk of bias due to both selective non-reporting and selection of the reported result (ie, problems with outcomes/results that are not reported and those that are reported, respectively). Four of these tools, which include the Cochrane risk of bias tool for randomised trials 21 and three others which are based on the Cochrane tool, 43 44 47 direct assessments at the study level. That is, a whole study is rated at ‘high’ risk of reporting bias if any outcome/result in the study has been omitted, or fully reported, on the basis of the findings.

Some of the tools designed to assess the risk of bias due to selective non-reporting ask users to assess, for particular outcomes of interest, whether the outcome was not reported or only partially reported in the study on the basis of its results (eg, Outcome Reporting Bias In Trials (ORBIT) tools, 3 48 the AHRQ outcome reporting bias framework, 28 and GRADE 34 ). This allows users to perform multiple outcome-level assessments of the risk of reporting bias (rather than one assessment for the study as a whole). In total, 15 tools include a mechanism for assessing risk of bias due to selective non-reporting in studies, but assessing the corresponding risk of bias in a synthesis that is missing the non-reported outcomes is not within the scope of 11 of these tools. 3 21 28 30 38 43 44 47 48 51 52

A variety of criteria are used in existing tools to inform a judgement of ‘high’ risk of bias due to selective publication ( table 3 ), selective non-reporting ( table 4 ), and selection of the reported result ( table 5 ; more detail is provided in online supplementary table S4 ). In the four tools with an assessment of risk of bias due to selective publication, ‘high’ risk criteria include evidence of funnel plot asymmetry, discrepancies between published and unpublished studies, use of non-comprehensive searches and presence of small, ‘positive’ studies with for-profit interest ( table 3 ). However, not all of these criteria appear in all tools (only evidence of funnel plot asymmetry does), and the relative weight assigned to each criterion in the overall risk of reporting bias judgement is clear for only one tool (the Semi-Automated Quality Assessment Tool; SAQAT). 45 46

Supplementary file 4

Table 3. Criteria used in existing tools to inform a judgement of ‘high’ risk of bias due to selective publication

Table 4. Criteria used in existing tools to inform a judgement of ‘high’ risk of bias due to selective non-reporting

Table 5. Criteria used in existing tools to inform a judgement of ‘high’ risk of bias in selection of the reported result

All 15 tools with an assessment of the risk of bias due to selective non-reporting suggest that the risk of bias is ‘high’ when it is clear that an outcome was measured but no results were reported ( table 4 ). Fewer of these tools (n=8; 53%) also recommend a ‘high’ risk judgement when results for an outcome are partially reported (eg, it is stated that the result was non-significant, but no effect estimate or summary statistics are presented).
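None of these tools automates such judgements, but the ‘partially reported’ criterion lends itself to simple text screening. The sketch below is our own illustration, with invented sentences and deliberately crude patterns, flagging sentences that declare non-significance without giving an effect estimate:

```python
import re

text = ("The difference in pain scores was not significant (P>0.05). "
        "Mortality was reduced by 12% (95% CI 4% to 19%; P=0.003).")

# Flag sentences that state non-significance but report no effect
# estimate or confidence interval alongside it.
for sentence in re.split(r"(?<=[.!?])\s+", text):
    states_nonsig = re.search(
        r"P\s*[>\u2265]\s*0?\.05|not (statistically )?significant",
        sentence, re.IGNORECASE)
    gives_estimate = re.search(r"\d+(\.\d+)?\s*%|\bCI\b", sentence)
    if states_nonsig and not gives_estimate:
        print("Possible partial reporting:", sentence)
```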

The eight tools that include an assessment of the risk of bias in selection of the reported result recommend various criteria for a ‘high’ risk judgement ( table 5 ). These include when some outcomes that were not prespecified are added post hoc (in 4 (50%) tools), or when it is likely that the reported result for a particular outcome has been selected, on the basis of the findings, from among multiple outcome measurements or analyses within the outcome domain (in 2 (25%) tools).

General characteristics of studies evaluating measurement properties of included tools

Despite identifying 17 studies that evaluated measurement properties of an included tool, psychometric statistics for the risk of reporting bias component were available only from 12 studies 43 44 54–60 62 64 66 (the other five studies include only data on properties of the multidimensional tool as a whole 31 53 61 63 65 ; online supplementary table S5 ). Nearly all 12 studies (11; 92%) evaluated inter-rater agreement between two assessors; eight of these studies reported weighted κ values, but only two described the weighting scheme. 55 62 Eleven studies 43 44 54–60 64 66 evaluated the measurement properties of tools for assessing risk of bias in a study due to selective non-reporting or risk of bias in selection of the reported result; in these 11 studies, a median of 40 (IQR 32–109) studies were assessed. One study 62 evaluated a tool for assessing risk of bias in a synthesis due to selective publication, in which 44 syntheses were assessed. In the studies evaluating inter-rater agreement, all involved two assessors.

Supplementary file 5

Results of evaluation studies.

Five studies 54 56–58 60 included data on the inter-rater agreement of assessments of risk of bias due to selective non-reporting using the Cochrane risk of bias tool for randomised trials 21 ( table 6 ). Weighted κ values in four studies 54 56–58 ranged from 0.13 to 0.50 (sample size ranged from 87 to 163 studies), suggesting slight to moderate agreement. 27 In the other study, 60 the per cent agreement in selective non-reporting assessments in trials that were included in two different Cochrane reviews was low (43% of judgements were in agreement). Two other studies found that inter-rater agreement of selective non-reporting assessments were substantial for SYRCLE’s RoB tool (κ=0.62, n=32), 43 but poor for the RoBANS tool (κ=0, n=39). 44 There was substantial agreement between raters in the assessment of risk of bias due to selective publication using the SAQAT (κ=0.63, n=29). 62 The inter-rater agreement of assessments of risk of bias in selection of the reported result using the ROBINS-I tool 4 was moderate for NRSI included in a review of the effect of cyclooxygenase-2 inhibitors on cardiovascular events (κ=0.45, n=21), and substantial for NRSI included in a review of the effect of thiazolidinediones on cardiovascular events (κ=0.78, n=16). 55

Table 6. Reported measurement properties of tools with an assessment of the risk of reporting bias

Discussion

From a systematic search of the literature, we identified 18 tools designed for use by individuals performing evidence syntheses to assess risk of reporting biases in the included studies or in their synthesis of studies. The tools varied with regard to the type of reporting bias assessed (eg, bias due to selective publication, bias due to selective non-reporting), and the level of assessment (eg, for the study as a whole, a particular outcome within a study or a particular synthesis of studies). Various criteria are used across tools to designate a synthesis as being at ‘high’ risk of bias due to selective publication (eg, evidence of funnel plot asymmetry, use of non-comprehensive searches). However, the relative weight assigned to each criterion in the overall judgement is not clear for most of these tools. Tools for assessing risk of bias due to selective non-reporting guide users to assess a study, or an outcome within a study, as ‘high’ risk of bias if no results are reported for an outcome. However, assessing the corresponding risk of bias in a synthesis that is missing the non-reported outcomes is outside the scope of most of these tools. Inter-rater agreement estimates were available for five tools, 4 21 43 44 62 and ranged from poor to substantial; however, the sample sizes of most evaluations were small, and few described the weighting scheme used to calculate κ.

Strengths and limitations

There are several strengths of this research. Methods were conducted in accordance with a systematic review protocol ( https://osf.io/9ea22/ ). Published articles were identified by searching several relevant databases using a search string developed in conjunction with an information specialist, 17 and by contacting experts to identify tools missed by the search. Detailed information on the content and measurement properties of existing tools was collected, providing readers with pertinent information to help decide which tools to use in future reviews. However, the findings need to be considered in light of some limitations. Screening of articles and data collection were performed by one author only. It is therefore possible that some relevant articles were missed, or that errors in data collection were made. The search for unpublished tools was not comprehensive (only Google Scholar was searched), so it is possible that other tools for assessing risk of reporting biases exist. Further, restricting the search to articles in English was done to expedite the review process, but may have resulted in loss of information about tools written in other languages, and additional evidence on measurement properties of tools.

Comparison with other studies

Other systematic reviews of risk of bias tools 12–17 have restricted inclusion to tools developed for particular study designs (eg, randomised trials, diagnostic test accuracy studies), where the authors recorded all the sources of bias addressed. A different approach was taken in the current review, where all tools (regardless of study design) that address a particular source of bias were examined. By focusing on one source of bias only, the analysis of included items and criteria for risk of bias judgements was more detailed than that recorded previously. Some of the existing reviews of tools 15 considered tools that were developed or modified in the context of a specific systematic review. However, such tools were excluded from the current review as they are unlikely to have been developed systematically, 15 67 and are difficult to find (all systematic reviews conducted during a particular period would need to have been examined for the search to be considered exhaustive).

Explanations and implications

Of the 18 tools identified, only four (22%) included a mechanism for assessing risk of bias due to selective publication, which is the type of reporting bias that methodologists have investigated most often. 2 This is perhaps unsurprising given that hundreds of statistical methods to ‘detect’ or ‘adjust’ for bias due to selective publication have been developed. 18 These statistical methods may be considered by methodologists and systematic reviewers as the tools of choice for assessing this type of bias. However, applying these statistical methods without considering other factors (eg, the existence of registered but unpublished studies, or conflicts of interest that may influence investigators not to disseminate studies with unfavourable results) is not sufficiently comprehensive, and could lead to incorrect conclusions about the risk of bias due to selective publication. Further, these statistical approaches have many limitations, in terms of their underlying assumptions, their statistical power (which is often low, because most meta-analyses include few studies 7 ) and the need for specialist statistical software to apply them. 19 68 These factors may have limited their use in practice and potentially explain why a large number of systematic reviewers currently ignore the risk of bias due to selective publication. 7–9 69
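As a concrete illustration (not a method prescribed by any of the included tools), the sketch below applies one widely used approach, Egger's regression test for funnel plot asymmetry, to invented effect estimates; as argued above, a significant intercept suggests small-study effects but cannot by itself establish bias due to selective publication.

```python
# A minimal sketch of Egger's regression test; the effect sizes and
# standard errors below are invented for illustration.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.42, 0.35, 0.51, 0.60, 0.28, 0.75, 0.44, 0.80])  # e.g. log odds ratios
ses = np.array([0.10, 0.12, 0.15, 0.20, 0.08, 0.25, 0.11, 0.30])      # their standard errors

# Regress the standardised effect on precision; an intercept far from
# zero indicates funnel plot asymmetry (small-study effects).
snd = effects / ses          # standard normal deviate
precision = 1.0 / ses
fit = sm.OLS(snd, sm.add_constant(precision)).fit()

print("Egger intercept:", fit.params[0])
print("p-value for the intercept:", fit.pvalues[0])
```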

Our analysis suggests that the factors that need to be considered to assess risk of reporting biases adequately (eg, comprehensiveness of the search, amount of data missing from the synthesis due to unpublished studies and under-reported outcomes) are fragmented across existing tools. A similar problem existed a decade ago in the assessment of risk of bias in randomised trials: some authors assessed only problems with randomisation, while others focused on whether trials were not ‘double blinded’ or had any missing participant data. 70 It was not until all the important bias domains were brought together into a structured, domain-based tool for assessing the risk of bias in randomised trials 21 that systematic reviewers started to consider risk of bias in trials comprehensively. A similar initiative to link all the components needed to judge the risk of reporting biases into a comprehensive new tool may improve the credibility of evidence syntheses.

In particular, there is a pressing need for a new tool to assess the risk that a synthesis is affected by reporting biases. This tool could guide users to consider risk of bias in a synthesis due to both selective publication and selective non-reporting, given that both practices lead to the same consequence: evidence missing from the synthesis. 11 Such a tool would complement recently developed tools for assessing risk of bias within studies (RoB 2.0 41 and ROBINS-I, 4 which include a domain for assessing the risk of bias in selection of the reported result, but no mechanism to assess risk of bias due to selective non-reporting). Careful thought would need to be given to how to weigh up the various pieces of information underpinning the risk of bias judgement. For example, users will need guidance on how evidence of known, unpublished studies (as identified from trial registries, protocols or regulatory documents) should be considered alongside evidence that is more speculative (eg, funnel plots suggesting that studies may be missing). Further, guidance for the tool will need to emphasise the value of seeking documents other than published journal articles (eg, protocols) to inform risk of bias judgements. Preparation of a detailed guidance manual may enhance the usability of the tool, minimise misinterpretation and increase reliability in assessments. Once developed, evaluations of the measurement properties of the tool, such as inter-rater agreement and construct validity, should be conducted to explore whether modifications to the tool are necessary.

Conclusions

There are several limitations of existing tools for assessing risk of reporting biases in studies or syntheses of studies, in terms of their scope, guidance for reaching risk of bias judgements and measurement properties. Development and evaluation of a new, comprehensive tool could help overcome present limitations.


Contributors MJP conceived and designed the study, collected data, analysed the data and wrote the first draft of the article. JEM and JPTH provided input on the study design and contributed to revisions of the article. All authors approved the final version of the submitted article.

Funding MJP is supported by an Australian National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088535). JEM is supported by an NHMRC Australian Public Health Fellowship (1072366). JPTH is funded in part by Cancer Research UK Programme Grant C18281/A19169; is a member of the MRC Integrative Epidemiology Unit at the University of Bristol, which is supported by the UK Medical Research Council and the University of Bristol (grant MC_UU_12013/9); and is a member of the MRC ConDuCT-II Hub (Collaboration and innovation for Difficult and Complex randomised controlled Trials In Invasive procedures; grant MR/K025643/1).

Competing interests JPTH led or participated in the development of four of the included tools (the current Cochrane risk of bias tool for randomised trials, the RoB 2.0 tool for assessing risk of bias in randomised trials, the ROBINS-I tool for assessing risk of bias in non-randomised studies of interventions and the framework for assessing quality of evidence from a network meta-analysis). MJP participated in the development of one of the included tools (the RoB 2.0 tool for assessing risk of bias in randomised trials). All authors are participating in the development of a new tool for assessing risk of reporting biases in systematic reviews.

Patient consent Not required.

Provenance and peer review Not commissioned; externally peer reviewed.

Data sharing statement The study protocol, data collection form, and the raw data and statistical analysis code for this study are available on the Open Science Framework: https://osf.io/3jdaa/



Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models.

Chen F, Wang L, Hong J, et al. Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. J Am Med Inform Assoc. 2024;Epub Mar 23. doi:10.1093/jamia/ocae060.

When biased data are used for research, the results may reflect the same biases if appropriate precautions are not taken. In this systematic review, researchers describe possible types of bias (e.g., implicit, selection) that can result from research with artificial intelligence (AI) using electronic health record (EHR) data. Along with recommendations to reduce introducing bias into the data model, the authors stress the importance of standardized reporting of model development and real-world testing.



Child abuse reports by medical staff linked to children’s race, Stanford Medicine study finds

Over-reporting of Black children and under-reporting of white children as suspected abuse victims suggests systemic bias from medical providers, Stanford Medicine research shows.

February 6, 2023 - By Erin Digitale

Stanford researchers have found that medical professionals are less likely to report suspected abuse when an injured child is white.

Black children are over-reported as suspected victims of child abuse when they have traumatic injuries, even after accounting for poverty, according to new research from the Stanford School of Medicine .

The study , which drew on a national database of nearly 800,000 traumatic injuries in children, appears in the February issue of the Journal of Pediatric Surgery . It also found evidence that injuries in white children are under-reported as suspected abuse.

The study highlights the potential for bias in doctors’ and nurses’ decisions about which injuries should be reported to Child Protective Services, according to the researchers. Medical caregivers are mandated reporters, obligated to report to CPS any situations in which they think children may be victims of abuse. Because caregivers rarely admit to injuring their children, such reports rely in part on providers’ gut feelings, making them susceptible to unconscious, systemic bias.

Bias can harm both Black and white children, said senior study author Stephanie Chao , MD, assistant professor of surgery at Stanford Medicine. The study’s lead author is Modupeola Diyaolu, MD, a resident in general surgery at Stanford Medicine.

“If you over-identify cases of suspected child abuse, you’re separating children unnecessarily from their families and creating stress that lasts a lifetime,” Chao said. “But child abuse is extremely deadly, and if you miss one event — maybe a well-to-do Caucasian child where you think ‘No way’ — you may send that child back unprotected to a very dangerous environment. The consequences are really sad and devastating on both sides.”

Distinguishing race and poverty

Racial disparities in reporting child abuse have been documented before, but prior studies have not controlled well for poverty, which is a risk factor for abuse. Some experts argue that disproportionate reporting of injured Black children as possible abuse victims reflects only that their families tend to have lower incomes, not that medical professionals are subject to bias. Chao’s team wanted to clarify the debate.

The new study drew on data from the National Trauma Data Bank, which is maintained by the American College of Surgeons. The researchers studied records of nearly 800,000 traumatic injuries that occurred in children ages 1 to 17 from 2010 to 2014 and from 2016 to 2017. Of these injuries, 1% were suspected to be caused by abuse, based on medical codes used to report different types of abuse. The researchers controlled their findings for whether children had public or private insurance as a marker for family income.

Suspected victims of child abuse were younger (a median age of 2 versus 10 years), more likely to have public insurance (77% versus 43%) and more likely to be admitted to the intensive care unit (68% versus 48%) than the general population of children with traumatic injuries. Suspected child abuse victims also were 10 times as likely as the general population of children with traumatic injuries to die of their injuries in the hospital, with 8.2% of suspected abuse victims versus 0.84% of all children with traumatic injuries dying during hospitalization.


Similar proportions of children in the suspected child abuse group and in the general population of injured children were of Asian, Native Hawaiian/Pacific Islander, American Indian and “other” races, and similar proportions of both groups were of Hispanic or Latino ethnicity.

However, Black patients were over-represented among suspected child abuse victims, comprising 33% of suspected child abuse victims and 18% of the general population of injured children. White children comprised 51% of suspected child abuse victims and 66% of the general population of injured children.

“Even when we control for income — in this case, via insurance type — African American children are still significantly over-represented as suspected victims of child abuse,” said Chao. “In addition, they were reported with lower injury severity scores, meaning there was more suspicion for children with less-severe injuries in one particular racial group.”

In general, the researchers found medical professionals had a higher threshold for suspecting white families of abuse and a lower threshold for suspecting Black families. For example, white children in the suspected abuse group were more likely than Black children to have worse injuries, and they were more likely to have been admitted to the intensive care unit.

Implementing universal screening

Chao and her colleagues are designing more equitable ways to screen injured children for possible abuse. An important element, she said, is to make the screening universal so evaluation for possible abuse is not initiated primarily by medical providers’ gut feelings.

Chao created a universal screening system, in use at Stanford Medicine Children’s Health since 2019, in which every time a child younger than 6 years old is evaluated for an injury sustained in a private home, the electronic medical record automatically sends an alert to the organization’s child abuse team. Composed of pediatricians and social workers with specialized training in abuse detection, the team checks the medical record for other indications of abuse. In most cases, no such signals are found, and the entire process occurs behind the scenes. However, if the medical record shows any red flags, the medical staff who admitted the patient to the emergency department or hospital can be alerted to consider whether further work-up or a CPS report is warranted.

Chao is also now working with Epic, the nation’s largest electronic medical record company, to include an automated child abuse screening tool in its system. The screening tool will be tested at several medical institutions later this year.

Chao hopes the work will improve the accuracy of CPS reports, especially when it comes to reducing the impact of medical providers’ unconscious bias.

“Everyone means well here, but the consequences of getting these reports wrong are pretty dire in either direction,” she said. “If we don’t recognize bias and always chalk it up to something else, we can’t fix the problem in a thoughtful way. Now, I hope we can recognize it and work toward a solution.”

The study was funded by the National Center for Advancing Translational Sciences (grant KL2TR003143).



NPR’s new CEO Katherine Maher haunted by woke, anti-Trump tweets as veteran editor claims bias


The woke, anti-Trump tweets of NPR’s new CEO are coming back to haunt her after she struggled to refute bombshell charges of journalistic bias lodged this week by a veteran editor.

Award-winning NPR business editor Uri Berliner’s lengthy essay in The Free Press was “profoundly disrespectful, hurtful, and demeaning,” Katherine Maher, the radio network’s 42-year-old president, complained in a letter to staffers .

“Our people represent America, our irreducibly complex nation,” Maher wrote Friday, in a response that did not address Berliner’s evidence of the news organization’s relentlessly leftist slant.

“We succeed through our diversity.”


But in January, when Maher was announced as NPR’s new leader, The Post revealed her penchant for parroting the progressive line on social media — including bluntly biased Twitter posts like “Donald Trump is a racist,” which she wrote in 2018.

That hyper-partisan message was scrubbed from the platform now known as X, but preserved on the site Archive.Today.

It’s unclear when Maher deleted it, or if its removal was tied to her new gig.

Other woke posts remain on Maher’s X account.

In 2020, as the George Floyd riots raged, she attempted to justify the looting epidemic in Los Angeles as payback for the sins of slavery.

“I mean, sure, looting is counterproductive,” Maher wrote on May 31, 2020 .


“But it’s hard to be mad about protests not prioritizing the private property of a system of oppression founded on treating people’s ancestors as private property.”

The next day, she lectured her 27,000 followers on “white silence.”

“White silence is complicity,” she scolded .

“If you are white, today is the day to start a conversation in your community.”


The NPR job is Maher’s first position in journalism or media.

She was previously the CEO of the Wikimedia Foundation, the San Francisco-based nonprofit that hosts Wikipedia, after holding communications roles for the likes of HSBC, UNICEF and the World Bank.

Maher earned a bachelor’s degree in Middle Eastern and Islamic studies from New York University, according to her LinkedIn account , and grew up in Wilton, Conn. — a town that her mother, Ceci Maher, now represents as a Democratic state senator.

Everything you need to know about the NPR political bias scandal

  • Veteran NPR editor Uri Berliner wrote a bombshell essay that claimed the broadcaster allowed liberal bias to affect its coverage. The senior business editor also said the internal culture at NPR had made race and identity “paramount in nearly every aspect of the workplace.”
  • Berliner slammed NPR for ignoring the Hunter Biden laptop scandal,  and claimed a co-worker said he was happy the network wasn’t pursuing the story because it could help Donald Trump get re-elected.
  • Berliner was suspended without pay following the essay and announced his resignation Wednesday .
  • Berliner blasted NPR’s controversial new CEO, Katherine Maher, who previously posted hyper-partisan tweets , saying that she is the “opposite” of what the embattled radio outlet needs.
  • After the essay was published, Berliner said, he received  “a lot of support from colleagues, and many of them unexpected, who say they agree with me.”
  • Berliner’s essay prompted new calls from Republican lawmakers to strip NPR of government funding.

On Tuesday, Berliner made waves with an essay slamming the left-leaning broadcaster for ignoring  the Hunter Biden laptop scandal  in 2020 for fear it could have helped Donald Trump get re-elected — calling out his bosses for turning NPR into “an openly polemical news outlet serving a niche audience.”

He also took his longtime employer to task for its coverage of the since-debunked Russia collusion saga — saying NPR “hitched our wagon to Trump’s most visible antagonist,” Rep. Adam Schiff (D-Calif.).

Berliner did some investigative journalism on his workplace to understand the reason for its coverage choices, he wrote.

“In DC, where NPR is headquartered and many of us live, I found 87 registered Democrats working in editorial positions and zero Republicans,” he reported.

“None.”

Maher’s Friday letter did not mention this finding, or debunk any of Berliner’s bias claims.

NPR did not immediately respond to a request for comment.


eLife Latest: April 2024 update on our actions to promote equity, diversity and inclusion


2023 was a transformative year for eLife. Now, as 2024 marks us entering the second year of our new publishing model , we report on the actions we have taken to promote greater equity, diversity and inclusion in science and publishing since our last update , as well as our plans for the months ahead. As before, our efforts will focus on four key areas:

  • Supporting inclusive and empowered communities
  • Addressing bias in peer review
  • Encouraging inclusive and equitable research
  • Underpinning action with equitable infrastructure

Questions and comments on this update are welcome. Please feel free to share via a comment on this blog post or via email to [email protected]. Anonymous feedback may be shared via this form.

Report prepared by: Stuart King, Research Culture Manager

Last year’s Ben Barres Spotlight Awards acknowledged a record 14 pioneering researchers from groups that are underrepresented in biology and medicine or countries with limited research funding. Our new Global South Committee marked a pivotal step in enabling us to continue to work towards more equitable collaboration and inclusion of traditionally underrepresented and minoritised communities. Our Sparks of Change collection featured neurodivergent scientists, offering them a platform to discuss their experiences while keeping control over their own narrative . Additionally, staff events continued to bolster our collective awareness on equity, diversity, inclusion and research culture issues.

The planned launch of the next cohort of the eLife Community Ambassadors has been rescheduled for late 2024. However, as before the aim will be to assemble a diverse community of researchers passionate about research culture change.

Next steps: By October 2024, we will take the following steps:

  • Elect new members of our Early-Career Advisory Group with a focus on maintaining geographic and gender diversity
  • Run the sixth Ben Barres Spotlight Awards to support researchers from underrepresented backgrounds and countries with limited research funding
  • Establish a subcommittee with members from our Board of Directors, Early-Career Advisory Group and executive staff to review eLife’s governance structure

Enhancing the representativeness of our editorial board remains integral to our commitment to deliver equitable decisions for authors. Progress continues with the recruitment of new editors in Africa through our second open call, helping to address our historical underrepresentation of researchers from the Global South. Our most comprehensive report of the diversity of eLife’s editorial board underscores this and other achievements as well as pinpointing areas for improvement. Finally, our analysis of the first year under our new model reveals no significant differences in the self-reported demographic characteristics among corresponding authors and peer reviewers compared to submissions through our legacy model.

Next steps: Within the next six months we will:

  • Design subject-specific efforts to increase the gender balance of our editorial board
  • Plan a mentoring scheme to support new editors, especially those from traditionally underrepresented backgrounds
  • Diversify and increase engagement with our early-career reviewers pool

eLife aims to support researchers in ensuring accurate and respectful communication. We will soon integrate the Guidelines on Inclusive Language and Images in Scholarly Communication into our journal policies to aid our authors, editors and reviewers in adopting more inclusive and culturally sensitive practices when publishing.

Next step: In time for our next report, we will:

  • Host a community event on ​​inclusive and equitable research in the life sciences

Underpinning action with accessible and equitable infrastructure

Products, services and systems used and developed by eLife are all tools that can support our efforts to embody our values and drive research culture change. Since our last report, feedback from a diverse group of user testers has directly shaped the upcoming changes to our Reviewed Preprint designs.

Next steps: To further our ambition in this area, we will:

  • Assemble a diverse group of new user testers for upcoming technology projects, ensuring perspectives from across our researcher communities
  • Publish a blog on how Sciety embeds equity, diversity and inclusion in its activities




Bias in research

By writing scientific articles we communicate science among colleagues and peers. In doing so, it is our responsibility to adhere to some basic principles such as transparency and accuracy. Authors, journal editors and reviewers need to be concerned about the quality of the work submitted for publication and to ensure that only studies which have been designed, conducted and reported transparently, honestly and without any deviation from the truth are published. Any such trend or deviation from the truth in data collection, analysis, interpretation and publication is called bias. Bias in research can occur either intentionally or unintentionally. Bias causes false conclusions and is potentially misleading. It is therefore immoral and unethical to conduct biased research. Every scientist should thus be aware of all potential sources of bias and undertake all possible actions to reduce or minimize the deviation from the truth. This article describes some basic issues related to bias in research.

Introduction

Scientific papers are tools for communicating science between colleagues and peers. Every study needs to be designed, conducted and reported transparently, honestly and without any deviation from the truth. Research which does not comply with these basic principles is misleading. Such studies create distorted impressions and false conclusions, and can thus lead to wrong medical decisions, harm to patients, and substantial financial losses. This article provides insight into how to recognize sources of bias and avoid bias in research.

Definition of bias

Bias is any trend or deviation from the truth in data collection, data analysis, interpretation and publication which can cause false conclusions. Bias can occur either intentionally or unintentionally ( 1 ). Intentionally introducing bias into research is immoral. Nevertheless, considering the possible consequences of biased research, it is almost equally irresponsible to conduct and publish biased research unintentionally.

It is worth pointing out that every study has its confounding variables and limitations. Confounding effects cannot be completely avoided. Every scientist should therefore be aware of all potential sources of bias and undertake all possible actions to reduce and minimize the deviation from the truth. If deviation is still present, authors should acknowledge it in their articles by declaring the known limitations of their work.

It is also the responsibility of editors and reviewers to detect any potential bias. If such bias exists, it is up to the editor to decide whether it has an important effect on the study conclusions. If that is the case, the article needs to be rejected for publication, because its conclusions are not valid.

Bias in data collection

A population consists of all individuals with a characteristic of interest. Since studying an entire population is often impossible due to limited time and money, we usually study a phenomenon of interest in a representative sample. By doing this, we hope that what we have learned from the sample can be generalized to the entire population ( 2 ). To be able to do so, the sample needs to be representative of the population. If this is not the case, the conclusions will not be generalizable, i.e. the study will not have external validity.

Sampling is therefore a crucial step in every study. While collecting data for research, there are numerous ways in which researchers can introduce bias into the study. If, for example, during patient recruitment some patients are more or less likely to enter the study than others, the sample will not be representative of the population in which the research is done. In that case, the subjects who are less likely to enter the study will be under-represented, and those who are more likely to enter will be over-represented, relative to the general population to which the conclusions of the study are to be applied. This is what we call selection bias. To ensure that a sample is representative of a population, sampling should be random, i.e. every subject needs to have an equal probability of being included in the study. It should be noted that sampling bias can also occur if the sample is too small to represent the target population ( 3 ).

For example, if the aim of a study is to assess the average hsCRP (high-sensitivity C-reactive protein) concentration in the healthy population in Croatia, the way to go would be to recruit healthy individuals from the general population during their regular annual health check-up. A biased study, by contrast, would be one which recruits only volunteer blood donors, because blood donors are usually individuals who feel healthy and who are not suffering from any condition or illness that might change their hsCRP concentration. By recruiting only healthy blood donors we might conclude that hsCRP is much lower than it really is. This is a kind of sampling bias, which we call volunteer bias; the simulation below sketches its effect.
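The following simulation (all numbers are invented for illustration) shows how preferentially sampling people who feel healthy can bias the estimated mean downwards.

```python
# An illustrative simulation of volunteer bias distorting an estimate
# of the mean hsCRP concentration. All parameters are invented.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "true" population of hsCRP values (mg/L), right-skewed.
population = rng.lognormal(mean=0.0, sigma=0.8, size=100_000)

# Random sample: every subject is equally likely to be included.
random_sample = rng.choice(population, size=500, replace=False)

# "Volunteer" sample: people with lower hsCRP (who feel healthy) are
# more likely to come forward, so low values are over-represented.
weights = 1.0 / (1.0 + population)
weights /= weights.sum()
volunteer_sample = rng.choice(population, size=500, replace=False, p=weights)

print("population mean:      ", population.mean())
print("random-sample mean:   ", random_sample.mean())
print("volunteer-sample mean:", volunteer_sample.mean())  # biased downwards
```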

Another example of volunteer bias occurs when colleagues from a laboratory or clinical department are invited to participate in a study of some new marker for anemia. Such a study would very likely preferentially include those participants who suspect they might be anemic and are curious to find out from this new test. In this way, anemic individuals might be over-represented, the research would be biased, and its conclusions could not be generalized to the rest of the population.

Generally speaking, whenever cross-sectional or case-control studies are done exclusively in hospital settings, there is a good chance that the study will be biased. This is called admission bias. The bias exists because the population studied does not reflect the general population.

Another example of sampling bias is the so-called survivor bias, which usually occurs in cross-sectional studies. If a study aims to assess the association of altered KLK6 (human kallikrein-6) expression with the 10-year incidence of Alzheimer’s disease, subjects who died before the study end point might be missing from the study.

Misclassification bias is a kind of sampling bias which occurs when a disease of interest is poorly defined, when there is no gold standard for diagnosing the disease, or when the disease is not easily detectable. In this way some subjects are falsely classified as cases or controls when they should have been in the other group. Say a researcher wants to study the accuracy of a new test for early detection of prostate cancer in asymptomatic men. In the absence of a reliable test for early prostate cancer detection, there is a chance that some early prostate cancer cases will be misclassified as disease-free, causing under- or over-estimation of the accuracy of the new marker. The simulation below illustrates how such misclassification can distort accuracy estimates.
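A minimal sketch of this effect, with invented prevalence and accuracy figures: cases missed by an imperfect reference standard are counted as disease-free, which here makes a genuinely good test look less specific than it is.

```python
# An illustrative simulation of misclassification bias: an imperfect
# reference standard mislabels some true cases as disease-free.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_disease = rng.random(n) < 0.10            # 10% true prevalence

# New test: 90% sensitivity and 95% specificity against the truth.
test_pos = np.where(true_disease,
                    rng.random(n) < 0.90,
                    rng.random(n) < 0.05)

# Imperfect reference standard: misses 30% of true cases at random.
reference_pos = true_disease & (rng.random(n) < 0.70)

print("sensitivity vs truth:    ", test_pos[true_disease].mean())   # ~0.90
print("sensitivity vs reference:", test_pos[reference_pos].mean())  # ~0.90 (misses are random)

# Missed cases are counted as "disease-free", so the new test appears
# to raise false alarms, biasing the apparent specificity downwards.
print("specificity vs truth:    ", (~test_pos[~true_disease]).mean())   # ~0.95
print("specificity vs reference:", (~test_pos[~reference_pos]).mean())  # < 0.95
```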

As a general rule, the research question needs to be considered with great attention, and all efforts should be made to ensure that the sample matches the population as closely as possible.

Bias in data analysis

A researcher can introduce bias into data analysis by analyzing data in a way which favors conclusions that support the research hypothesis. There are various ways in which bias can be introduced during data analysis, such as by fabricating, abusing or manipulating the data. Some examples are:

  • reporting non-existent data from experiments which were never done (data fabrication);
  • eliminating data which do not support the hypothesis (outliers, or even whole subgroups);
  • using inappropriate statistical tests to analyze the data;
  • performing multiple testing (“fishing for P”) by pair-wise comparisons ( 4 ), testing multiple endpoints, and performing secondary or subgroup analyses which were not part of the original plan, in order to “find” a statistically significant difference regardless of the hypothesis (a simulation sketch follows this list).
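
The sketch below shows why fishing for P works even when nothing is there: both variables are pure noise with no true association in any subgroup, yet testing enough arbitrary subgroups usually turns up a “significant” comparison. The |z| > 1.96 cutoff is a large-sample stand-in for p < 0.05, and the subgroup labels are invented.

```python
# "Fishing for P" on pure noise: no real effect exists in any subgroup.
import random
import statistics

random.seed(3)

def looks_significant(a, b):
    """Crude two-sample test: |z| > 1.96 as a large-sample proxy for p < 0.05."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96

N_SUBGROUPS = 40
hits = 0
for _ in range(N_SUBGROUPS):
    group_a = [random.gauss(0, 1) for _ in range(100)]   # e.g. "women < 37 y"
    group_b = [random.gauss(0, 1) for _ in range(100)]   # everyone else
    hits += looks_significant(group_a, group_b)

print(f"'Significant' findings out of {N_SUBGROUPS} null comparisons: {hits}")
# Roughly 0.05 * 40 = 2 spurious hits are expected despite zero true effects.
```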

For example, if the aim of a study is to show that one biomarker is associated with another in a group of patients, and this association does not prove significant in the total cohort, researchers may start “torturing the data” by dividing the data into various subgroups until the association becomes statistically significant. If this sub-classification of the study population was not part of the original research hypothesis, such behavior is considered data manipulation and is neither acceptable nor ethical. Such studies quite often provide meaningless conclusions, such as:

  • CRP was statistically significant in a subgroup of women under 37 years with cholesterol concentration > 6.2 mmol/L;
  • lactate concentration was negatively associated with albumin concentration in a subgroup of male patients with a body mass index in the lowest quartile and total leukocyte count below 4.00 × 10^9/L.

Besides being biased, invalid and illogical, those conclusions are also useless, since they cannot be generalized to the entire population.

There is an often-quoted saying (attributed to Ronald Coase, but unpublished to the best of my knowledge): “If you torture the data long enough, it will confess to anything”. In practice this means that there is a good chance statistical significance will be reached simply by increasing the number of hypotheses tested in the work. The question is then: is this significant difference real, or did it occur by pure chance?

Indeed, if 20 independent tests are performed on the same data set at the α = 0.05 level, one false-positive (Type 1) result is expected on average, and the probability of at least one Type 1 error is about 64% (1 − 0.95^20 ≈ 0.64). Therefore, the number of hypotheses to be tested in a study needs to be determined in advance. If multiple hypotheses are tested, a correction for multiple testing should be applied, or the study should be declared exploratory.
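
The short calculation below makes the arithmetic concrete, using the Bonferroni adjustment as one common (and deliberately simple) choice of correction.

```python
# Family-wise error rate for m independent tests at level alpha, plus the
# Bonferroni-corrected per-test threshold; plain arithmetic, no data needed.
alpha, m = 0.05, 20

fwer = 1 - (1 - alpha) ** m
print(f"P(at least one Type 1 error in {m} tests): {fwer:.2f}")      # ~0.64
print(f"Expected number of false positives:        {alpha * m:.1f}") # 1.0
print(f"Bonferroni-corrected per-test threshold:   {alpha / m:.4f}") # 0.0025
```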

Bias in data interpretation

When interpreting the results, one needs to make sure that proper statistical tests were used, that the results are presented correctly, and that associations are interpreted as real only if the observed relationship is statistically significant ( 5 ). Otherwise, bias may be introduced into the research.

However, wishful thinking is not rare in scientific research. Some researchers believe so strongly in their original hypotheses that they tend to neglect the actual findings and interpret them in favor of their beliefs. Examples are:

  • discussing observed differences and associations even when they are not statistically significant (the often-used expression is “borderline significance”);
  • discussing differences which are statistically significant but not clinically meaningful;
  • drawing conclusions about causality, even when the study was not designed as an experiment;
  • drawing conclusions about values outside the range of the observed data (extrapolation);
  • overgeneralizing the study conclusions to the entire population, even though the study was confined to a population subset;
  • committing Type I errors (declaring the expected effect significant when in fact there is none) or Type II errors (failing to detect the expected effect when it is actually present) ( 6 ) (see the simulation sketch after this list).
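
Both error rates can be estimated by simulation. The sketch below uses an invented effect size (0.5 SD) and sample size (30 per group), and the same large-sample |z| > 1.96 criterion as above; the point is only to show that a Type I error rate near α coexists with a substantial Type II error rate when studies are small.

```python
# Estimating Type I and Type II error rates by simulation; effect size,
# sample size and number of runs are invented illustration values.
import random
import statistics

random.seed(11)

def looks_significant(a, b):
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) / se > 1.96

def error_rate(true_diff, n=30, runs=2000):
    errors = 0
    for _ in range(runs):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(true_diff, 1) for _ in range(n)]
        sig = looks_significant(a, b)
        errors += sig if true_diff == 0 else not sig
    return errors / runs

print(f"Type I error rate (no true effect):        {error_rate(0.0):.3f}")  # ~0.05
print(f"Type II error rate (true effect = 0.5 SD): {error_rate(0.5):.3f}")
```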

Even when such misinterpretation happens as an honest error or through negligence, it is still considered serious misconduct.

Publication bias

Unfortunately, scientific journals are much more likely to accept for publication a study reporting positive findings than a study with negative findings. Such behavior creates a false impression in the literature and may have long-term consequences for the entire scientific community. Moreover, if negative results did not face so many obstacles to publication, other scientists would not unnecessarily waste their time and financial resources re-running the same experiments.

Journal editors bear the greatest responsibility for this phenomenon. Ideally, a study should have an equal opportunity to be published regardless of the nature of its findings, provided it is properly designed, based on valid scientific assumptions, and has well-conducted experiments and adequate data analysis, presentation and conclusions. In reality, however, this is not the case. To enable the publication of studies reporting negative findings, several journals have been launched, such as the Journal of Pharmaceutical Negative Results, the Journal of Negative Results in Biomedicine, and the Journal of Interesting Negative Results. The aim of such journals is to counterbalance the ever-increasing pressure in the scientific literature to publish only positive results.

It is our policy at Biochemia Medica to give equal consideration to submitted articles, regardless of the nature of their findings.

One sort of publication bias is the so-called funding bias, which occurs when a prevailing number of studies on the same scientific question are funded by the same company and support the interests of that sponsoring company. It is absolutely acceptable to receive funding from a company to perform research, as long as the study is run independently and is not influenced in any way by the sponsoring company, and as long as the funding source is declared as a potential conflict of interest to the journal editors, reviewers and readers.

It is the policy of our Journal to require such a declaration from the authors during submission and to publish it in the final article ( 7 ). In this way, we believe, the scientific community is given the opportunity to judge whether any potential bias is present in the published work.

There are many potential sources of bias in research. Bias in research can cause distorted results and wrong conclusions. Such studies can lead to unnecessary costs and wrong clinical practice, and they can ultimately cause some kind of harm to the patient. It is therefore the responsibility of all stakeholders involved in scientific publishing to ensure that only valid and unbiased research, conducted in a highly professional and competent manner, is published ( 8 ).

Potential conflict of interest

None declared.
