
9.4: Research Ethics in Quantitative Research


  • Josue Franco
  • Cuyamaca College

Learning Objectives

By the end of this section, you will be able to:

  • Explain the rationale behind the principle of data access and research transparency
  • Understand the benefit of increased openness in quantitative research

While quantitative/statistical analysis, when used properly, can yield powerful evidence in support of one's theoretical claims, improper use of such techniques can ultimately undermine the integrity of the quantitative method as well as of the research being conducted. Without proper precautions, statistics can lead to misunderstanding as well as to the intentional misrepresentation and manipulation of findings.

One of the most important considerations when applying the quantitative method to one's research is to make sure that the principle of objectivity, which is at the heart of the scientific method, is reflected in practice (Johnson, Reynolds, and Mycoff 2015). In other words, in addition to presenting the information in as objective a manner as possible, one must ensure that all information relevant to interpreting the results is also accessible to readers. In practice, this principle implies that a researcher should not only provide access to the data used in a research project but also explain the process by which the conclusions presented in the research were reached. This resonates with the current discourse on data access and research transparency in the political science discipline.

The most recent work on data access and research transparency in the political science discipline was born out of concerns among practitioners that scholars were unable to replicate a significant proportion of the research published in top journals. In order for the discipline to advance knowledge across its subfields and methodological approaches, the principles of data sharing and research transparency became ever more relevant to its discourse. The idea is that evidence-informed knowledge needs to be accessible to members of other research communities whose work may rely on different methodological approaches. In response to growing concerns about the lack of a data-sharing and research-transparency culture among practitioners across methodological communities and substantive subfields, the American Political Science Association (APSA), the national professional organization for political scientists, produced ethics guidelines to ensure that the discipline as a whole can advance the culture and practice of data sharing and research transparency.

The recently updated ethics guidelines published by APSA, discussed in Lupia and Elman (2014), state that "researchers have an ethical obligation to facilitate the evaluation of their evidence-based knowledge claims through data access, production transparency, and analytic transparency so that their work can be tested and replicated." According to this document, quantitatively oriented research must meet three prongs of research ethics: data access, production transparency, and analytic transparency. When conducting quantitative political research, all three need to be incorporated for the work to meet the ethical standard.

First, researchers must ensure data accessibility. Researchers should clearly reference the data used in their work, and if the data were originally generated, collected, and/or compiled by the researcher, she should provide access to them. This practice has already been adopted by many journals, where providing access to the data used in the manuscript is a condition of publication. Some researchers also include the code and commands used in statistical software, such as Stata, SAS, and R, so that others can replicate the published work.
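For instance, a shared replication file can be as simple as a script that loads the deposited dataset and re-estimates the published model. The sketch below is a hypothetical illustration in Python (the same idea applies to a Stata do-file or an R script); the file name, variables, and model are invented for the example and are not drawn from any particular study.

```python
# Hypothetical replication script deposited alongside a published article.
# The data file and variable names (turnout, contact, age) are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

# Load the exact dataset archived with the journal.
df = pd.read_csv("replication_data.csv")

# Re-estimate the logit model reported in the article's main table:
# does campaign contact predict turnout, controlling for age?
model = smf.logit("turnout ~ contact + age", data=df).fit()

# Print the coefficients so readers can compare them with the published results.
print(model.summary())
```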

Second, researchers need to practice production transparency. Not only should the researcher share the data themselves, but she also needs to provide a full account of the procedures used to generate and collect the data. First and foremost, this principle provides a safeguard against the unethical practice of misrepresenting or inventing data. Perhaps the most famous recent case of data fraud in political science research is the one involving Michael LaCour (Konnikova 2015). He fabricated the data that he and his co-author Donald Green used in a study whose findings many political scientists regarded as miraculous. Only when two UC Berkeley graduate students, David Broockman and Josh Kalla, tried to replicate the study and contacted the firm that LaCour had supposedly used to collect the survey data was it revealed that LaCour had made up the "survey data" entirely.

Finally, researchers need to ensure analytical transparency, in which the link between the data and the conclusions of the research is clearly delineated. In other words, a researcher must explicitly explain the process that led from the data used in a study to its conclusions. The empirical evidence must be clearly mapped onto the theoretical framework of a given research project. Some scholars are concerned about the implications of such radical honesty in political science research, noting that the probability of successful journal publication may diminish as the level of transparency and radical honesty increases (Yom 2018). As a result, radical honesty in political science research requires institutional buy-in beyond ethical practice at the individual level. Unless such a practice benefits a scholar, rather than posing a challenge, the culture of analytical transparency may not cascade to the greater political science community beyond the pockets of ethical practitioners that currently exist.

It is important to note that increased openness in quantitative research provides political scientists with a number of benefits beyond what it promises on the ethical front (Lupia and Elman 2014). First, transparency and increased data access allow members of a particular research community to examine the current state of their own scholarship. Through such "internal" self-assessment within a particular subfield of political science, scholars are able to cultivate "an evidentiary and logical basis of treating claims as valid" (Lupia and Elman 2014). In many subfields, validating knowledge requires replicating existing work. When access to quality data is limited, it becomes challenging to determine whether we should have confidence in the research findings presented. Without a culture and practice of data access and research transparency, confidence in a particular subfield suffers as well.

In the civil war onset literature, for example, Hegre and Sambanis conducted a sensitivity study of the findings of various published works (Hegre and Sambanis 2006). Essentially, a sensitivity study examines a numerical measurement (e.g., whether a civil war started or not) under conditions different from the original setting. In this particular case, scholars in the civil war literature use different definitions of when a violent conflict constitutes a civil war. The implication is that some scholars may have included or excluded certain cases from their datasets, which in turn influences the results of their studies. So one way to conduct a sensitivity study is to apply the same definition of, say, the outcome variable across studies and replicate them to examine the effect of such a change (see the sketch below).
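As a hedged sketch of that logic, the Python snippet below recodes a hypothetical country-year dataset under two different battle-death thresholds for what counts as a civil war and re-estimates the same model under each coding; the file, thresholds, and covariates are invented for illustration rather than taken from Hegre and Sambanis.

```python
# Illustrative sensitivity check: how does the definition of "civil war"
# (here, a battle-death threshold) change the estimated coefficients?
# The dataset and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("conflicts.csv")

for threshold in (25, 1000):  # two alternative yearly battle-death cutoffs
    df["civil_war"] = (df["battle_deaths"] >= threshold).astype(int)
    model = smf.logit(
        "civil_war ~ gdp_per_capita + ethnic_fractionalization", data=df
    ).fit(disp=False)
    print(f"Battle-death threshold {threshold}:")
    print(model.params)  # compare estimates across the two codings
```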

This project grew out of the observation that several empirical results were not robust or replicable across studies. Because the authors of the articles included in the sensitivity analysis practiced the ethical culture of data sharing and research transparency, scholars of the civil war literature were able to reflect on the state of their research community. For members of other research communities, the culture and practice of openness can also contribute to the persuasiveness of the findings: the more a reader is empowered to understand the process through which researchers reached a particular conclusion, the more likely the reader is to believe and value that knowledge.

Next, the culture and practice of openness help political scientists communicate more effectively with members of other communities, including non-political scientists. This is very important, for our research findings often carry real political and social implications. Generally speaking, good political research must contribute to the field of political science as well as to the real world (King, Keohane, and Verba 1994). Our findings are often used by political actors, policy advocates, and various non-profit organizations whose work affects the lives of many members of the general public. For example, Dr. Tom Wong, an expert on immigration policy, served as an expert advisor in the Obama administration and has testified in various federal court cases advocating for the rights of undocumented immigrants. He supported his position with his research on the impact of undocumented immigrants, which was written primarily for academics. He was nevertheless able to communicate with non-political scientists in part because his research reflected the values of data access and research transparency (Wong 2015, 2017).

Although political scientists should adopt ethical research practices for their own sake, it is also quite effective to identify the potential benefits of such practices to their research communities, so that practitioners have an incentive to adopt the culture of data sharing and research transparency until it becomes second nature.


The Oxford Handbook of Quantitative Methods in Psychology, Vol. 1


3 Quantitative Methods and Ethics

Ralph L. Rosnow, Department of Psychology, Temple University, Philadelphia, PA

Robert Rosenthal, Department of Psychology, University of California, Riverside

  • Published: 01 October 2013

The purpose of this chapter is to provide a context for thinking about the role of ethics in quantitative methodology. We begin by reviewing the sweep of events that led to the creation and expansion of legal and professional rules for the protection of research subjects and society against unethical research. The risk-benefit approach has served as an instrument of prior control by institutional review boards. After discussing the nature of that approach, we sketch a model of the costs and utilities of the “doing” and “not doing” of research. We illustrate some implications of the expanded model for particular data analytic and reporting practices. We then outline a 5 × 5 matrix of general ethical standards crossed with general data analytic and reporting standards to encourage thinking about opportunities to address quantitative methodological problems in ways that may have mutual ethical and substantive rewards. Finally, we discuss such an opportunity in the context of problems associated with risk statistics that tend to exaggerate the absolute effects of therapeutic interventions in randomized trials.

Introduction

In this chapter we sketch an historic and heuristic framework for assessing certain ethical implications of the term quantitative methods . We use this term in the broadest sense to include not only statistical procedures but also what is frequently described as quantitative research (in contrast to qualitative research) in psychology and some other disciplines. As defined in the APA Dictionary of Psychology , the traditional distinction between these two general types of research rests on whether “the approach to science” does ( quantitative research ) or does not ( qualitative research ) “employ the quantification (expression in numerical form) of the observations made” ( VandenBos, 2007 , pp. 762–763). Of course, quantitative and qualitative methods should not be seen as mutually exclusive, as it can often be illuminating to use both types in the same research. For example, in the typical psychological experiment in which the observations take a numerical form, it may be edifying to ask some of the participants in postexperimental interviews to reflect on the context in which the experiment was conducted and to speculate on the ways in which it may have influenced their own and other participants’ behaviors ( Orne, 1962 , 1969 ). By the same token, it is usually possible to quantify nonquantitative observations by, for example, decomposing the qualitative subject matter element by element and then numerically and visually analyzing and summarizing the results. Blogs and online discussion groups are currently a popular source of qualitative subject matter, which researchers have trolled for patterns or relationships that can be quantified by the use of simple summary statistics (e.g., Bordia & Rosnow, 1995 ) or coded and visually mapped out using social network analysis to highlight links and nodes in the observed relationships (e.g., Kossinets & Watts, 2006 ; see also   Wasserman & Faust, 1994 ). Whether blogs and online discussion groups’ data are treated quantitatively or qualitatively, their use may raise ethical questions regarding the invasion of privacy. The fact that bloggers and participants in online discussion groups are typically fully aware that their communications are quite public minimizes the risk of invasion of privacy.
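As a small, hypothetical illustration of quantifying qualitative material in the way just described (not an example from the chapter), one might count how often pre-defined themes appear across a set of blog posts:

```python
# Toy example: turn qualitative text into simple summary statistics by
# counting keyword stems from an invented coding scheme. Posts and the
# codebook are made up for illustration.
from collections import Counter

posts = [
    "The process was not transparent and felt unfair.",
    "Great transparency from the committee this time.",
    "Unfair treatment again; nobody explains the process.",
]
codebook = {"fairness": "fair", "transparency": "transparen"}

counts = Counter()
for post in posts:
    text = post.lower()
    for theme, stem in codebook.items():
        counts[theme] += text.count(stem)

print(counts)  # a crude frequency summary of each theme across the posts
```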

The term ethics was derived from the Greek ethos , meaning “character” or “disposition.” We use the term here to refer to the dos and don’ts of codified and/or culturally ingrained rules by which morally “right” and “wrong” conduct can be differentiated. Conformity to such rules is usually taken to mean morality , and our human ability to make ethical judgments is sometimes described as a moral sense (a tradition that apparently goes back to David Hume’s A Treatise of Human Nature in the eighteenth century). Philosophers and theologians have frequently disagreed over the origin of the moral sense, but on intuitive grounds it would seem that morality is subject to societal sensitivities, group values, and social pressures. It is not surprising that researchers have documented systematic biases in ethical judgments. For example, in a study by Kimmel (1991) , psychologists were asked to make ethical judgments about hypothetical research cases. Kimmel reported that those psychologists who were more (as compared to less) approving in their ethical judgments were more often men; had held an advanced degree for a longer period of time; had received the advanced degree in an area such as experimental, developmental, or social psychology rather than counseling, school, or community psychology; and were employed in a research-oriented context as opposed to a service-oriented context. Citing this work of Kimmel’s (1991) , an American Psychological Association (APA) committee raised the possibility that inconsistent implementation of ethical standards by review boards might result not only from the expanded role of review boards but also from the composition of particular boards ( Rosnow, Rotheram-Borus, Ceci, Blanck, & Koocher, 1993 ). Assuming that morality is also predicated on people’s abilities to figure out the meaning of other people’s actions and underlying intentions, it might be noted that there is also empirical evidence of (1) individual differences in this ability (described as interpersonal acumen ) and (2) a hierarchy of intention–action combinations ranging from the least to most cognitively taxing ( Rosnow, Skleder, Jaeger, & Rind, 1994 ).

Societal sensitivities, group values, and situational pressures are subject to change in the face of significant events. On the other hand, some moral values seem to be relatively enduring and universal, such as the golden rule, which is frequently expressed as “Do unto others as you would have them do unto you.” In the framework of quantitative methods and ethics, a categorical imperative might be phrased as “Thou shalt not lie with statistics.” Still, Huff, in his book, How to Lie with Statistics , first cautioned the public in 1954 that the reporting of statistical data was rife with “bungling and chicanery” ( Huff, 1982 , p. 6). The progress of science depends on the good faith that scientists have in the integrity of one another’s work and the unbiased communication of findings and conclusions. Lying with statistics erodes the credibility of the scientific enterprise, and it can also present an imminent danger to the general public. “Lying with statistics” can refer to a number of more specific practices: for example, reporting only the data that agree with the researcher’s bias, omitting any data not supporting the researcher’s bias, and, most serious of all, fabricating the results of the research. For example, there was a case reported in 2009 in which an anesthesiologist fabricated the statistical data that he had published in 21 journal articles purporting to give the results of clinical trials of a pain medicine marketed by the company that funded much of the doctor’s research ( Harris, 2009 ). Another case, around the same time, involved a medical researcher whose accounts of a blood test for diagnosing prostate cancer had generated considerable excitement in the medical community, but who was now being sued for scientific fraud by his industry sponsor ( Kaiser, 2009 ). As the detection of lying with statistics is often difficult in the normal course of events, there have been calls for the public sharing of raw data so that, as one scientist put it, “Anyone with the skills can conduct their own analyses, draw their own conclusions, and share those conclusions with others” ( Allison, 2009 , p. 522). That would probably help to reduce some of the problems of biased data analysis, but it would not help much if the shared data had been fabricated to begin with.

In the following section, we review the sweep of events that led to the development and growth of restraints for the protection of human subjects and society against unethical research. 1 A thread running throughout the discussion is the progression of the APA’s code of conduct for psychological researchers who work with human subjects. We assume that many readers of this Handbook will have had a primary or consulting background in some area of psychology or a related research area. The development of the APA principles gives us a glimpse of the specific impact of legal regulations and societal sensitivities in an area in which human research has been constantly expanding into new contexts, including “field settings and biomedical contexts where research priorities are being integrated with the priorities and interests of nonresearch institutions, community leaders, and diverse populations” ( Sales & Folkman, 2000 , p. ix). We then depict an idealized risk–benefit approach that review boards have used as an instrument of prior control of research, and we also describe an expanded model focused on the costs and utilities of “doing” and “not doing” research. The model can also be understood in terms of the cost–utility of adopting versus not adopting particular data analytic and reporting practices. We then outline a matrix of general ethical standards crossed with general data analytic and reporting standards as (1) a reminder of the basic distinction between ethical and technical mandates and (2) a framework for thinking about promising opportunities for ethical and substantive rewards in quantitative methodology (cf. Blanck, Bellack, Rosnow, Rotheram-Borus, & Schooler, 1992 ; Rosenthal, 1994 ; Rosnow, 1997 ). We discuss such an opportunity in the context of the way in which a fixation on relative risk (RR) in large sample randomized trials of therapeutic interventions can lead to misconceptions about the practical meaning to patients and health-care providers of the particular intervention tested.
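To make that final point concrete before turning to the history, a small illustrative calculation (the numbers are invented, not taken from the chapter) shows how a dramatic relative risk can coexist with a modest absolute effect:

```python
# Hypothetical two-arm trial: event rates are invented for illustration.
control_rate = 0.02   # 2% of control patients experience the adverse event
treated_rate = 0.01   # 1% of treated patients experience the adverse event

relative_risk = treated_rate / control_rate            # 0.50, i.e. "halves the risk"
absolute_risk_reduction = control_rate - treated_rate  # 0.01, one percentage point
number_needed_to_treat = 1 / absolute_risk_reduction   # ~100 patients per event avoided

print(relative_risk, absolute_risk_reduction, round(number_needed_to_treat))
```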

The Shaping of Principles to Satisfy Ethical and Legal Standards

If it can be said that a single historical event in modern times is perhaps most responsible for initially galvanizing changes in the moral landscape of science, then it would be World War II. On December 9, 1946 (the year after the surrender of Germany on May 8, 1945 and the surrender of Japan on August 14, 1945), criminal proceedings against Nazi physicians and administrators who had participated in war crimes and crimes against humanity were presented before a military tribunal in Nuernberg, Germany. For allied atomic scientists, Hiroshima had been an epiphany that vaporized the old iconic image of a morally neutral science. For researchers who work with human participants, the backdrop to the formation of ethical and legal principles to protect the rights and welfare of all research participants were the shocking revelations of the war crimes documented in meticulous detail at the Nuernberg Military Tribunal. Beginning with the German invasion of Poland at the outbreak of World War II, Jews and other ethnic minority inmates of concentration camps had been subjected to sadistic tortures and other barbarities in “medical experiments” by Nazi physicians in the name of science. As methodically described in the multivolume report of the trials, “in every one of the experiments the subjects experienced extreme pain or torture, and in most of them they suffered permanent injury, mutilation, or death” ( Trials of War Criminals before the Nuernberg Military Tribunals under Control Council Law No. 10 , p. 181). Table 3.1 reprints the principles of the Nuernberg Code, which have resonated to varying degrees in all ensuing codes for biomedical research with human participants as well as having had a generative influence on the development of principles for the conduct of behavioral and social research.

We pick up the story again in the 1960s in the United States, a period punctuated by the shocking assassinations of President John F. Kennedy in 1963 and then of Dr. Martin Luther King, Jr., and Senator Robert F. Kennedy in 1968. The 1960s were also the beginning of the end of what Pattullo (1982) called “the hitherto sacrosanct status” of the human sciences, which moved “into an era of uncommonly active concern for the rights and welfare of segments of the population that had traditionally been neglected or exploited” (p. 375). One highly publicized case in 1963 involved a noted cancer researcher who had injected live cancer cells into elderly, noncancerous patients, “many of whom were not competent to give free, informed consent” (Pattullo, p. 375). In 1966, the U.S. Surgeon General issued a set of regulations governing the use of subjects by researchers whose work was funded by the National Institutes of Health (NIH). Most NIH grants funded biomedical research, but there was also NIH support for research in the behavioral and social sciences. In 1969, following the exposure of further instances in which the welfare of subjects had been ignored or endangered in biomedical research (cf. Beecher, 1966 , 1970 ; Katz, 1972 ), the Surgeon General extended the earlier safeguards to all human research. In a notorious case (not made public until 1972), a study conducted by the U.S. Public Health Service (USPHS) simply followed the course of syphilis in more than 400 low-income African-American men residing in Tuskegee, Alabama, from 1932 to 1972 ( Jones, 1993 ). Recruited from churches and clinics with the promise of free medical examinations and free health care, the men who were subjects in this study were never informed they had syphilis but only told they had “bad blood.” They also were not offered penicillin when it was discovered in 1943 and became widely available in the 1950s, and they were warned not to seek treatment elsewhere or they would be dropped from the study. The investigators went so far as to have local doctors promise not to treat the men in the study with antibiotics ( Stryker, 1997 ). As the disease progressed in its predictable course without any treatment, the men experienced damage to their skeletal, cardiovascular, and central nervous systems and, in some cases, death. In 1972, the appalling details were finally made public by a lawyer who had been an epidemiologist for the USPHS, and the study was halted ( Fairchild & Bayer, 1999 ). The following year, the Senate Health Subcommittee (chaired by Senator Edward Kennedy) aired the issue of scientific misconduct in public hearings.

[Table 3.1 source note: Reprinted from pp. 181–182 in Trials of War Criminals before the Nuernberg Military Tribunals under Control Council Law No. 10, October 1946–April 1949, Vol. II. Washington, DC: U.S. Government Printing Office.]

The early 1960s was also a period when emotions about invasions of privacy were running high in the United States after a rash of reports of domestic wiretapping and other clandestine activities by federal agencies. In the field of psychology, the morality of the use of deception was being debated. As early as the 1950s, there had been concerned statements issued about the use of deception in social psychological experiments ( Vinacke, 1954 ). The spark that lit a fuse in the 1960s in the field of psychology was the publication of Stanley Milgram’s studies on obedience to authority, in which he had used an elaborate deception and found that a majority of ordinary research subjects were willing to administer an allegedly dangerous level of shock to another person when “ordered” to do so by a person in authority, although no shock was actually administered (cf. Blass, 2004 ; Milgram, 1963 , 1975 ). Toward the end of the 1960s, there were impassioned pleas by leading psychologists for the ethical codification of practices commonly used in psychological research ( Kelman, 1968 ; Smith, 1969 ). As there were new methodological considerations and federal regulations since the APA had formulated a professional code of ethics in 1953, a task force was appointed to draft a set of ethical principles for research with human subjects. Table 3.2 shows the final 10 principles adopted by the APA’s Council of Representatives in 1972, which were elucidated in a booklet that was issued the following year, Ethical Principles in the Conduct of Research with Human Participants (APA, 1973). An international survey conducted 1 year later found there were by then two dozen codes of ethics that had been either adopted or were under review by professional organizations of social scientists ( Reynold, 1975 ). Although violations of such professional codes were supported by penalties such as loss of membership in the organization, the problem was that many researchers engaged in productive, rewarding careers did not belong to these professional organizations.

By the end of the 1970s, the pendulum had swung again, as accountability had become the watchword of the decade ( National Commission on Research, 1980 ). In 1974, the guidelines provided by the Department of Health, Education, and Welfare (DHEW) 3 years earlier were codified as government regulations by the National Research Act of July 12, 1974 (Pub. L. 93–348). Among the requirements instituted by the government regulations was that institutions that received federal funding establish an institutional review board (IRB) for the purpose of making prior assessments of the possible risks and benefits of proposed research. 2 This federal act also created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Following hearings that were held over a 3-year period, the document called “The Belmont Report” was issued in April, 1979 (available online and also reprinted in Sales & Folkman, 2000 ). Unlike other reports of the Commission, the Belmont Report did not provide a list of specific recommendations for administrative action by the DHEW, but the Belmont Report recommended that the report be adopted in its entirety as a statement of DHEW policy. In the preamble, the report mentioned the standards set by the Nuernberg (“Nuremberg”) Code as the prototype of many later codes consisting of rules, some general and others specific, to guide researchers and assure that research involving human participants would be carried out in an ethical manner. Noting that the rules were often inadequate to cover complex situations, that they were often difficult to apply or interpret, and that they often came into conflict with one another, the National Commission had decided to issue broad ethical principles to provide a basis on which specific rules could then be formulated, criticized, and interpreted. As we track the development of the APA principles in this discussion, we will see that there has been a similar progression, and later we will emphasize some broad ethical principles when we discuss the interface of ethical and technical standards in quantitative methodology. For now, however, it can be noted that the Belmont Report proposed that (1) respect for persons, (2) beneficence, and (3) justice provide the foundation for research ethics. The report also proposed norms for scientific conduct in six major areas: (1) the use of valid research designs, (2) the competence of researchers, (3) the identification of risk–benefit consequences, (4) the selection of research participants, (5) the importance of obtaining informed voluntary consent, and (6) compensation for injury. 3

In 1982, the earlier APA code was updated, and a new version of Ethical Principles in the Conduct of Research with Human Participants was published by the APA. In the earlier version and in the 1982 version, the principles were based on actual ethical problems that researchers had experienced, and extensive discussion throughout the profession was incorporated in each edition of Ethical Principles . The principles in the 1982 code are reprinted in Table 3.3 . Notice that there were several new terms ( subject at risk and subject at minimal risk ) and also an addendum sentence to informed consent (referring to “research with children or with participants who have impairments that would limit understanding and/or communication”). The concept of minimal risk (which came out of the Belmont Report) means that the likelihood and extent of harm to the participants are presumed to be no greater than what may be typically experienced in everyday life or in routine physical or psychological examinations ( Scott-Jones & Rosnow, 1998 , p. 149). In actuality, the extent of harm may not be completely anticipated, and estimating the likelihood of harm is frequently difficult or impossible. Regarding the expanded statement on deception, the use of deception in research had been frowned upon for some years although there had long been instances in which active and passive deceptions had been used routinely. An example was the withholding of information (passive deception). Randomized clinical trials would be considered of dubious value in medical research had the experimenters and the participants not been deprived of information regarding which condition was assigned to each participant. On the other hand, in some areas of behavioral experimentation, the use of deception has been criticized as having “reached a ‘taken-for-granted’ status” ( Smith, Kimmel, & Klein, 2009 , p. 486). 4

[Table 3.2 source note: Quoted from pp. 1–2 in Ethical Principles in the Conduct of Research with Human Participants. Washington, DC: American Psychological Association. Copyright © 1973 by the American Psychological Association.]

Given the precedence of federal (and state) regulations since the guidelines developed by the DHEW were codified by the National Research Act in 1974 (and revised as of November 6, 1975), researchers were perhaps likely to take their ethical cues from the legislated morality and its oversight by IRBs as opposed to the aspirational principles embodied in professional codes, such as the APA code. Another complication in this case is that there was a fractious splintering of the APA in the late-1980s, which resulted in many members resigning from the APA and the creation of the rival American Psychological Society, subsequently renamed the Association for Psychological Science (APS). For a time in the 1990s, a joint task force of the APA and the APS attempted to draft a revised ethics code, but the APS then withdrew its participation following an apparently irresolvable disagreement. In 2002, after a 5-year revision process, APA adopted a reworked ethics code that emphasized the five general principles defined (by APA) in Table 3.4 and also “specific standards” that fleshed out these principles. 5 The tenor of the final document was apparently intended to reflect the remaining majority constituency of the APA (practitioners) but also the residual constituency of psychological scientists who perform either quantitative or qualitative research in fundamental and applied contexts. Of the specific principles with some relevance to data analysis or quantitative methods, there were broadly stated recommendations such as sharing the research data for verification by others (Section 8.14), not making deceptive or false statements (Section 8.10), using valid and reliable instruments (Section 9.02), drawing on current knowledge for design, standardization, validation, and the reduction or elimination of bias when constructing any psychometric instruments (Section 9.05). We turn next to the risk–benefit process, but we should also note that ethical values with relevance to statistical practices are embodied in the codes developed by statistical organizations (e.g., American Statistical Association, 1999 ; see also   Panter & Sterba, 2011 ).

[Table 3.3 source note: Quoted from pp. 5–7 in Ethical Principles in the Conduct of Research with Human Participants. Washington, DC: American Psychological Association. Copyright © 1982 by the American Psychological Association.]

Expanding the Calculation of Risks and Benefits

After the Belmont Report, it seemed that everything changed permanently for scientists engaged in human subject research, and it made little difference whether they were engaged in biomedical, behavioral, or social research. As the philosopher John E. Atwell (1981) put it, the moral dilemma was to defend the justification of using human subjects as the means to an end that was beneficial in some profoundly significant way (e.g., the progression of science, public health, or public policy) while protecting the moral “ideals of human dignity, respect for persons, freedom and self-determination, and a sense of personal worth” (p. 89). Review boards were now delegated the responsibility of making prior assessments of the future consequences of proposed research on the basis of the probability that a certain magnitude of psychological, physical, legal, social, or economic harm might result, weighed against the likelihood that “something of positive value to health or welfare” might result. Quoting the Belmont Report, “risk is properly contrasted to probability of benefits, and benefits are properly contrasted with harms rather than risks of harms,” where the “risks and benefits of research may affect the individual subjects, the families of the individual subjects, and society at large (or special groups of subjects in society).” The moral calculus of benefits to risks was said to be “in a favorable ratio” when the anticipated risks were outweighed by the anticipated benefits to the subjects (assuming this was applicable) and the anticipated benefit to society in the form of the advancement of knowledge. Put into practice, however, researchers and members of review boards found it difficult to “exorcize the devil from the details” when challenged by ethical guidelines that frequently conflicted with traditional technical criteria ( Mark, Eyssell, & Campbell, 1999 , p. 48). As human beings are not omniscient, there was also the problem that “neither the risks nor the benefits … can be perfectly known in advance” ( Mark et al., 1999 , p. 49).

These complications notwithstanding, another catch-22 of the risk–benefit assessment is that it focuses only on the doing of research . Some years ago, we proposed a way of visualizing this predicament—first, in terms of an idealized representation of the risk–benefit assessment and, second, in terms of an alternative model focused on the costs and benefits of both the doing and not doing of research ( Rosenthal & Rosnow, 1984 ). The latter model also has implications for the risk–benefit (we prefer the term cost–utility ) of using or not using particular quantitative methods (we return to this idea in a moment). First, however, Figure 3.1 shows an idealized representation of the traditional risk–benefit assessment. Risk (importance or probability of harm) is plotted from low (C) to high (A) on the vertical axis, and the benefit is plotted from low (C) to high (D) on the horizontal axis. In other words, studies in which the risk–benefit assessment is close to A would presumably be less likely to be approved; studies close to D would be more likely to be approved; and studies falling along the B–C “diagonal of indecision” exist in a limbo of uncertainty until relevant information nudges the assessment to either side of the diagonal. The idea of “zero risk” is a methodological conceit, however, because all human subject research can be understood as carrying some degree of risk. The potential risk in the most benign behavioral and social research, for example, is the “danger of violating someone’s basic rights, if only the right of privacy” ( Atwell, 1981 , p. 89). However, the fundamental problem of the traditional model represented in Figure 3.1 is that it runs the risk of ignoring the “not doing of research.” Put another way, there are also moral costs when potentially useful research is forestalled, or if the design or implementation is compromised in a way that jeopardizes the integrity of the research (cf. Haywood, 1976 ).

Quoted from the American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct ( http://www.apa.org/ethics/code2002.html ). Effective date June 1, 2003, copyrighted in 2002 by the American Psychological Association.

Figure 3.2 shows an alternative model representing a cost–utility assessment of both the doing and not doing of research. In Part A, the decision plane model on the left corresponds to a cost–utility appraisal of the “doing of research,” and the model on the right corresponds to an appraisal of the “not doing of research.” We use the terms cost and utility each in a collective sense. That is, the cost of doing and the cost of not doing a particular research study include more than only the risk of psychological or physical harm; they also include the cost to society, funding agencies, and to scientific knowledge when imagination and new scientifically based solutions are stifled. As one scientist observed, “Scientists know that questions are not settled; rather, they are given provisional answers for which it is contingent upon the imagination of followers to find more illuminating solutions” ( Baltimore, 1997 , p. 8). We also use utility in a collective sense, not just in the way that a “tool” can immediately be instrumentally useful, but in a way that may have no immediate application and instead “speaks to our sense of wonder and paves the way for future advances” ( Committee on Science, Engineering, and Public Policy, 2009 , p. 3). These figurative definitions of cost and utility aside, Part B of Figure 3.2 suggests a way of transforming the three dimensions of Part A to a two-dimensional model. Suppose we draw an A–D “decision diagonal” for each of the decision planes in Part A (in contrast to B–C and B’–C’, the diagonals of indecision). For any point in the plane of doing, with its location on the cost axis and on the utility axis, there is an equivalent position on the decision diagonal. Thus, if a point were twice as far from A as from D, the transformed point would then be located two-thirds of the way on the decision diagonal A–D (closer to D than to A). Similar reasoning is applicable to not doing, with the exception that closeness to A’ would mean “do” rather than “not do.” Points near D tell us the research should be done, and points near D’ tell us the research should not be done. 6
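
As a rough numerical illustration of this transformation (our sketch, not part of the original figures), suppose the axes of the doing plane are each scaled from 0 (low) to 1 (high), with corner A at high cost and low utility and corner D at low cost and high utility. One simple way to place any point on the A–D decision diagonal is by its relative distances from the two corners:

```python
import math

def diagonal_position(cost, utility):
    """Place a (cost, utility) point on the A-D decision diagonal.

    Assumes both axes run from 0 (low) to 1 (high), with corner A at
    (cost=1, utility=0) and corner D at (cost=0, utility=1).
    Returns 0 at A ("do not do") and 1 at D ("do").
    """
    d_a = math.dist((cost, utility), (1.0, 0.0))  # distance from corner A
    d_d = math.dist((cost, utility), (0.0, 1.0))  # distance from corner D
    return d_a / (d_a + d_d)

# A point twice as far from A as from D lands two-thirds of the way toward D.
print(round(diagonal_position(cost=1/3, utility=2/3), 3))  # 0.667
```

The same calculation applies to the not-doing plane, with closeness to A’ read as “do” and closeness to D’ read as “not do.”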

Idealized decision-plane model representing the relative risks and benefits of research submitted to a review board for prior approval (after Rosenthal & Rosnow, 1984 ; Rosnow & Rosenthal, 1997 ).

Decision-planes representing the ethical assessment of the costs and utilities of doing and not doing research (after Rosenthal & Rosnow, 1984 , 2008 ). (A) Costs and utilities of doing (left plane) and not doing (right plane) research. (B) Composite plane representing both cases in Part A (above).

Figure 3.2 can also be a way of thinking about cost–utility dilemmas regarding quantitative methods and statistical reporting practices. In the 2009 edition of the U.S. National Academy of Sciences (NAS) guide to responsible conduct in scientific research, there are several hypothetical scenarios, including one in which a pair of researchers (a postdoctoral fellow and a graduate student) discuss how they should deal with two anomalous data points in a graph they are preparing to present in a talk ( Committee on Science, Engineering, and Public Policy, 2009 ). They want to put the best face on their research, but they fear that discussing the two outliers will draw people’s attention away from the bulk of the data. One option would be to drop the outliers, but, as one researcher cautions, this could be viewed as “manipulating” the data, which is unethical. The other person comments that if they include the anomalous points, and if a senior person then advises them to include the anomalous data in a paper they are drafting for publication, this could make it harder to have the paper accepted by a top journal. That is, the reported results will not be unequivocal (a potential reason for rejection), and the paper will also then be too wordy (another reason to reject it?). In terms of Figure 3.2 , not including the two anomalous data points is analogous to the “not doing of research.” There are, of course, additional statistical options, which can also be framed in cost–utility terms, such as using a suitable transformation to pull in the outlying stragglers and make them part of the group (cf. Rosenthal & Rosnow, 2008 , pp. 310–311). On the other hand, outliers that are not merely recording errors or instrument errors can sometimes provide a clue as to a plausible moderator variable. Suppressing this information could potentially impede scientific progress (cf. Committee on Science, Engineering, and Public Policy, 2009 , p. 8).
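
To make the statistical side of this dilemma concrete, here is a minimal sketch with made-up scores (the NAS scenario does not report actual data), showing how dropping two high outliers or transforming the data each change the summary a reader would see:

```python
import math
import statistics

# Hypothetical scores with two high outliers (illustrative only).
scores = [4.1, 4.3, 4.6, 4.8, 5.0, 5.2, 5.4, 12.9, 14.2]

print(round(statistics.mean(scores), 2))       # 6.72: the mean is pulled upward by the outliers
print(round(statistics.mean(scores[:-2]), 2))  # 4.77: silently dropping the outliers tells a different story
print(round(statistics.median(scores), 2))     # 5.0: a resistant summary is barely affected

log_scores = [math.log(x) for x in scores]     # a log transformation pulls the stragglers toward the group
print(round(statistics.mean(log_scores), 2))   # 1.79 (on the log scale)
```

Whichever option is chosen, the cost–utility point is the same: the choice should be disclosed and justified in the report, not used quietly to make the figure look cleaner.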

Unfortunately, there are also cases involving the suppression of data where the cost is not only that it impedes progress in the field, but it also undermines the authority and trustworthiness of scientific research and, in some instances, can cause harm to the broader society, such as when public policy is based on only partial information or when there is selective outcome reporting of the efficacy of clinical interventions in published reports of randomized trials ( Turner, Matthews, Linardatos, Tell, & Rosenthal, 2008 ; Vedula, Bero, Scherer, & Dickersin, 2009 ). In an editorial in Science , Cicerone (2010) , then president of the NAS, stated that his impression—based on information from scattered public opinion polls and various assessments of leaders in science, business, and government—was that “public opinion has moved toward the view that scientists often try to suppress alternative hypotheses and ideas and that scientists will withhold data and try to manipulate some aspects of peer review to prevent dissent” (p. 624). Spielmans and Parry (2010) described a number of instances of “marketing-based medicine” by pharmaceutical firms. Cases included the “cherry-picking” of data for publication, the suppression or understatement of negative results, and the publication (and distribution to doctors) of journal articles that were not written by the academic authors who lent their names, titles, and purported independence to the papers but instead had been written by ghost writers hired by pharmaceutical and medical-device firms to promote company products. Spielmans and Parry displayed a number of screen shots of company e-mails, which we do not usually get to see because they go on behind the curtain. In an editorial in PLoS Medicine (2009) lamenting the problem of ghost writers and morally dubious practices in the medical marketing of pharmaceutics, the editors wrote:

How did we get to the point that falsifying the medical literature is acceptable? How did an industry whose products have contributed to astounding advances in global health over the past several decades come to accept such practices as the norm? Whatever the reasons, as the pipeline for new drugs dries up and companies increasingly scramble for an ever-diminishing proportion of the market in “me-too” drugs, the medical publishing and pharmaceutical industries and the medical community have become locked into a cycle of mutual dependency, in which truth and a lack of bias have come to be seen as optional extras. Medical journal editors need to decide whether they want to roll over and just join the marketing departments of pharmaceutical companies. Authors who put their names to such papers need to consider whether doing so is more important than having a medical literature that can be believed in. Politicians need to consider the harm done by an environment that incites companies into insane races for profit rather than for medical need. And companies need to consider whether the arms race they have started will in the end benefit anyone. After all, even drug company employees get sick; do they trust ghost authors?

Ethical Standards and Quantitative Methodological Standards

We turn now to Table 3.5 , which shows a matrix of general ethical standards crossed with quantitative methodological standards (after Rosnow & Rosenthal, 2011 ). We do not claim that the row and column standards are either exhaustive or mutually exclusive but only that they are broadly representative of (1) aspirational ideals in the society as a whole and (2) methodological, data analytic, and reporting standards in science and technology. The matrix is a convenient way of reminding ourselves of the distinction between (1) and (2), and it is also a way of visualizing a potential clash between (1) and (2) and, frequently, the opportunity to exploit this situation in a way that could have rewarding ethical and scientific implications. Before we turn specifically to the definitions of the row and column headings in Table 3.5 , we will give a quick example of what we mean by “rewarding ethical and scientific implications” in the context of the recruitment of volunteers. For this example, we draw on some of our earlier work on specific threats to validity (collectively described as artifacts ) deriving from the volunteer status of research participants. Among our concerns when we began to study the volunteer subject was that ethical sensitivities seemed to be propelling psychological science into a science of informed volunteers (e.g., Rosenthal & Rosnow, 1969 ; Rosnow & Rosenthal, 1970 ). It was long suspected that people who volunteered for behavioral and social research might not be fully adequate models for the study of behavior in general. To the extent that volunteers differ from nonvolunteers on dimensions of importance, the use of volunteers could have serious effects on such estimated parameters as means, medians, proportions, variances, skewness, and kurtosis. The estimation of parameters such as these is the principal goal in survey research, whereas in experimental research the focus is usually on the magnitude of the difference between the experimental and control group means. Such differences, we and other investigators observed, were sometimes affected by the use of volunteers ( Rosenthal & Rosnow, 1975 , 2009 ).

With problems such as these serving as beginning points for empirical and meta-analytic investigations, we explored the characteristics that differentiated volunteers and nonvolunteers, the situational determinants of volunteering, some possible interactions of volunteer status with particular treatment effects, the implications for predicting the direction and, sometimes, the magnitude of the biasing effects in research situations, and we also thought about the broader ethical implications of these findings ( Rosenthal & Rosnow, 1975 ; Rosnow & Rosenthal, 1997 ). For example, in one aspect of our meta-analytic inquiry, we put the following question to the research literature: What are the variables that tend to increase or decrease the rates of volunteering obtained? Our preliminary answers to this question may have implications for both the theory and practice of behavioral science. That is, if we continue to learn more about the situational determinants of volunteering, we can learn more about the social psychology of social influence processes. Methodologically, once we learn more about the situational determinants of volunteering, we should be in a better position to reduce the bias in our samples that derives from the volunteer subjects being systematically different from nonvolunteers in a variety of characteristics. For example, one situational correlate was that the more important the research was perceived, the more likely people were to volunteer for it. Thus, mentioning the importance of the research during the recruitment phase might coax more of the “nonvolunteers” into the sampling pool. It would be unethical to exaggerate or misrepresent the importance of the research. By being honest, transparent, and informative, we are treating people with respect and also giving them a well-founded justification for asking them to volunteer their valuable time, attention, and cooperation. In sum, the five column headings of Table 3.5 frequently come precorrelated in the real world of research, often with implications for the principles in the row headings of the table.
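
The sampling consequences described above can be sketched in a few lines of simulation. The numbers are purely hypothetical (real volunteer–nonvolunteer differences vary in size and direction); the point is only that a volunteer-heavy sample biases an estimated mean, and that coaxing more reluctant participants into the pool shrinks that bias:

```python
import random

random.seed(1)

# Hypothetical population in which would-be volunteers score higher than nonvolunteers.
population = ([random.gauss(55, 10) for _ in range(3000)] +   # would-be volunteers
              [random.gauss(50, 10) for _ in range(7000)])    # would-be nonvolunteers
volunteer = [True] * 3000 + [False] * 7000

def sample_mean(p_vol, p_nonvol, n=500):
    """Mean of a sample in which volunteers and nonvolunteers agree to
    participate with different probabilities."""
    pool = [x for x, v in zip(population, volunteer)
            if random.random() < (p_vol if v else p_nonvol)]
    return sum(random.sample(pool, n)) / n

print(round(sum(population) / len(population), 1))     # population mean (about 51.5)
print(round(sample_mean(p_vol=0.9, p_nonvol=0.1), 1))  # volunteer-heavy sample overestimates it
print(round(sample_mean(p_vol=0.9, p_nonvol=0.6), 1))  # recruiting more "nonvolunteers" reduces the bias
```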

Turning more specifically to the row headings in Table 3.5 , rows A, B, C, and E reiterate the three “basic ethical principles” in the Belmont Report, which were described there as respect for persons, beneficence, and justice. Beneficence (the ethical ideal of “doing good”) was conflated with the principle of nonmaleficence (“not doing harm”), and the two were also portrayed as obligations assimilating two complementary responsibilities: (1) do not harm and (2) maximize possible benefits and minimize possible harms. Next in Table 3.5 is justice , by which we mean a sense of “fairness in distribution” or “what is deserved” (quoting from the Belmont Report). As the Belmont Report went on to explain: “Injustice occurs when some benefit to which a person is entitled is denied without good reason or when some burden is imposed unduly.” Conceding that “what is equal?” and “what is unequal?” are often complex, highly nuanced questions in a specific research situation (just as they are when questions of justice are associated with social practices, such as punishment, taxation, and political representation), justice was nonetheless considered a basic moral precept relevant to the ethics of research involving human subjects. Next in Table 3.5 is integrity , an ethical standard that was not distinctly differentiated in the Belmont Report but that was discussed in detail in the NAS guide ( Committee on Science, Engineering, and Public Policy, 2009 ). Integrity implies honesty and truthfulness; it also implies a prudent use of research funding and other resources and, of course, the disclosure of any conflicts of interest, financial or otherwise, so as not to betray public trust. Finally, respect was described in the Belmont Report as assimilating two obligations: “first, that individuals should be treated as autonomous agents, and second, that persons with diminished autonomy are entitled to protection.” In the current APA code, respect is equated with civil liberties—that is, privacy, confidentiality, and self-determination.

Inspecting the column headings in Table 3.5 , first, by transparency , we mean here that the quantitative results are presented in an open, frank, and candid way, that any technical language used is clear and appropriate, and that visual displays do not obfuscate the data but instead are as crystal clear as possible. Elements of graphic design are explained and illustrated in a number of very useful books and articles, particularly the work of Tufte (1983 , 1990 , 2006 ) and Wainer (1984 , 1996 , 2000 , 2009 ; Wainer & Thissen, 1981 ), and there is a burgeoning literature in every area of science on the visual display of quantitative data. Second, by informativeness , we mean that there is enough information reported to enable readers to make up their own minds on the basis of the primary results and enough to enable others to re-analyze the summary results for themselves. The development of meta-analysis, with emphasis on effect sizes and moderator variables, has stimulated ways of recreating summary data sets and vital effect size information, often from minimal raw ingredients. Third, the term precision is used not in a statistical sense (the likely spread of estimates of a parameter) but rather in a more general sense to mean that quantitative results should be reported to the degree of exactitude required by the given situation. For example, reporting the average scores on an attitude questionnaire to many decimal places is psychologically meaningless ( false precision ), and reporting the weight of mouse subjects to six decimal places is pointless ( needless precision ). Fourth, accuracy means that a conscientious effort is made to identify and correct mistakes in measurements, calculations, and the reporting of numbers. Accuracy also means not exaggerating results by, for example, making false claims that applications of the results are unlikely to achieve. Fifth, groundedness implies that the method of choice is appropriate to the question of interest, as opposed to using whatever is fashionable or having a computer program repackage the data in a one-size-fits-all conceptual framework. The methods we choose must be justifiable on more than just the grounds that they are what we were taught in graduate school, or that “this is what everyone else does” (cf. Cohen, 1990 , 1994 ; Rosnow & Rosenthal, 1995 , 1996 ; Zuckerman, Hodgins, Zuckerman, & Rosenthal, 1993 ).
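
As a small illustration of informativeness, consider how much a reader can recover from a minimally reported result. The sketch below uses standard effect-size conversion formulas (not anything specific to Table 3.5); the second function assumes a two-group design with equal sample sizes:

```python
import math

def r_from_t(t, df):
    """Effect size r recovered from a reported t statistic and its degrees of freedom."""
    return math.sqrt(t**2 / (t**2 + df))

def d_from_t(t, df):
    """Cohen's d recovered from t and df, assuming two groups of equal size."""
    return 2 * t / math.sqrt(df)

# A report that gives only "t(58) = 2.10" still lets a reader recover an effect size.
print(round(r_from_t(2.10, 58), 2))  # about 0.27
print(round(d_from_t(2.10, 58), 2))  # about 0.55
```

Reporting exact test statistics, degrees of freedom, and sample sizes (rather than only “p < .05”) is what makes this kind of re-analysis possible.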

Clinical Significance and the Consequences of Statistical Illiteracy

To bring this discussion of quantitative methods and ethics full circle, we turn finally to a problem that has been variously described as innumeracy ( Paulos, 1990 ) and statistical illiteracy. The terms are used to connote a lack of knowledge or understanding of the meaning of numbers, statistical concepts, or the numeric expression of summary statistics. As the authors of a popular book, The Numbers Game , put it: “Numbers now saturate the news, politics, life…. For good or for evil, they are today’s preeminent public language—and those who speak it rule” ( Blastland & Dilnot, 2009 , p. x). To be sure, even people who are most literate in the language of numbers are prone to wishful thinking and fearful thinking and, therefore, sometimes susceptible to those who use numbers and gimmicks to sway, influence, or even trick people. The mathematician who coined the term innumeracy told of how his vulnerability to whim “entrained a series of ill-fated investment decisions,” which he still found “excruciating to recall” ( Paulos, 2003 , p. 1). The launching point for the remainder of our discussion was an editorial in a medical journal several years ago, in which the writers lamented “the premature dissemination of research and the exaggeration of medical research findings” ( Schwartz & Woloshin, 2003 , p. 153). A large part of the problem is an emphasis on relative risk (RR) statistics that hook general readers into making unwarranted assumptions, a problem that may often begin with researchers, funders, and journals that “court media attention through press releases” ( Woloshin, Schwartz, Casella, Kennedy, & Larson, 2009 , p. 613). Confusion about risk and risk statistics is not limited to the general public (cf. Prasad, Jaeschke, Wyer, Keitz, & Guyatt, 2008 ), but it is the susceptible public ( Carling, Kristoffersen, Herrin, Treweek, Oxman, Schünemann, Akl, & Montori, 2008 ) that must ultimately pay the price of the accelerating costs of that confusion. Stirring the concept of statistical significance into this mix can frequently produce a truly astonishing amount of confusion. For example, writing in the Journal of the National Cancer Institute , Miller (2007) mentioned that many doctors equate the level of statistical significance of cancer data with the “degree of improvement a new treatment must make for it to be clinically meaningful” (p. 1832). 7

In the space remaining, we concentrate on misconceptions and illusions regarding the concepts of RR and statistical significance when the clinical significance of interventions is appraised through the lens of these concepts in randomized clinical trials (RCTs). As a case in point, a highly cited report on the management of depression, issued by the National Institute for Health and Clinical Excellence (NICE), used an RR of 0.80 or less as a threshold indicator of clinical significance in RCTs with dichotomous outcomes and statistically significant results. 8 We use the term clinical significance here in the way that it was defined in an authoritative medical glossary, although we recognize that it is a hypothetical construct laden with surplus meaning as well (cf. Jacobson & Truax, 1991 ). In the glossary, clinical significance was taken to mean that “an intervention has an effect that is of practical meaning to patients and health care providers” ( NICHSR, 2010 ; cf. Jeans, 1992 ; Kazdin, 1977 , 2008 ). By intervention , we mean a treatment or involvement such as a vaccine used in a public health immunization program to try to eradicate a preventable disease (e.g., the Salk poliomyelitis vaccine), or a drug that can be prescribed for a patient in the doctor’s office, or an over-the-counter medicine (e.g., aspirin) used to reduce pain or lessen the risk of an adverse event (e.g., heart attack), or a medication and/or psychotherapy to treat depression. By tradition, RCTs are the gold standard in evidence-based medicine when the goal is to appraise the clinical significance of interventions in a carefully controlled scientific manner. Claims contradicted by RCTs are not always immediately rejected in evidence-based medicine, as it has been noted that some “claims from highly cited observational studies persist and continue to be supported in the medical literature despite strong contradictory evidence from randomized trials” ( Tatsioni, Bonitsis, & Ioannidis, 2007 ). Of course, just as gold can fluctuate in value, so can conclusions based on the belief that statistical significance is a proxy for clinical significance, or on the belief that, given statistical significance, clinical significance is achieved only if the reduction in RR reaches some arbitrary fixed magnitude (recall, for example, NICE, 2004 ). The challenge is to reverse the accelerating cost curve of statistical illiteracy in an area that affects us all (see, for example, Gigerenzer, Gaissmaier, Kurz-Milcke, Schwartz, & Woloshin, 2008 ).

Table 3.6 helps us illustrate the folly of a delicate balancing act that is sometimes required between statistical significance and RR. The table shows a portion of the results from the aspirin component of a highly cited double-blind, placebo-controlled, randomized trial to test whether 325 milligrams of aspirin every other day reduces the mortality from cardiovascular disease and whether beta-carotene decreases the incidence of cancer ( Steering Committee of the Physicians’ Health Study Research Group, 1989 ). The aspirin component of the study was terminated earlier than planned on finding “a statistically significant, 44 [sic] percent reduction in the risk of myocardial infarction for both fatal and nonfatal events … [although] there continued to be an apparent but not significantly increased risk of stroke” (p. 132). RR (for relative risk) refers to the ratio of the incidence rate of the adverse event (the illness) in the treated sample to that in the control sample; RRR is the relative risk reduction; and RRI is the relative risk increase (the computation of these indices is described in Table 3.7 ). When tables of independent counts are set up as shown in Tables 3.6 and 3.7 , an RR less than 1.0 indicates that the treated sample fared better than the control sample (thereby implying RRR), and an RR greater than 1.0 indicates the treated sample did more poorly than the control (thereby implying RRI). Observe that the “slightly increased risk of stroke” (RRI = 92%) was actually more than twice the reduction in risk of heart attack (RRR = 42%)! Suppose the study had continued, and one more case of stroke had turned up in the aspirin group. The p -value would have reached the 0.05 level, and the researchers might have arrived at a different conclusion, possibly that the benefit with respect to heart attack was more than offset by the increased risk of stroke. Apparently, a p -value only a hair’s-breadth greater than 0.05 can trump an RR increase of 92%. On the other hand, the event rate of stroke in the study as a whole was only 0.16%, less than one-tenth the magnitude of the 1.7% event rate of heart attack in the study as a whole. 9

The fact is that RR statements are oblivious to event rates in the total N . To give a quick example, suppose in a study with 100 people each in the treated and control samples that 1 treated person and 5 untreated people (controls) became ill. RR and RRR would be 0.20 and 80%, respectively. Stating there was an 80% reduction in risk of the adverse event conveys hope. However, suppose we increase each sample size to 1,000 but still assume 1 case of illness in the treated sample and 5 cases of illness in the control sample. We would still find RR = 0.20 and RRR = 80%. It makes no difference how large we make the sample sizes, as RR and RRR will not budge from 0.20 and 80% so long as we assume 1 case of illness in the treated sample and 5 cases of illness in the control sample. Suppose we now hold the N constant and see what happens to the RR and RRR when the event rate in the overall N changes from one study to another. In Figure 3.3 , we see the results of six hypothetical studies in which the event rates increased from 1% in Studies 1 and 4, to 25% in Studies 2 and 5, to 50% in Studies 3 and 6. Nonetheless, in Studies 1, 2, and 3, RR remained constant at 0.05 and RRR remained constant at an attention-getting 95%. In Studies 4, 5, and 6, RR and RRR stayed constant at 0.82 and 18%, respectively.
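
A few lines of code make the arithmetic of this example explicit; the absolute risk reduction computed here anticipates the risk difference discussed below:

```python
def risk_indices(ill_treated, n_treated, ill_control, n_control):
    """Basic risk indices for a 2 x 2 table of counts."""
    risk_t = ill_treated / n_treated   # event rate in the treated sample
    risk_c = ill_control / n_control   # event rate in the control sample
    rr = risk_t / risk_c               # relative risk
    return {"RR": rr, "RRR": 1 - rr, "ARR": risk_c - risk_t}

# Same counts of illness (1 vs. 5), very different sample sizes:
print(risk_indices(1, 100, 5, 100))    # RR = 0.20, RRR = 80%, ARR = 0.04
print(risk_indices(1, 1000, 5, 1000))  # RR = 0.20, RRR = 80%, ARR = 0.004
```

The RR and RRR are identical in the two calls; only the absolute index registers that the event has become ten times rarer.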

Histograms based on the six studies in Table 3.7 , in which the total sample size ( N ) was 2,000 in each study. Darkened areas of the bars indicate the number of adverse outcomes (event rates), which increased from 1% (20 cases out of 2,000) in Studies 1 and 4, to 25% (500 cases out of 2,000) in Studies 2 and 5, to 50% (1,000 cases out of 2,000) in Studies 3 and 6. However, the relative risk (RR) and relative risk reduction (RRR) were insensitive to these vastly different event rates. In Studies 1, 2, and 3, the RR and RRR remained constant at 0.05 and 94.7%, respectively, whereas in Studies 4, 5, and 6, the RR and RRR remained constant at 0.82 and 18.2%, respectively.

Further details of the studies in Figure 3.3 are given in Table 3.7 . The odds ratio (OR), the ratio of two odds, was for a time widely promoted as a measure of association in 2 × 2 tables of counts ( Edwards, 1963 ; Mosteller, 1968 ) and is still frequently reported in epidemiological studies ( Morris & Gardner, 2000 ). As Table 3.7 shows, OR and RR are usually highly correlated. The absolute risk reduction (ARR), also called the risk difference (RD), refers to the absolute reduction in risk of the adverse event (illness) in the treated patients compared with the level of baseline risk in the control group. Gigerenzer et al. (2008) recommended using the absolute risk reduction (RD) rather than the RR. As Table 3.7 shows, RD (or ARR) is sensitive to the differences in the event rates. There are other advantages as well to RD, which are discussed elsewhere ( Rosenthal & Rosnow, 2008 , pp. 631–632). Phi is the product-moment correlation ( r ) when the two correlated variables are dichotomous, and Table 3.7 shows it is sensitive to the event rates and natural frequencies. Another useful index is NNT, the number of patients that need to be treated to prevent a single case of the adverse event (the reciprocal of the ARR). Relative risk may be an easy-to-handle description, but it is only an alerting indicator that tells us that something happened and we need to explore the data further. As Tukey (1977) , the consummate exploratory data analyst, stated: “Anything that makes a simpler description possible makes the description more easily handleable; anything that looks below the previously described surface makes the description more effective” (p. v). And, we can add, any index of the magnitude of effect that is clear enough, transparent enough, and accurate enough to inform the nonspecialist of exactly what we have learned from the quantitative data increases the ethical value of those data ( Rosnow & Rosenthal, 2011 ).
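
For readers without Table 3.7 at hand, all of these indices can be computed directly from the four cells of a 2 × 2 table using the standard formulas; the sketch below applies them to the hypothetical samples of 100 used earlier (1 of 100 treated participants ill, 5 of 100 controls ill):

```python
import math

def table_indices(a, b, c, d):
    """Indices for a 2 x 2 table of counts: treated row = (a ill, b not ill),
    control row = (c ill, d not ill)."""
    risk_t, risk_c = a / (a + b), c / (c + d)
    rd = risk_c - risk_t                       # absolute risk reduction (risk difference)
    odds_ratio = (a * d) / (b * c)             # ratio of the two odds
    phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    nnt = 1 / rd if rd != 0 else float("inf")  # patients treated to prevent one adverse event
    return {"RR": risk_t / risk_c, "OR": odds_ratio, "RD": rd,
            "phi": phi,                        # sign depends on how rows and columns are coded
            "NNT": nnt}

print(table_indices(a=1, b=99, c=5, d=95))
# RR = 0.20, OR = 0.19, RD = 0.04, phi = -0.12, NNT = 25
```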

In a cultural sphere in which so many things compete for our attention, it is not surprising that people seem to gravitate to quick, parsimonious forms of communication and, in the case of health statistics, to numbers that appear to speak directly to us. For doctors with little spare time to do more than browse abstracts of clinical trials or the summaries of summaries, parsimonious summary statistics such as the RRs reported for large-sample RCTs may seem heavily freighted with clinical meaning. For the general public, reading about a 94.7% reduction in the risk of some illness, either in a pharmaceutical advertisement or in a news story about a “miracle drug that does wonders,” is attention-riveting. It is the kind of information that is especially likely to arouse an inner urgency in patients, but also in anyone who is anxious and uncertain about their health. Insofar as such information exaggerates the absolute effects, it is not only the patient or the public that will suffer the consequences; the practice of medicine and the progress of science will as well. As Gigerenzer et al. (2008) wrote, “Statistical literacy is a necessary precondition for an educated citizenship in a technological democracy” (p. 53). There are promising opportunities for moral (and societal) rewards for quantitative methodologists who can help us educate our way out of statistical illiteracy. And that education will be beneficial, not only to the public but to many behavioral, social, and medical researchers as well. As that education takes place, there will be increased clarity, transparency, and accuracy of the quantitative methods employed, thereby increasing their ethical value.

Future Directions

An important theoretical and practical question remains to be addressed: To what extent is there agreement among quantitative methodologists in their evaluation of quantitative procedures as to the degree to which each procedure in a particular study meets the methodological standards of transparency, informativeness, precision, accuracy, and groundedness? The research program called for to address these psychometric questions of reliability will surely find that specific research contexts, specific disciplinary affiliations, and other specific individual differences (e.g., years of experience) will be moderators of the magnitudes of agreement (i.e., reliabilities) achieved. We believe that the results of such research will demonstrate that there will be some disagreement (that is, some unreliability) in quantitative methodologists’ evaluations of various standards of practice. And, as we noted above, that is likely to be associated with some disagreement (that is, some unreliability) in their evaluations of the ethical value of various quantitative procedures.

Another important question would be addressed by research asking the degree to which the specific goals and specific sponsors of the research may serve as causal factors in researchers’ choices of quantitative procedures. Teams of researchers (e.g., graduate students in academic departments routinely employing quantitative procedures in their research) could be assigned at random to analyze the data of different types of sponsors with different types of goals. It would be instructive to learn that choice of quantitative procedure was predictable from knowing who was paying for the research and what results the sponsors were hoping for. Recognition of the possibility that the choice of quantitative procedures used might be affected by the financial interests of the investigator is reflected in the increased frequency with which scientific journals (e.g., medical journals) require a statement from all co-authors of their financial interest in the company sponsoring the research (e.g., pharmaceutical companies).

Finally, it would be valuable to quantify the costs and utilities of doing and not doing a wide variety of specific studies, including classic and not-so-classic studies already conducted, and a variety of studies not yet conducted. Over time, there may develop a disciplinary consensus over the costs and utilities of a wide array of experimental procedures. And, as such a consensus builds over time, it will be of considerable interest to psychologists and sociologists of science to study disciplinary differences in such consensus-building. Part of such a program of self-study of disciplines doing quantitative research would focus on the quantitative procedures used, but the primary goal would be to apply survey research methods to establish the degree of consensus on research ethics of the behavioral, social, educational, and biomedical sciences. The final product of such a program of research would include the costs and utilities of doing, and of not doing, a wide variety of research studies.

Author Note

Ralph L. Rosnow is Thaddeus Bolton Professor Emeritus at Temple University, Philadelphia, PA ( [email protected] ). Robert Rosenthal is a Distinguished Professor at the University of California at Riverside ( [email protected] ) and Edgar Pierce Professor of Psychology, Emeritus, Harvard University.

Where we quote from a document but do not give the page numbers of the quoted material, it is because either there was no pagination or there was no consistent pagination in the online and hard copy versions that we consulted. Tables 3.1–3.4 reprint only the original material, as there were slight discrepancies between original material and online versions.

Pattullo (1982) described the logical basis on which “rulemakers” (like DHEW) had proceeded in terms of a syllogism emphasizing not the potential benefits of research but only the avoidance of risks of harm: “(1) Research can harm subjects; (2) Only impartial outsiders can judge the risk of harm; (3) Therefore, all research must be approved by an impartial outside group” (p. 376).

Hearings on the recommendations in the Belmont Report were conducted by the President’s Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research. Proceeding on the basis of the information provided at these hearings and on other sources of advice, the Department of Health and Human Services (DHHS) then issued a set of regulations in the January 26, 1981, issue of the Federal Register. A compendium of regulations and guidelines that now govern the implementation of the National Research Act and subsequent amendments can be found in the DHHS manual known as the “Gray Booklet,” specifically titled Guidelines for the Conduct of Research Involving Human Subjects at the National Institutes of Health (available online at http://ohsr.od.nih.gov/guidelines/index.html ).

Smith, Kimmel, and Klein (2009) reported that 43.4% of the articles on consumer research in leading journals in the field in 1975 through 1976 described some form of deception in the research. By 1989 through 1990, the percentage of such articles had increased to 57.7%; it remained roughly steady at 56% in 1996 through 1997, increased to 65.7% in 2001 through 2002, and jumped to 80.4% in 2006 through 2007. The issue of deception is further complicated by the fact that active and passive deceptions are far from rare in our society. Trial lawyers manipulate the truth in court on behalf of their clients; prosecutors surreptitiously record private conversations; journalists get away with using hidden cameras and undercover practices to obtain stories; and the police use sting operations and entrapment procedures to gather incriminating evidence (cf. Bok, 1978 , 1984 ; Saxe, 1991 ; Starobin, 1997 ).

The document, titled “Ethical Principles of Psychologists and Code of Conduct,” is available online at http://www.apa.org/ETHICS/code2002.html .

Adaptations of the models in Figures 3.1 and 3.2 have been used to cue students about possible ethical dilemmas in research and data analysis (cf. Bragger & Freeman, 1999 ; Rosnow, 1990 ; Strohmetz & Skleder, 1992 ).

The confusion of statistical significance with practical importance may be a more far-reaching problem in science. In a letter in Science , the writers noted that “almost all reviews and much of the original research [about organic foods] report only the statistical significance of the differences in nutrient levels—not whether they are nutritionally important” ( Clancy, Hamm, Levine, & Wilkins, 2009 , p. 676).

NICE (2004) also recommended that researchers use a standardized mean difference (SMD) of half a standard deviation or more (i.e., d or g ≥ 0.5) with continuous outcomes as the threshold of clinical significance for initial assessments of statistically significant summary statistics. However, effects far below the 0.5 threshold for SMDs have been associated with important interventions. For example, in the classic Salk vaccine trial ( Brownlee, 1955 ; Francis, Korns, Voight, Boisen, Hemphill, Napier, & Tolchinsky, 1955 ), phi = 0.011, which has a d -equivalent of 0.022 ( Rosnow & Rosenthal, 2008 ). It is probably the case across the many domains in which clinical significance is studied that larger values of d or g are in fact generally associated with greater intervention benefit, efficacy, or clinical importance. But it is also possible for large SMDs to have little or no clinical significance. Suppose a medication was tested on 100 pairs of identical twins with fever, and in each and every pair, the treated twin loses exactly one-tenth of 1 degree more than the control twin. The SMD will be infinite, inasmuch as the variability (the denominator of d or g ) will be 0, but few doctors would consider this ES clinically significant. As Cohen (1988) wisely cautioned, “the meaning of any given ES is, in the final analysis, a function of the context in which it is embedded” (p. 535).
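
The d-equivalent quoted for the Salk trial follows from the standard conversion between a correlation-type effect size (here, phi) and d; this is a generic formula, not a re-analysis of the trial data:

```python
import math

def d_from_r(r):
    """Convert a correlation-type effect size (such as phi) to Cohen's d."""
    return 2 * r / math.sqrt(1 - r**2)

print(round(d_from_r(0.011), 3))  # about 0.022, the value cited above
```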

The high RR of hemorrhagic stroke (HS) in this study, in which participants (male physicians) took 325 milligrams every other day, might explain in part why the current dose for MI (myocardial infarction) prophylaxis is tempered at only 81 milligrams per day.

Allison, D. B. ( 2009 ). The antidote to bias in research.   Science, 326, 522.


American Psychological Association. ( 1973 ). Ethical principles in the conduct of research with human participants . Washington, DC: Author.


American Psychological Association. ( 1982 ). Ethical principles in the conduct of research with human participants . Washington, DC: Author.

American Statistical Association (1999). Ethical guidelines for statistical practice . http://www.amstat.org/profession/index.cfm?fusaction=ethical

Atwell, J. E. ( 1981 ). Human rights in human subjects research. In A. J. Kimmel (Ed.), New directions for methodology of social and behavioral science: Ethics of human subject research (No. 10, pp. 81–90). San Francisco, CA: Jossey-Bass.

Baltimore, D. (January 27, 1997 ). Philosophical differences. The New Yorker , p. 8.

Beecher, H. K. (July 2, 1966 ). Documenting the abuses. Saturday Review , 45–46.

Beecher, H. K. ( 1970 ). Research and the individual . Boston: Little Brown.

Blanck, P. D. , Bellack, A. S. , Rosnow, R. L. , Rotheram-Borus, M. J. , & Schooler, N. R. ( 1992 ). Scientific rewards and conflicts of ethical choices in human subjects research.   American Psychologist, 47, 959–965.

Blass, T. ( 2004 ). The man who shocked the world: The life and legacy of Stanley Milgram . New York: Basic Books.

Blastland, M. , & Dilnot, A. ( 2009 ). The numbers game . New York: Gotham Books.

Bok, S. ( 1978 ). Lying: Moral choice in public and private life . New York: Pantheon.

Bok, S. ( 1984 ). Secrets: On the ethics of concealment and revelation . New York: Vintage Books.

Bordia, P. , & Rosnow, R. L. ( 1995 ). Rumor rest stops on the information highway: A naturalistic study of transmission patterns in a computer-mediated rumor chain.   Human Communication Research, 25, 163–179.

Bragger, J. D. , & Freeman, M. A. ( 1999 ). Using a cost–benefit analysis to teach ethics and statistics.   Teaching of Psychology, 26, 34–36.

Brownlee, K. A. ( 1955 ). Statistics of the 1954 polio vaccine trials.   Journal of the American Statistical Association, 50, 1005–1013.

Carling, C. , Kristoffersen, D. T. , Herrin, J. , Treweek, S. , Oxman, A. D. , Schünemann, H. , et al. ( 2008 ). How should the impact of different presentations of treatment effects on patient choice be evaluated?   PLoS One, 3(11), e3693.

Cicerone, R. J. ( 2010 ). Ensuring integrity in science.   Science, 327, 624.

Clancy, K. , Hamm, M. , Levine, A. S. , & Wilkins, J. ( 2009 ), Organics: Evidence of health benefits lacking.   Science, 325, 676.

Cohen, J. ( 1988 ). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cohen, J. ( 1990 ). Things I have learned (so far).   American Psychologist, 45, 1304–1312.

Cohen, J. ( 1994 ). The earth is round (p < .05).   American Psychologist, 49, 997–1003.

Committee on Science, Engineering, and Public Policy ( 2009 ). On being a scientist: A guide to responsible conduct in research (3rd ed.). Washington, DC: National Academies Press.

Edwards, A. W. F. ( 1963 ). The measure of association in a 2×2 table.   Journal of the Royal Statistical Society, 126, 109–114.

Fairchild, A. L. , & Bayer, R. ( 1999 ). Uses and abuses of Tuskegee.   Science, 284, 918–921.

Francis, T., Jr. , Korns, R. F. , Voight, R. B. , Boisen, M. , Hemphill, F. , Napier, J. , & Tolchinsky, E. ( 1955 ). An evaluation of the 1954 poliomyelitis vaccine trials: A summary report.   American Journal of Public Health, 45(5), 1–63.

Gigerenzer, G. , Gaissmaier, W. , Kurz-Milcke, E. , Schwartz, L. M. , & Woloshin, S. ( 2008 ). Helping doctors and patients make sense of health statistics.   Psychological Science in the Public Interest, 8, 53–96.

Harris, G. (March 10, 2009). Doctor admits pain studies were frauds, hospital says. The New York Times . Retrieved March 20, 2009, from www.nytimes.com/2009/03/11/health/research/11pain.html?ref=us .

Haywood, H. C. ( 1976 ). The ethics of doing research … and of not doing it.   American Journal of Mental Deficiency, 81, 311–317.

Huff, D. ( 1982 ). How to lie with statistics . New York: Norton.

Jacobson, N. S. , & Truax, P. ( 1991 ). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research.   Journal of Consulting and Clinical Psychology, 59, 12–19.

Jeans, M. E. ( 1992 ). Clinical significance of research: A growing concern.   Canadian Journal of Nursing, 24, 1–2.

Jones, J. H. ( 1993 ). Bad blood: The Tuskegee syphilis experiment (Revised edition). New York: Free Press.

Kaiser, J. (September 18, 2009 ). Researcher, two universities sued over validity of prostate cancer test.   Science, 325, 1484.

Kazdin, A. E. ( 1977 ). Assessing the clinical or applied importance of behavior change through social validation.   Behavior Modification, 1, 427–451.

Kazdin, A. E. ( 2008 ). Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care.   American Psychologist, 63, 146–159.

Katz, J. ( 1972 ). Experimentation with human beings . New York: Russell Sage.

Kelman, H. C. ( 1968 ). A time to speak: On human values and social research . San Francisco, CA: Jossey-Bass.

Kimmel, A. J. ( 1991 ). Predictable biases in the ethical decision making of psychologists.   American Psychologist, 46, 786–788.

Kossinets, G. , & Watts, D. J. ( 2006 ). Empirical analysis of an evolving social network.   Science, 311, 88–90.

Mark, M. M. , Eyssell, K. M. , & Campbell, B. ( 1999 ). The ethics of data collection and analysis. In J. L. Fitzpatrick & M. Morris (Eds.), Ethical issues in program evaluation (pp. 47–56). San Francisco, CA: Jossey-Bass.

Milgram, S. ( 1963 ). Behavioral study of obedience.   Journal of Abnormal and Social Psychology, 67, 371–378.

Milgram, S. ( 1975 ). Obedience to authority: An experimental view . New York: Harper Colophon Books.

Miller, J. D. ( 2007 ). Finding clinical meaning in cancer data.   Journal of the National Cancer Institute, 99(24), 1832–1835.

Morris, J. A. , & Gardner, M. J. ( 2000 ). Epidemiological studies. In D. A. Altman , D. Machin , T. N. Bryant , & M. J. Gardner (Eds.). Statistics with confidence (2nd. ed., pp. 57–72). London: British Medical Journal Books.

Mosteller, F. ( 1968 ). Association and estimation in contingency tables.   Journal of the American Statistical Association, 63, 1–28.

National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (April 18, 1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research . Retrieved April 8, 2009, http://ohsr.od.nih.gov/guidelines/belmont.html .

National Commission on Research. ( 1980 ). Accountability: Restoring the quality of the partnership.   Science, 207, 1177–1182.

NICE. ( 2004 ). Depression: Management of depression in primary and secondary care (Clinical practice guideline No. 23). London: National Institute for Health and Clinical Excellence

NICHSR. ( 2010 ). HTA 101: Glossary . Retrieved March 3, 2010 from http://www.nlm.nih.gov/nichsr/hta101/ta101014.html

Orne, M. T. ( 1962 ). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications.   American Psychologist, 17, 776–783.

Orne, M. T. ( 1969 ). Demand characteristics and the concept of quasi-control. In R. Rosenthal & R. L. Rosnow (Eds.). Artifact in behavioral research (pp. 143–179). New York: Academic Press. (Reissued in Rosenthal & Rosnow, 2009, pp. 110–137).

Panter, A. T. , & Sterba, S. K. (Eds.). ( 2011 ). Handbook of ethics in quantitative methodology . New York: Routledge.

Pattullo, E. L. ( 1982 ). Modesty is the best policy: The federal role in social research. In T. L. Beauchamp , R. R. Faden , R. J. Wallace, Jr. , & L. Walters (Eds.), Ethical issues in social research (pp. 373–390). Baltimore, MD: Johns Hopkins University Press.

Paulos, J. A. ( 1990 ). Innumeracy: Mathematical illiteracy and its consequences . New York: Vintage Books.

Paulos, J. A. ( 2003 ). A mathematician plays the stock market . New York: Basic Books.

PLoS Medicine Editors (September, 2009 ). Ghostwriting: The dirty little secret of medical publishing that just got bigger.   PLoS Medicine, 6(9), e1000156. Accessed September 19, 2009 from http://www.plosmedicine.org/static/ghostwriting.action .

Prasad, K. , Jaeschke, R. , Wyer, P. , Keitz, S. , & Guyatt, G. ( 2008 ). Tips for teachers of evidence-based medicine: Understanding odds ratios and their relationship to risk ratios.   Journal of General Internal Medicine, 23(5), 635–640.

Reynolds, P. D. ( 1975 ). Value dilemmas in the professional conduct of social science.   International Social Science Journal, 27, 563–611.

Rosenthal, R. ( 1994 ). Science and ethics in conducting, analyzing, and reporting psychological research.   Psychological Science, 5, 127–134.

Rosenthal, R. , & Rosnow, R. L. ( 1969 ). The volunteer subject. In R. Rosenthal & R. L. Rosnow (Eds.), Artifact in behavioral research (pp. 59–118). New York: Academic Press. (Reissued in Rosenthal & Rosnow, 2009, pp. 48–92).

Rosenthal, R. , & Rosnow, R. L. ( 1975 ). The volunteer subject . New York: Wiley-Interscience. (Reissued in Rosenthal & Rosnow, 2009, pp. 667–862).

Rosenthal, R. , & Rosnow, R. L. ( 1984 ). Applying Hamlet’s question to the ethical conduct of research.   American Psychologist, 45, 775–777.

Rosenthal, R. , & Rosnow, R. L. ( 2008 ). Essentials of behavioral research: Methods and data analysis (3rd ed.). New York: McGraw-Hill.

Rosenthal, R. , & Rosnow, R. L. ( 2009 ). Artifacts in behavioral research . New York: Oxford University Press.

Rosnow, R. L. ( 1990 ). Teaching research ethics through role-play and discussion.   Teaching of Psychology, 17, 179–181.

Rosnow, R. L. ( 1997 ). Hedgehogs, foxes, and the evolving social contract in psychological science: Ethical challenges and methodological opportunities.   Psychological Methods, 2, 345–356.

Rosnow, R. L. , & Rosenthal, R. ( 1970 ). Volunteer effects in behavioral research. In K. H. Craik , B. Kleinmuntz , R. L. Rosnow , R. Rosenthal , J. A. Cheyne , & R. H. Walters , New directions in psychology (pp. 211–277). New York: Holt, Rinehart & Winston.

Rosnow, R. L. , & Rosenthal, R. ( 1995 ). “Some things you learn aren’t so”: Cohen’s paradox, Asch’s paradigm, and the interpretation of interaction.   Psychological Science, 6, 3–9.

Rosnow, R. L. , & Rosenthal, R. ( 1996 ). Contrasts and interactions redux: Five easy pieces.   Psychological Science, 7, 253–257.

Rosnow, R. L. , & Rosenthal, R. ( 1997 ). People studying people: Artifacts and ethics in behavioral research . New York: W. H. Freeman.

Rosnow, R. L. , & Rosenthal, R. ( 2008 ). Assessing the effect size of outcome research. In A. M. Nezu & C. M. Nezu (Eds.), Evidence-based outcome research (pp. 379–401). New York: Oxford University Press.

Rosnow, R. L. , & Rosenthal, R. ( 2011 ). Ethical principles in data analysis: An overview. In A. T. Panter & S. K. Sterba (Eds.), Handbook of ethics in quantitative methodology (pp. 37–58). New York: Routledge.

Rosnow, R. L. , Rotheram-Borus, M. J. , Ceci, S. J. , Blanck, P. D. , & Koocher, G. P. ( 1993 ). The institutional review board as a mirror of scientific and ethical standards.   American Psychologist, 48, 821–826.

Rosnow, R. L. , Skleder, A. A. , Jaeger, M. E. , & Rind, B. ( 1994 ). Intelligence and the epistemics of interpersonal acumen: Testing some implications of Gardner’s theory. Intelligence, 19 , 93–116.

Sales, B. D. , & Folkman, S. (Eds.). ( 2000 ). Ethics in research with human participants . Washington, DC: American Psychological Association.

Saxe, L. ( 1991 ). Lying: Thoughts of an applied social psychologist.   American Psychologist, 46, 409–415.

Schwartz, L. M. , & Woloshin, S. ( 2003 ). On the prevention and treatment of exaggeration.   Journal of General Internal Medicine, 18(2), 153–154.

Scott-Jones, D. , & Rosnow, R. L. ( 1998 ). Ethics and mental health research. In H. Friedman (Ed.), Encyclopedia of mental health (Vol. 2, pp. 149–160). San Diego, CA: Academic Press.

Smith, M. B. ( 1969 ). Social psychology and human values . Chicago, IL: Aldine.

Smith, N. C. , Kimmel, A. J. , & Klein, J. G. ( 2009 ). Social contract theory and the ethics of deception in consumer research.   Journal of Consumer Psychology, 19, 486–496.

Spielmans, G. I. , & Parry, P. I. ( 2010 ). From evidence-based medicine to marketing-based medicine: Evidence from internal industry documents.   Journal of Bioethical Inquiry. doi:10.1007/s11673-010-9208-8

Starobin, P. (January 28, 1997 ). Why those hidden cameras hurt journalism. The New York Times , p. A21.

Steering Committee of the Physicians’ Health Study Research Group. ( 1989 ). Final report on the aspirin component of the ongoing Physicians’ Health Study.   New England Journal of Medicine, 321, 129–135.

Strohmetz, D. B. , & Skleder, A. A. ( 1992 ). The use of role-play in teaching research ethics: A validation study.   Teaching of Psychology, 19, 106–108.

Stryker, J. (April 13, 1997 ). Tuskegee’s long arm still touches a nerve. The New York Times , p. 4.

Tatsioni, A. , Bonitsis, N. G. , & Ioannidis, J. P. A. ( 2007 ). Persistence of contradicted claims in the literature.   Journal of the American Medical Association, 298, 2517–2526.

Trials of War Criminals before the Nuernberg Military Tribunals under Control Council Law No. 10, October 1946-April 1949 , Vol. II. Washington, DC: U.S. Government Printing Office.

Tufte, E. R. ( 1983 ). The visual display of quantitative information . Cheshire, CT: Graphics Press.

Tufte, E. T. ( 1990 ). Envisioning information . Cheshire, CT: Graphics Press.

Tufte, E. T. ( 2006 ). Beautiful evidence . Cheshire, CT: Graphics Press.

Tukey, J. W. ( 1977 ). Exploratory data analysis . Reading, MA: Addison-Wesley.

Turner, E. H. , Matthews, A. M. , Linardatos, E. , Tell, R. A. , & Rosenthal, R. ( 2008 ). Selective publication of antidepressant trials and its influence on apparent efficacy.   New England Journal of Medicine, 358(3), 252–260.

VandenBos, G. R. (Ed.). ( 2007 ). APA dictionary of psychology . Washington, DC: American Psychological Association.

Vedula, S. S. , Bero, L. , Scherer, R. W. , & Dickersin, K. ( 2009 ). Outcome reporting in industry-sponsored trials of Gabapentin for off-label use.   New England Journal of Medicine, 361(20), 1963–1971.

Vinacke, W. E. ( 1954 ). Deceiving experimental subjects.   American Psychologist, 9, 155.

Wainer, H. ( 1984 ). How to display data badly.   American Statistician, 38, 137–147.

Wainer, H. ( 1996 ). Depicting error.   American Statistician, 50(2), 101–111.

Wainer, H. ( 2000 ). Visual revelations: Graphical tales of fate and deception from Napoleon Bonaparte to Ross Perot . Mahwah, NJ: Erlbaum.

Wainer, H. ( 2009 ). Picturing the uncertain world . Princeton, NJ: Princeton University Press.

Wainer, H. , & Thissen, D. ( 1981 ). Graphical data analysis.   Annual Review of Psychology, 32, 191–241.

Wasserman, S. , & Faust, K. ( 1994 ). Social network analysis: Methods and applications . Cambridge, England: Cambridge University Press.

Woloshin, S. , Schwartz, L. M. , Casella, S. L. , Kennedy, A. T. , & Larson, R. J. ( 2009 ). Press releases by academic medical centers: Not so academic?   Annals of Internal Medicine, 150(9), 613–618.

Zuckerman, M. , Hodgins, H. S. , Zuckerman, A. , & Rosenthal, R. ( 1993 ). Contemporary issues in the analysis of data: A survey of 551 psychologists.   Psychological Science, 4, 49–53.


Chapter 3: Research Ethics

In 1998 a medical journal called  The Lancet  published an article of interest to many psychologists. The researchers claimed to have shown a statistical relationship between receiving the combined measles, mumps, and rubella (MMR) vaccine and the development of autism—suggesting furthermore that the vaccine might even cause autism. One result of this report was that many parents decided not to have their children vaccinated (becoming a cultural phenomenon known as “anti-vaxxers”), which of course put them at higher risk for measles, mumps, and rubella. However, follow-up studies by other researchers consistently failed to find a statistical relationship between the MMR vaccine and autism—and it is generally accepted now that there is no relationship. In addition, several more serious problems with the original research were uncovered. Among them were that the lead researcher stood to gain financially from his conclusions because he had patented a competing measles vaccine. He had also used biased methods to select and test his research participants and had used unapproved and medically unnecessary procedures on them. In 2010  The Lancet  retracted the article, and the lead researcher’s right to practice medicine was revoked (Burns, 2010) [1] .

In 2011 Diederik Stapel, a prominent and well-regarded social psychologist at Tilburg University in the Netherlands, was found to have perpetrated an audacious academic crime: fabricating data [2] . Following a multi-university investigation, Stapel confessed to having made up the data for at least 55 studies that he had published in scientific journals since 2004. This revelation came as a shock to researchers, including some of his colleagues who had spent time and valuable resources designing and conducting studies that built on some of Stapel's fraudulently published findings. Even more tragically, Stapel revealed that he had perpetrated the same fraud in 10 doctoral dissertations he oversaw, actions that harmed the academic careers of his former students. At a more general level, however, Stapel's actions inflicted a serious blow to the honour code that scientists abide by. Science is, after all, a shared process of discovery that requires researchers to be honest about their work and findings, whether or not their research hypotheses are supported by the data they collect. Breaching this trust as seriously as Stapel did undermines the entire foundation of the process. Needless to say, Stapel was suspended from his position at Tilburg University. In addition, the American Psychological Association retracted a Career Trajectory Award it had presented to Stapel in 2009, and the Dutch government launched an investigation into his misuse of research funding. Stapel has since returned the doctorate he received from the University of Amsterdam, noting that his "behaviour of the past years are inconsistent with the duties associated with the doctorate." Stapel also apologized to his colleagues, saying, "I have failed as a scientist and researcher. I feel ashamed for it and have great regret." [3]

In political psychology, a contentious case of fraudulent data resulted in a retracted paper from the prestigious journal Science , as well as a rescinded job offer from Princeton University. Michael LaCour, a graduate student in political science, published a surprising result with Donald Green, an established professor at Columbia University: interacting with a gay canvasser can change a voter's opinion on gay equality. LaCour's fabrications extended beyond the data to grants, awards, and ethical approval. Although Green requested the retraction of the Science article without consulting his co-author, LaCour stands by the data [4] .

In this chapter we explore the ethics of scientific research in psychology. We begin with a general framework for thinking about the ethics of scientific research in psychology. Then we look at some specific ethical codes for biomedical and behavioural researchers—focusing on the Ethics Code of the American Psychological Association and the Tri-Council Policy Statement (TCPS 2). Finally, we consider some practical tips for conducting ethical research in psychology.

  • Burns, J. F. (2010, May 24). British medical council bars doctor who linked vaccine to autism. The New York Times. Retrieved from http://www.nytimes.com/2010/05/25/health/policy/25autism.html?ref=andrew_wakefield
  • Jump, P. (2011, November 28). A star's collapse. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2011/11/28/scholars-analyze-case-massive-research-fraud
  • Carey, B. (2011, November 2). Fraud case seen as a red flag for psychology research. The New York Times. Retrieved from http://www.nytimes.com/2011/11/03/health/research/noted-dutch-psychologist-stapel-accused-of-research-fraud.html
  • Singal, J. (2015, May 29). The case of the amazing gay-marriage data: How a graduate student reluctantly uncovered a huge scientific fraud. New York Magazine. Retrieved from http://nymag.com/scienceofus/2015/05/how-a-grad-student-uncovered-a-huge-fraud.html

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Ethical Considerations in Research | Types & Examples

Published on 7 May 2022 by Pritha Bhandari .

Ethical considerations in research are a set of principles that guide your research designs and practices. Scientists and researchers must always adhere to a certain code of conduct when collecting data from people.

The goals of human research often include understanding real-life phenomena, studying effective treatments, investigating behaviours, and improving lives in other ways. What you decide to research and how you conduct that research involve key ethical considerations.

These considerations work to:

  • Protect the rights of research participants
  • Enhance research validity
  • Maintain scientific integrity

Table of contents

  • Why do research ethics matter?
  • Getting ethical approval for your study
  • Types of ethical issues
  • Voluntary participation
  • Informed consent
  • Confidentiality
  • Potential for harm
  • Results communication
  • Examples of ethical failures
  • Frequently asked questions about research ethics

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe for research subjects.

You’ll balance pursuing important research aims with using ethical research methods and procedures. It’s always necessary to prevent permanent or excessive harm to participants, whether inadvertent or not.

Violating research ethics will also lower the credibility of your research, because it is hard for others to trust your data if your methods are morally questionable.

Even if a research idea is valuable to society, it doesn’t justify violating the human rights or dignity of your study participants.


Before you start any study involving data collection with people, you’ll submit your research proposal to an institutional review board (IRB) .

An IRB is a committee that checks whether your research aims and research design are ethically acceptable and follow your institution’s code of conduct. They check that your research materials and procedures are up to code.

If successful, you’ll receive IRB approval, and you can begin collecting data according to the approved procedures. If you want to make any changes to your procedures or materials, you’ll need to submit a modification application to the IRB for approval.

If unsuccessful, you may be asked to re-submit with modifications or your research proposal may receive a rejection. To get IRB approval, it’s important to explicitly note how you’ll tackle each of the ethical issues that may arise in your study.

There are several ethical issues you should always pay attention to in your research design, and these issues can overlap with each other.

You’ll usually outline ways you’ll deal with each issue in your research proposal if you plan to collect data from participants.

Voluntary participation means that all research subjects are free to choose to participate without any pressure or coercion.

All participants are able to withdraw from, or leave, the study at any point without feeling an obligation to continue. Your participants don’t need to provide a reason for leaving the study.

It’s important to make it clear to participants that there are no negative consequences or repercussions to their refusal to participate. After all, they’re taking the time to help you in the research process, so you should respect their decisions without trying to change their minds.

Voluntary participation is an ethical principle protected by international law and many scientific codes of conduct.

Take special care to ensure there’s no pressure on participants when you’re working with vulnerable groups of people who may find it hard to stop the study even when they want to.

Informed consent refers to a situation in which all potential participants receive and understand all the information they need to decide whether they want to participate. This includes information about the study's benefits, risks, funding, and institutional approval. Typically, you make sure potential participants know:

  • What the study is about
  • The risks and benefits of taking part
  • How long the study will take
  • Your supervisor’s contact information and the institution’s approval number

Usually, you’ll provide participants with a text for them to read and ask them if they have any questions. If they agree to participate, they can sign or initial the consent form. Note that this may not be sufficient for informed consent when you work with particularly vulnerable groups of people.

If you’re collecting data from people with low literacy, make sure to verbally explain the consent form to them before they agree to participate.

For participants with very limited English proficiency, you should always translate the study materials or work with an interpreter so they have all the information in their first language.

In research with children, you’ll often need informed permission for their participation from their parents or guardians. Although children cannot give informed consent, it’s best to also ask for their assent (agreement) to participate, depending on their age and maturity level.

Anonymity means that you don’t know who the participants are and you can’t link any individual participant to their data.

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, and videos.

In many cases, it may be impossible to truly anonymise data collection. For example, data collected in person or by phone cannot be considered fully anonymous because some personal identifiers (demographic information or phone numbers) are impossible to hide.

You’ll also need to collect some identifying information if you give your participants the option to withdraw their data at a later stage.

Data pseudonymisation is an alternative method where you replace identifying information about participants with pseudonymous, or fake, identifiers. The data can still be linked to participants, but it’s harder to do so because you separate personal information from the study data.
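As a rough sketch of how pseudonymisation might be implemented in practice (Python, with hypothetical field names such as name, email, and phone), identifying fields can be replaced with random codes while the code-to-identity key is kept separately from the study data:

```python
import csv
import secrets

def pseudonymise(rows, id_fields=("name", "email", "phone")):
    """Replace identifying fields with random codes.

    Returns (pseudonymised_rows, key_table). The key table linking codes
    to identities should be stored separately, with restricted access.
    """
    key_table = {}          # code -> original identifying values
    pseudonymised = []
    for row in rows:
        code = "P-" + secrets.token_hex(4)          # e.g. "P-3fa91c07"
        key_table[code] = {f: row.get(f, "") for f in id_fields}
        clean = {k: v for k, v in row.items() if k not in id_fields}
        clean["participant_code"] = code
        pseudonymised.append(clean)
    return pseudonymised, key_table

# Example usage with made-up data
participants = [
    {"name": "A. Example", "email": "a@example.org", "phone": "555-0100", "score": 42},
    {"name": "B. Example", "email": "b@example.org", "phone": "555-0101", "score": 37},
]
study_data, key_table = pseudonymise(participants)

# The study data (no direct identifiers) and the key table are written to
# different locations; only the key table allows re-identification, for
# example when a participant later asks to withdraw their data.
with open("study_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant_code", "score"])
    writer.writeheader()
    writer.writerows(study_data)
```

In practice the key table would sit behind access controls (or with a designated data custodian) and be destroyed once withdrawal or follow-up is no longer possible.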

Confidentiality means that you know who the participants are, but you remove all identifying information from your report.

All participants have a right to privacy, so you should protect their personal data for as long as you store or use it. Even when you can’t collect data anonymously, you should secure confidentiality whenever you can.

Some research designs aren’t conducive to confidentiality, but it’s important to make all attempts and inform participants of the risks involved.

As a researcher, you have to consider all possible sources of harm to participants. Harm can come in many different forms.

  • Psychological harm: Sensitive questions or tasks may trigger negative emotions such as shame or anxiety.
  • Social harm: Participation can involve social risks, public embarrassment, or stigma.
  • Physical harm: Pain or injury can result from the study procedures.
  • Legal harm: Reporting sensitive data could lead to legal risks or a breach of privacy.

It’s best to consider every possible source of harm in your study, as well as concrete ways to mitigate them. Involve your supervisor to discuss steps for harm reduction.

Make sure to disclose all possible risks of harm to participants before the study to get informed consent. If there is a risk of harm, prepare to provide participants with resources, counselling, or medical services if needed.

For example, if your survey includes sensitive questions, some of them may bring up negative emotions, so you inform participants about the sensitive nature of the survey beforehand and assure them that their responses will be confidential.

The way you communicate your research results can sometimes involve ethical issues. Good science communication is honest, reliable, and credible. It’s best to make your results as transparent as possible.

Take steps to actively avoid plagiarism and research misconduct wherever possible.

Plagiarism means submitting others’ works as your own. Although it can be unintentional, copying someone else’s work without proper credit amounts to stealing. It’s an ethical problem in research communication because you may benefit by harming other researchers.

Self-plagiarism is when you republish or re-submit parts of your own papers or reports without properly citing your original work.

This is problematic because you may benefit from presenting your ideas as new and original even though they’ve already been published elsewhere in the past. You may also be infringing on your previous publisher’s copyright, violating an ethical code, or wasting time and resources by doing so.

In extreme cases of self-plagiarism, entire datasets or papers are sometimes duplicated. These are major ethical violations because they can skew research findings if taken as original data.

For example, you might notice that two published studies have similar characteristics even though they are from different years: their sample sizes, locations, treatments, and results are highly similar, and the studies share one author in common.

Research misconduct

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement about data analyses.

Research misconduct is a serious ethical issue because it can undermine scientific integrity and institutional credibility. It leads to a waste of funding and resources that could have been used for alternative research.

A prominent example is the retracted 1998 study that claimed a link between the MMR vaccine and autism. Later investigations revealed that Wakefield and his colleagues fabricated and manipulated their data to show a nonexistent link between vaccines and autism. Wakefield also neglected to disclose important conflicts of interest, and his medical license was taken away.

This fraudulent work sparked vaccine hesitancy among parents and caregivers. The rate of MMR vaccinations in children fell sharply, and measles outbreaks became more common due to a lack of herd immunity.

Research scandals with ethical failures are littered throughout history, but some took place not that long ago.

Some scientists in positions of power have historically mistreated or even abused research participants to investigate research problems at any cost. These participants were prisoners, were under the researchers' care, or otherwise trusted the researchers to treat them with dignity.

To demonstrate the importance of research ethics, we’ll briefly review two research studies that violated human rights in modern history.

During the Second World War, Nazi doctors carried out experiments on concentration camp prisoners without their consent. These experiments were inhumane and resulted in trauma, permanent disabilities, or death in many cases.

After some Nazi doctors were put on trial for their crimes, the Nuremberg Code of research ethics for human experimentation was developed in 1947 to establish a new standard for human experimentation in medical research.

In the Tuskegee syphilis study, which began in 1932, researchers told participants they were receiving free health care. In reality, the actual goal was to study the effects of syphilis when left untreated, and the researchers never informed participants about their diagnoses or the research aims.

Although participants experienced severe health problems, including blindness and other complications, the researchers only pretended to provide medical care.

When treatment became possible in 1943, 11 years after the study began, none of the participants were offered it, despite their health conditions and high risk of death.

Ethical failures like these resulted in severe harm to participants, wasted resources, and lower trust in science and scientists. This is why all research institutions have strict ethical guidelines for performing research.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
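As a minimal illustration of reporting only aggregate information (plain Python, using made-up, hypothetical records), group-level summaries can be computed and reported instead of individual responses:

```python
from statistics import mean, stdev

# Hypothetical pseudonymised records: no names, only a grouping variable and a score.
records = [
    {"condition": "control",   "score": 61},
    {"condition": "control",   "score": 55},
    {"condition": "treatment", "score": 72},
    {"condition": "treatment", "score": 68},
]

# Report group-level summaries only, never individual rows.
by_group = {}
for r in records:
    by_group.setdefault(r["condition"], []).append(r["score"])

for group, scores in sorted(by_group.items()):
    print(f"{group}: n={len(scores)}, mean={mean(scores):.1f}, sd={stdev(scores):.1f}")
```

Only counts, means, and standard deviations appear in the report; no individual response is reproduced.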

Research misconduct is committed intentionally and can have serious consequences; it is not a simple mistake or a point of disagreement about data analyses but a serious ethical failure.


Bhandari, P. (2022, May 07). Ethical Considerations in Research | Types & Examples. Scribbr. Retrieved 15 April 2024, from https://www.scribbr.co.uk/research-methods/ethical-considerations/


Ethical considerations in research: Best practices and examples


To conduct responsible research, you've got to think about ethics. Ethical practices protect participants' rights and well-being, and they ensure your findings are valid and reliable. This isn't just a box for you to tick. It's a crucial consideration that can make all the difference to the outcome of your research.

In this article, we'll explore the meaning and importance of research ethics in today's research landscape. You'll learn best practices to conduct ethical and impactful research.

Examples of ethical considerations in research

As a researcher, you're responsible for ethical research alongside your organization. Fulfilling ethical guidelines is critical. Organizations must ensure employees follow best practices to protect participants' rights and well-being.

Keep these things in mind when it comes to ethical considerations in research:

Voluntary participation

Voluntary participation is key. Nobody should feel like they're being forced to participate or pressured into doing anything they don't want to. That means giving people a choice and the ability to opt out at any time, even if they've already agreed to take part in the study.

Informed consent

Informed consent isn't just an ethical consideration. It's a legal requirement as well. Participants must fully understand what they're agreeing to, including potential risks and benefits.

The best way to go about this is by using a consent form; a minimal checklist sketch follows the list below. Make sure you include:

  • A brief description of the study and research methods.
  • The potential benefits and risks of participating.
  • The length of the study.
  • Contact information for the researcher and/or sponsor.
  • Reiteration of the participant’s right to withdraw from the research project at any time without penalty.
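To make the checklist above concrete, here is a small, hypothetical sketch (Python; the element names are illustrative, not a formal standard) that flags consent-form elements still missing before a study goes live:

```python
# Elements a consent form is expected to cover (illustrative labels only).
REQUIRED_ELEMENTS = [
    "study_description",      # what the study is about and which methods are used
    "risks_and_benefits",     # potential risks and benefits of taking part
    "duration",               # how long participation will take
    "contact_information",    # researcher and/or sponsor contact details
    "right_to_withdraw",      # withdrawal at any time without penalty
]

def missing_consent_elements(consent_form: dict) -> list[str]:
    """Return the required elements that are absent or left blank."""
    return [e for e in REQUIRED_ELEMENTS if not consent_form.get(e)]

draft_form = {
    "study_description": "A 20-minute online survey on workplace communication.",
    "risks_and_benefits": "Minimal risk; no direct benefit beyond compensation.",
    "duration": "Approximately 20 minutes.",
    # contact_information and right_to_withdraw not yet filled in
}

print(missing_consent_elements(draft_form))
# ['contact_information', 'right_to_withdraw']
```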

Anonymity

Anonymity means that participants aren't identifiable in any way. This includes identifiers such as:

  • Email address
  • Photographs
  • Video footage

You need a way to anonymize research data so that it can't be traced back to individual participants. This may involve assigning each participant a numerical code or a new digital ID that can't be linked back to their original identity.
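One possible way to implement such a digital ID (a sketch only, not any platform's actual mechanism) is a keyed hash: a secret key, stored separately from the study data, is used to derive codes that cannot be traced back to participants without that key.

```python
import hashlib
import hmac
import secrets

# The secret key is generated once and stored separately from the study data
# (for example, with the data custodian); without it, the IDs below cannot be
# reproduced from participants' original identifiers.
SECRET_KEY = secrets.token_bytes(32)

def digital_id(original_identifier: str) -> str:
    """Derive a stable, non-reversible participant ID from an identifier."""
    digest = hmac.new(SECRET_KEY, original_identifier.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return "ID-" + digest[:10]

print(digital_id("participant@example.org"))   # e.g. 'ID-9f2c7a1b4e'
```

Note that this is pseudonymisation rather than true anonymisation: whoever holds the key can re-derive the link, so the key must be protected and destroyed once it is no longer needed.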

Confidentiality

Information gathered during a study must be kept confidential. Confidentiality helps to protect the privacy of research participants. It also ensures that their information isn't disclosed to unauthorized individuals.

Some ways to ensure confidentiality include the following (a small code sketch illustrating two of these points follows the list):

  • Using a secure server to store data.
  • Removing identifying information from databases that contain sensitive data.
  • Using a third-party company to process and manage research participant data.
  • Not keeping participant records for longer than necessary.
  • Avoiding discussion of research findings in public forums.
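As a small illustration of two items on this list, removing identifying information and limiting retention, the sketch below (Python, with hypothetical field names and a made-up 12-month retention period) strips sensitive keys from stored records and discards records older than the retention window:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical identifying fields and retention period; adjust to your own
# data management plan and institutional policy.
SENSITIVE_FIELDS = {"name", "email", "ip_address"}
RETENTION = timedelta(days=365)

def scrub_and_expire(records):
    """Drop sensitive fields and discard records past the retention window."""
    now = datetime.now(timezone.utc)
    kept = []
    for record in records:
        if now - record["collected_at"] > RETENTION:
            continue                                   # past retention: discard
        kept.append({k: v for k, v in record.items() if k not in SENSITIVE_FIELDS})
    return kept

records = [
    {"name": "A. Example", "email": "a@example.org", "ip_address": "203.0.113.7",
     "collected_at": datetime.now(timezone.utc) - timedelta(days=30), "response": 4},
    {"name": "B. Example", "email": "b@example.org", "ip_address": "203.0.113.8",
     "collected_at": datetime.now(timezone.utc) - timedelta(days=500), "response": 5},
]

print(scrub_and_expire(records))
# Only the recent record remains, with identifying fields removed.
```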

Potential for harm

The potential for harm is a crucial factor in deciding whether a research study should proceed. It can manifest in various forms, such as:

  • Psychological harm
  • Social harm
  • Physical harm

Conduct an ethical review to identify possible harms. Be prepared to explain how you’ll minimize these harms and what support is available in case they do happen.

Fair payment

One of the most crucial aspects of setting up a research study is deciding on fair compensation for your participants. Underpayment is a common ethical issue that shouldn't be overlooked. Properly rewarding participants' time is critical for boosting engagement and obtaining high-quality data. While Prolific requires a minimum payment of £6.00 / $8.00 per hour, there are other factors you need to consider when deciding on a fair payment.

First, check your institution's reimbursement guidelines to see if they already have a minimum or maximum hourly rate. You can also use the national minimum wage as a reference point.

Next, think about the amount of work you're asking participants to do. The level of effort required for a task, such as producing a video recording versus a short survey, should correspond with the reward offered.

You also need to consider the population you're targeting. To attract research subjects with specific characteristics or high-paying jobs, you may need to offer more as an incentive.

We recommend a minimum payment of £9.00 / $12.00 per hour, but we understand that payment rates can vary depending on a range of factors. Whatever payment you choose should reflect the amount of effort participants are required to put in and be fair to everyone involved.
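As a rough arithmetic sketch (Python, using the dollar figures quoted above purely as examples), a per-participant reward can be computed from the estimated completion time and a chosen hourly rate, with the platform minimum acting as a floor:

```python
def reward_for_task(minutes: int,
                    hourly_rate: float = 12.00,            # example rate ($/hour) quoted above
                    platform_minimum_rate: float = 8.00) -> float:
    """Return a per-participant reward in the same currency as the rates."""
    hours = minutes / 60
    reward = hours * hourly_rate
    floor = hours * platform_minimum_rate
    return round(max(reward, floor), 2)

# A 15-minute survey at $12/hour:
print(reward_for_task(15))                        # 3.0
# A 40-minute video-recording task, rewarded at a higher rate for the extra effort:
print(reward_for_task(40, hourly_rate=18.00))     # 12.0
```

Whatever figures you use, check them against your institution's guidelines and the platform's current minimum before launching the study.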

Ethical research made easy with Prolific

At Prolific, we believe in making ethical research easy and accessible. The findings from the Fairwork Cloudwork report speak for themselves. Prolific was given the top score out of all competitors for minimum standards of fair work.

With over 25,000 researchers in our community, we're leading the way in revolutionizing the research industry. If you're interested in learning more about how we can support your research journey, sign up to get started now.


Scientific Research in Information Systems, pp. 197–214

Ethical Considerations in Research

  • Jan Recker, ORCID: orcid.org/0000-0002-2072-5792
  • First Online: 22 October 2021

Part of the book series: Progress in IS (PROIS)

This chapter draws attention to ethical considerations as they pertain to research in information systems (IS). Ethics define the principles of right and wrong conduct in the community of IS scholars. This chapter discusses the role of ethics in IS research, the difficulty of acting ethically in research, and presents guidelines for ethical conduct in performing research and publishing research.




Recker, J. (2021). Ethical Considerations in Research. In: Scientific Research in Information Systems. Progress in IS. Springer, Cham. https://doi.org/10.1007/978-3-030-85436-2_7



Chapter Three: Research Design and Methodology

3.11 Ethical Considerations

Research ethics can be defined as the moral principles that researchers should adhere to when conducting studies (Greener, 2008). Research ethics matter because research participants and the researcher have different interests in the study; adherence to research ethics is therefore intended to protect the interests of the research participants (Greener, 2008). Professional bodies and university research committees require researchers to comply with these principles before their studies are approved (Howitt and Cramer, 2011), and failure to comply can result in the researcher being reported to professional bodies for penalties (Howitt and Cramer, 2011). Complying with research ethics can also enhance the credibility of the study in the eyes of the public and the research participants (Greener, 2008); indeed, targeted participants are more willing to take part in a study when the researcher complies with research ethics (Greener, 2008). It is therefore of paramount importance for the researcher to act ethically throughout the study. Existing research ethics were complied with in this study, and the next sections explain how.

The university requires students to obtain ethical clearance before proceeding with data collection (Greener, 2008). The researcher completed an ethical clearance form, and the university's research committee granted clearance upon being satisfied with the researcher's ethical conduct, giving the researcher the go-ahead to proceed with the research. Research participants ought to be given adequate information about the study so that they can make informed decisions about participating (Bhattacherjee, 2012); indeed, it is unethical for a researcher to coerce participants into taking part or to trick them into participating (Bhattacherjee, 2012). The research participants in this study were given all the pertinent information about the study through a cover letter attached to the questionnaire, so that they could make informed decisions about participating. The cover letter helped the participants understand the purpose of the study, the fact that participation was voluntary, and that their rights to confidentiality and anonymity would be respected. It also made clear that participants had the right to withdraw from the study at any time if they felt their rights were being violated.

Respecting anonymity entails concealing the identity of the research participants in the research report, while confidentiality entails respecting the privacy of respondents by keeping their private information secure (Saunders et al., 2009). Saunders et al. (2009) argue that the rights of research participants ought to be respected in the research report by protecting their confidentiality and anonymity. No names of research participants or their companies are reported in the research report, in order to conceal their identity, and the participants' private information is likewise not reported, in order to protect them.

Researchers are required not to expose research participants to any form of harm (Howitt and Cramer, 2011). The harm can be either physical or emotional (Kumar, 2011). There was no possibility of physical harm to the participants in this study; however, emotional harm was a real possibility through sensitive questions. Pilot testing of the research instrument was therefore aimed, in part, at eliminating sensitive questions from the instrument. Thus, the study avoided emotional harm to the respondents.

Researchers are also expected to be honest during the course of conducting a study (Greener, 2008). Honest conduct includes recording data objectively, using appropriate research methods, and not misleading readers about the research findings (Greener, 2008). Researcher bias in data collection was reduced by using self-administered questionnaires: the research participants completed the questionnaires on their own, without undue influence from the researcher. Respondent bias was limited by the use of closed-ended questions. Suitable research methods were employed in the study, and all of them are justified by what is reported in the existing literature on research design and methodology. Finally, the reported findings are the true outcome of the quantitative analysis employed in the study; they have not been falsified.

3.12 Conclusion

This chapter outlined the research design and methodology used in the study. The study used explanatory and descriptive research designs, together with the quantitative research methodology that is compatible with both. The survey research strategy guided data collection: a questionnaire compatible with the survey strategy and with the quantitative and qualitative research approaches was used to collect data from a sample of 30 contractors, 50 consultants, and 10 clients selected at random. Two quantitative analysis methods, descriptive and inferential analysis, were used to make sense of the data collected. The results of the quantitative and qualitative analyses are presented and discussed in the next chapter.

