The scientific method

Introduction

  • Make an observation.
  • Ask a question.
  • Form a hypothesis, or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions. (A minimal code sketch of this loop follows below.)
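These steps describe a loop more than a straight line. As a playful, hedged illustration (not part of the original article), the Python sketch below strings the steps together; every function name in it is a made-up placeholder supplied by the caller, and the toaster scenario anticipates the worked example in the next section.

```python
# A rough sketch of the scientific method as an iterative loop.
# Every function here is a hypothetical placeholder, not part of any real library.

def run_investigation(observe, propose_hypothesis, test_prediction, max_rounds=5):
    """Observe, hypothesize, predict, test, then refine or replace the hypothesis."""
    observation = observe()                                   # 1. make an observation
    print(f"Question: why do we observe '{observation}'?")    # 2. ask a question
    rejected = []                                             # hypotheses that failed their tests
    hypothesis = propose_hypothesis(observation, rejected)    # 3. form a testable explanation
    for _ in range(max_rounds):
        # 4-5. make a prediction from the hypothesis and test it
        if test_prediction(hypothesis):
            return hypothesis          # supported: keep it, confirm it, or refine it further
        rejected.append(hypothesis)    # not supported: record it and move on
        hypothesis = propose_hypothesis(observation, rejected)   # 6. iterate
    return None                        # no supported hypothesis found yet


# Toy usage with the toaster example from the next section:
candidate_hypotheses = ["the outlet is broken", "a wire inside the toaster is broken"]
answer = run_investigation(
    observe=lambda: "the toaster won't toast",
    propose_hypothesis=lambda obs, rejected: candidate_hypotheses[len(rejected)],
    test_prediction=lambda h: h == "a wire inside the toaster is broken",
)
print(answer)
```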

Scientific method example: Failure to toast

1. Make an observation.

  • Observation: the toaster won't toast.

2. Ask a question.

  • Question: Why won't my toaster toast?

3. Propose a hypothesis.

  • Hypothesis: Maybe the outlet is broken.

4. Make predictions.

  • Prediction: If I plug the toaster into a different outlet, then it will toast the bread.

5. Test the predictions.

  • Test of prediction: Plug the toaster into a different outlet and try again.
  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

6. Iterate.

  • Iteration time!
  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.

Six Steps of the Scientific Method

Learn What Makes Each Stage Important

The scientific method is a systematic way of learning about the world around us and answering questions. The key difference between the scientific method and other ways of acquiring knowledge is that the scientific method involves forming a hypothesis and then testing it with an experiment.

The Six Steps

The number of steps can vary from one description to another (mainly when data and analysis are split into separate steps), but the following is a fairly standard list of the six scientific method steps you are expected to know for any science class:

  • Purpose/Question: Ask a question.
  • Research: Conduct background research. Write down your sources so you can cite your references. In the modern era, a lot of your research may be conducted online. Scroll to the bottom of articles to check the references. Even if you can't access the full text of a published article, you can usually view the abstract to see the summary of other experiments. Interview experts on a topic. The more you know about a subject, the easier it will be to conduct your investigation.
  • Hypothesis: Propose a hypothesis. This is a sort of educated guess about what you expect. It is a statement used to predict the outcome of an experiment. Usually, a hypothesis is written in terms of cause and effect. Alternatively, it may describe the relationship between two phenomena. One type of hypothesis is the null hypothesis, or the no-difference hypothesis. This is an easy type of hypothesis to test because it assumes changing a variable will have no effect on the outcome. In reality, you probably expect a change, but rejecting a hypothesis may be more useful than accepting one.
  • Experiment: Design and perform an experiment to test your hypothesis. An experiment has an independent and a dependent variable. You change or control the independent variable and record the effect it has on the dependent variable. It's important to change only one variable in an experiment rather than try to combine the effects of several variables at once. For example, if you want to test the effects of light intensity and fertilizer concentration on the growth rate of a plant, you're really looking at two separate experiments.
  • Data/Analysis: Record observations and analyze the meaning of the data. Often, you'll prepare a table or graph of the data. Don't throw out data points you think are bad or that don't support your predictions. Some of the most incredible discoveries in science were made because the data looked wrong! Once you have the data, you may need to perform a mathematical analysis to support or refute your hypothesis. (A minimal worked example of the hypothesis and analysis steps appears after this list.)
  • Conclusion: Conclude whether to accept or reject your hypothesis. There is no right or wrong outcome to an experiment, so either result is fine. Accepting a hypothesis does not necessarily mean it's correct! Sometimes repeating an experiment may give a different result. In other cases, a hypothesis may predict an outcome, yet you might draw an incorrect conclusion. Communicate your results. The results may be compiled into a lab report or formally submitted as a paper. Whether you accept or reject the hypothesis, you likely learned something about the subject and may wish to revise the original hypothesis or form a new one for a future experiment.
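To make the hypothesis, experiment, and data/analysis steps concrete, here is a minimal hedged sketch with invented plant-growth numbers; it assumes `scipy` is installed and is not taken from the original article.

```python
# Hypothetical example: does fertilizer concentration affect plant growth?
# Null (no-difference) hypothesis: changing the concentration has no effect on growth.
from scipy import stats

# Invented heights (cm) after four weeks. Only the fertilizer concentration
# (the independent variable) differs; light, water, and soil are held constant.
growth_low_fertilizer = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]
growth_high_fertilizer = [5.0, 4.8, 5.3, 4.7, 5.1, 4.9]

# The dependent variable is growth. A two-sample t-test asks how likely the
# observed difference in means would be if the null hypothesis were true.
result = stats.ttest_ind(growth_low_fertilizer, growth_high_fertilizer)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

if result.pvalue < 0.05:
    print("Reject the null hypothesis: fertilizer concentration appears to matter.")
else:
    print("Fail to reject the null hypothesis: no detectable effect in this sample.")
```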

When Are There Seven Steps?

Sometimes the scientific method is taught with seven steps instead of six. In this model, the first step of the scientific method is to make observations. Really, even if you don't make observations formally, you think about prior experiences with a subject in order to ask a question or solve a problem.

Formal observations are a type of brainstorming that can help you find an idea and form a hypothesis. Observe your subject and record everything about it. Include colors, timing, sounds, temperatures, changes, behavior, and anything that strikes you as interesting or significant.

When you design an experiment, you are controlling and measuring variables. There are three types of variables:

  • Controlled Variables: You can have as many controlled variables as you like. These are parts of the experiment that you try to keep constant throughout the experiment so that they won't interfere with your test. Writing down controlled variables is a good idea because it helps make your experiment reproducible, which is important in science! If you have trouble duplicating results from one experiment to another, there may be a controlled variable that you missed.
  • Independent Variable: This is the variable you control.
  • Dependent Variable: This is the variable you measure. It is called the dependent variable because it depends on the independent variable. (A short sketch of these three roles appears after this list.)
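As a small hedged sketch (invented example, not from the article), the three roles might be written down like this when planning the plant experiment mentioned earlier; the field names are arbitrary.

```python
# Hypothetical plan for the plant-growth experiment, with each variable
# labeled by the role it plays in the design.
experiment_plan = {
    "independent_variable": "fertilizer concentration (low vs. high)",  # what you change
    "dependent_variable": "plant height after four weeks (cm)",         # what you measure
    "controlled_variables": [                                           # what you keep constant
        "light exposure (hours per day)",
        "watering schedule",
        "soil type and pot size",
        "temperature",
    ],
}

# Writing the controlled variables down helps make the experiment reproducible.
for variable in experiment_plan["controlled_variables"]:
    print(f"Keep constant: {variable}")
```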

Scientific Method Steps in Psychology Research

Steps, Uses, and Key Terms

How do researchers investigate psychological phenomena? They utilize a process known as the scientific method to study different aspects of how people think and behave.

When conducting research, the scientific method steps to follow are:

  • Observe what you want to investigate
  • Ask a research question and make predictions
  • Test the hypothesis and collect data
  • Examine the results and draw conclusions
  • Report and share the results 

This process not only allows scientists to investigate and understand different psychological phenomena but also provides researchers and others a way to share and discuss the results of their studies.

Generally, there are five main steps in the scientific method, although some may break down this process into six or seven steps. An additional step in the process can also include developing new research questions based on your findings.

What Is the Scientific Method?

What is the scientific method and how is it used in psychology?

The scientific method consists of five steps. It is essentially a step-by-step process that researchers can follow to determine if there is some type of relationship between two or more variables.

By knowing the steps of the scientific method, you can better understand the process researchers go through to arrive at conclusions about human behavior.

Scientific Method Steps

While research studies can vary, these are the basic steps that psychologists and scientists use when investigating human behavior.

The following are the scientific method steps:

Step 1. Make an Observation

Before a researcher can begin, they must choose a topic to study. Once an area of interest has been chosen, the researchers must then conduct a thorough review of the existing literature on the subject. This review will provide valuable information about what has already been learned about the topic and what questions remain to be answered.

A literature review might involve looking at a considerable amount of written material from both books and academic journals dating back decades.

The relevant information collected by the researcher will be presented in the introduction section of the final published study results. This background material will also help the researcher with the first major step in conducting a psychology study: formulating a hypothesis.

Step 2. Ask a Question

Once a researcher has observed something and gained some background information on the topic, the next step is to ask a question. The researcher will form a hypothesis, which is an educated guess about the relationship between two or more variables.

For example, a researcher might ask a question about the relationship between sleep and academic performance: Do students who get more sleep perform better on tests at school?

In order to formulate a good hypothesis, it is important to think about different questions you might have about a particular topic.

You should also consider how you could investigate the causes. Falsifiability is an important part of any valid hypothesis. In other words, if a hypothesis is false, there must be a way for scientists to demonstrate that it is false.

Step 3. Test Your Hypothesis and Collect Data

Once you have a solid hypothesis, the next step of the scientific method is to put this hunch to the test by collecting data. The exact methods used to investigate a hypothesis depend on exactly what is being studied. There are two basic forms of research that a psychologist might utilize: descriptive research or experimental research.

Descriptive research is typically used when it would be difficult or even impossible to manipulate the variables in question. Examples of descriptive research include case studies, naturalistic observation, and correlational studies. Phone surveys, often used by marketers, are one example of descriptive research.

Correlational studies are quite common in psychology research. While they do not allow researchers to determine cause-and-effect, they do make it possible to spot relationships between different variables and to measure the strength of those relationships. 
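As a hedged illustration with invented numbers (not data from the article), a simple correlational analysis might look like this; `scipy` is assumed to be available.

```python
# Hypothetical correlational study: hours of sleep vs. test score.
# Correlation measures the strength and direction of a relationship,
# but it cannot establish cause and effect.
from scipy import stats

hours_of_sleep = [5.0, 6.5, 7.0, 7.5, 8.0, 6.0, 9.0, 5.5]   # invented data
test_scores = [62, 70, 74, 78, 85, 68, 88, 64]

r, p_value = stats.pearsonr(hours_of_sleep, test_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# A large positive r means the two variables tend to rise together, but it does not
# tell us whether more sleep causes higher scores -- that requires an experiment.
```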

Experimental research is used to explore cause-and-effect relationships between two or more variables. This type of research involves systematically manipulating an independent variable and then measuring the effect that it has on a defined dependent variable.

One of the major advantages of this method is that it allows researchers to determine whether changes in one variable actually cause changes in another.

While psychology experiments can be quite complex, even a simple experiment allows researchers to determine cause-and-effect relationships between variables. Most simple experiments use a control group (those who do not receive the treatment) and an experimental group (those who do receive the treatment).
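The sketch below is a hypothetical illustration (not from the article) of the random assignment that a simple experiment relies on; the participant labels are invented.

```python
# Hypothetical random assignment of participants to the two groups of a simple experiment.
import random

participants = [f"participant_{i}" for i in range(1, 21)]   # 20 invented participants
random.shuffle(participants)                                # randomize the order

midpoint = len(participants) // 2
experimental_group = participants[:midpoint]    # receives the treatment
control_group = participants[midpoint:]         # does not receive the treatment

print("Experimental group:", experimental_group)
print("Control group:", control_group)
# Random assignment spreads pre-existing differences between people evenly across the
# groups, so a difference in outcomes can be attributed to the independent variable.
```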

Step 4. Examine the Results and Draw Conclusions

Once a researcher has designed the study and collected the data, it is time to examine this information and draw conclusions about what has been found. Using statistics, researchers can summarize the data, analyze the results, and draw conclusions based on this evidence.

So how does a researcher decide what the results of a study mean? Not only can statistical analysis support (or refute) the researcher’s hypothesis; it can also be used to determine if the findings are statistically significant.

When results are said to be statistically significant, it means that it is unlikely that these results are due to chance.

Based on these observations, researchers must then determine what the results mean. In some cases, an experiment will support a hypothesis, but in other cases, it will fail to support the hypothesis.

So what happens if the results of a psychology experiment do not support the researcher's hypothesis? Does this mean that the study was worthless?

Just because the findings fail to support the hypothesis does not mean that the research is not useful or informative. In fact, such research plays an important role in helping scientists develop new questions and hypotheses to explore in the future.

After conclusions have been drawn, the next step is to share the results with the rest of the scientific community. This is an important part of the process because it contributes to the overall knowledge base and can help other scientists find new research avenues to explore.

Step 5. Report the Results

The final step in a psychology study is to report the findings. This is often done by writing up a description of the study and publishing the article in an academic or professional journal. The results of psychological studies can be seen in peer-reviewed journals such as Psychological Bulletin, the Journal of Social Psychology, Developmental Psychology, and many others.

The structure of a journal article follows a specified format that has been outlined by the American Psychological Association (APA). In these articles, researchers:

  • Provide a brief history and background on previous research
  • Present their hypothesis
  • Identify who participated in the study and how they were selected
  • Provide operational definitions for each variable
  • Describe the measures and procedures that were used to collect data
  • Explain how the information collected was analyzed
  • Discuss what the results mean

Why is such a detailed record of a psychological study so important? By clearly explaining the steps and procedures used throughout the study, other researchers can then replicate the results. The editorial process employed by academic and professional journals ensures that each article that is submitted undergoes a thorough peer review, which helps ensure that the study is scientifically sound.

Once published, the study becomes another piece of the existing puzzle of our knowledge base on that topic.

Here is a review of some key terms and definitions that you should be familiar with:

  • Falsifiable: The variables can be measured so that if a hypothesis is false, it can be proven false
  • Hypothesis: An educated guess about the possible relationship between two or more variables
  • Variable: A factor or element that can change in observable and measurable ways
  • Operational definition: A full description of exactly how variables are defined, how they will be manipulated, and how they will be measured

Uses for the Scientific Method

The goals of psychological studies are to describe, explain, predict, and perhaps influence mental processes or behaviors. In order to do this, psychologists utilize the scientific method to conduct psychological research. The scientific method is a set of principles and procedures that are used by researchers to develop questions, collect data, and reach conclusions.

Goals of Scientific Research in Psychology

Researchers seek not only to describe behaviors and explain why these behaviors occur; they also strive to create research that can be used to predict and even change human behavior.

Psychologists and other social scientists regularly propose explanations for human behavior. On a more informal level, people make judgments about the intentions, motivations, and actions of others on a daily basis.

While the everyday judgments we make about human behavior are subjective and anecdotal, researchers use the scientific method to study psychology in an objective and systematic way. The results of these studies are often reported in popular media, which leads many to wonder just how or why researchers arrived at the conclusions they did.

Examples of the Scientific Method

Now that you're familiar with the scientific method steps, it's useful to see how each step could work with a real-life example.

Say, for instance, that researchers set out to discover what the relationship is between psychotherapy and anxiety.

  • Step 1. Make an observation: The researchers choose to focus their study on adults ages 25 to 40 with generalized anxiety disorder.
  • Step 2. Ask a question: The question they want to answer in their study is: Do weekly psychotherapy sessions reduce symptoms in adults ages 25 to 40 with generalized anxiety disorder?
  • Step 3. Test your hypothesis: Researchers collect data on participants' anxiety symptoms. They work with therapists to create a consistent program that all participants undergo. Group 1 may attend therapy once per week, whereas group 2 does not attend therapy.
  • Step 4. Examine the results: Participants record their symptoms and any changes over a period of three months. After this period, people in group 1 report significant improvements in their anxiety symptoms, whereas those in group 2 report no significant changes. (A minimal analysis sketch for this step appears after this list.)
  • Step 5. Report the results: Researchers write a report that includes their hypothesis, information on participants, variables, procedure, and conclusions drawn from the study. In this case, they say that "Weekly therapy sessions are shown to reduce anxiety symptoms in adults ages 25 to 40."
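To make step 4 concrete, here is a hedged analysis sketch with invented symptom scores (none of these numbers come from the study described above); it assumes `scipy` is installed and compares the average change in each group.

```python
# Hypothetical anxiety-symptom scores (higher = more severe) before and after
# three months, for the therapy group (group 1) and the no-therapy group (group 2).
from scipy import stats

group1_before = [28, 31, 26, 30, 27, 29]   # invented baseline scores
group1_after = [18, 22, 17, 21, 19, 20]    # after weekly psychotherapy
group2_before = [29, 27, 30, 28, 31, 26]
group2_after = [28, 27, 29, 27, 30, 26]    # no therapy

# Change score for each participant (negative = improvement).
change1 = [after - before for before, after in zip(group1_before, group1_after)]
change2 = [after - before for before, after in zip(group2_before, group2_after)]

print(f"Mean change, therapy group: {sum(change1) / len(change1):.1f}")
print(f"Mean change, no-therapy group: {sum(change2) / len(change2):.1f}")

result = stats.ttest_ind(change1, change2)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value indicates that the larger improvement in the therapy group is
# unlikely to be due to chance alone, which would support the hypothesis from step 2.
```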

Of course, there are many details that go into planning and executing a study such as this. But this general outline gives you an idea of how an idea is formulated and tested, and how researchers arrive at results using the scientific method.

A Beginner's Guide to Starting the Research Process

When you have to write a thesis or dissertation , it can be hard to know where to begin, but there are some clear steps you can follow.

The research process often begins with a very broad idea for a topic you’d like to know more about. You do some preliminary research to identify a problem. After refining your research questions, you can lay out the foundations of your research design, leading to a proposal that outlines your ideas and plans.

This article takes you through the first steps of the research process, helping you narrow down your ideas and build up a strong foundation for your research project.

Table of contents

  • Step 1: Choose your topic
  • Step 2: Identify a problem
  • Step 3: Formulate research questions
  • Step 4: Create a research design
  • Step 5: Write a research proposal
  • Other interesting articles

First you have to come up with some ideas. Your thesis or dissertation topic can start out very broad. Think about the general area or field you’re interested in—maybe you already have specific research interests based on classes you’ve taken, or maybe you had to consider your topic when applying to graduate school and writing a statement of purpose .

Even if you already have a good sense of your topic, you’ll need to read widely to build background knowledge and begin narrowing down your ideas. Conduct an initial literature review to begin gathering relevant sources. As you read, take notes and try to identify problems, questions, debates, contradictions and gaps. Your aim is to narrow down from a broad area of interest to a specific niche.

Make sure to consider the practicalities: the requirements of your programme, the amount of time you have to complete the research, and how difficult it will be to access sources and data on the topic. Before moving on to the next stage, it’s a good idea to discuss the topic with your thesis supervisor.

>>Read more about narrowing down a research topic

So you’ve settled on a topic and found a niche—but what exactly will your research investigate, and why does it matter? To give your project focus and purpose, you have to define a research problem.

The problem might be a practical issue—for example, a process or practice that isn’t working well, an area of concern in an organization’s performance, or a difficulty faced by a specific group of people in society.

Alternatively, you might choose to investigate a theoretical problem—for example, an underexplored phenomenon or relationship, a contradiction between different models or theories, or an unresolved debate among scholars.

To put the problem in context and set your objectives, you can write a problem statement. This describes who the problem affects, why research is needed, and how your research project will contribute to solving it.

>>Read more about defining a research problem

Next, based on the problem statement, you need to write one or more research questions. These target exactly what you want to find out. They might focus on describing, comparing, evaluating, or explaining the research problem.

A strong research question should be specific enough that you can answer it thoroughly using appropriate qualitative or quantitative research methods. It should also be complex enough to require in-depth investigation, analysis, and argument. Questions that can be answered with “yes/no” or with easily available facts are not complex enough for a thesis or dissertation.

In some types of research, at this stage you might also have to develop a conceptual framework and testable hypotheses.

>>See research question examples

The research design is a practical framework for answering your research questions. It involves making decisions about the type of data you need, the methods you’ll use to collect and analyze it, and the location and timescale of your research.

There are often many possible paths you can take to answering your questions. The decisions you make will partly be based on your priorities. For example, do you want to determine causes and effects, draw generalizable conclusions, or understand the details of a specific context?

You need to decide whether you will use primary or secondary data and qualitative or quantitative methods. You also need to determine the specific tools, procedures, and materials you’ll use to collect and analyze your data, as well as your criteria for selecting participants or sources.

>>Read more about creating a research design

Finally, after completing these steps, you are ready to complete a research proposal. The proposal outlines the context, relevance, purpose, and plan of your research.

As well as outlining the background, problem statement, and research questions, the proposal should also include a literature review that shows how your project will fit into existing work on the topic. The research design section describes your approach and explains exactly what you will do.

You might have to get the proposal approved by your supervisor before you get started, and it will guide the process of writing your thesis or dissertation.

>>Read more about writing a research proposal

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Steps & Procedures for Conducting Scientific Research

A good scientist practices objectivity to avoid errors and personal biases that could invalidate the research. The entire scientific research process, from defining the research question to drawing conclusions about the data, requires the researcher to think critically and approach issues in an organized and systematic way. Scientific research can lead to the confirmation or re-evaluation of existing theories or to the development of entirely new theories.

Defining the Problem and Conducting Research

The first step of the scientific research process involves defining the problem and conducting background research. First, a broad topic is selected or a research question is asked. The scientist researches the question to determine whether it has already been answered, what conclusions other researchers have drawn, and what experiments have been carried out in relation to it. Research involves reading scholarly journal articles by other scientists, which can be found on the Internet via research databases and journals that publish academic articles online. During this research, the scientist narrows the broad topic down into a specific research question.

Craft a Hypothesis

The hypothesis is a concise, clear statement containing the main idea or purpose of your scientific research. A hypothesis must be testable and falsifiable, meaning there must be a way to test it so that it can be either supported or rejected based on the data. Crafting a hypothesis requires you to define the variables you're researching (e.g., who or what you're studying), explain them clearly, and state your position. When writing the hypothesis, scientists either make a specific cause-and-effect statement about the variables being studied or make a general statement about the relationship between them.

Design Experiment

Designing a scientific experiment involves planning how you're going to collect data. Often, the nature of the research question influences how the research will be conducted. For example, researching people's opinions naturally requires conducting surveys. When designing the experiment, the scientist selects where and how the sample being studied will be obtained, the dates and times for the experiment, the controls being used, and the other measures needed to carry out the research.

Collect Data

Data collection involves carrying out the experiment the scientist designed. During this process, the scientist records the data and completes the tasks required to conduct the experiment. In other words, the scientist goes to the research site, such as a laboratory or some other setting, to perform the experiment. The tasks involved vary depending on the type of research. For example, some experiments require bringing human participants in for a test, conducting observations in the natural environment, or experimenting with animal subjects.

Analyze Data

Analyzing data for the scientific research process involves bringing the data together and calculating statistics. Statistical tests can help the scientist understand the data better and tell whether a significant result is found. Calculating the statistics for a scientific research experiment uses both descriptive statistics and inferential statistics measures. Descriptive statistics describe the data and samples collected, such as sample averages or means, as well as the standard deviation that tells the scientists how the data is distributed. Inferential statistics involves conducting tests of significance that have the power to either confirm or reject the original hypothesis.
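As a brief hedged sketch (invented measurements, not from the article), the two kinds of statistics might be computed like this, using Python's built-in `statistics` module for the descriptive part and `scipy` for the test of significance.

```python
# Hypothetical reaction-time measurements (milliseconds) from an experiment.
import statistics
from scipy import stats

measurements = [512, 498, 530, 505, 521, 490, 515, 508]

# Descriptive statistics: summarize the sample itself.
sample_mean = statistics.mean(measurements)
sample_sd = statistics.stdev(measurements)       # how the data are spread out
print(f"mean = {sample_mean:.1f} ms, standard deviation = {sample_sd:.1f} ms")

# Inferential statistics: a test of significance against a reference value,
# here an invented previously reported population mean of 500 ms.
result = stats.ttest_1samp(measurements, popmean=500)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A small p-value would lead us to reject the original (null) hypothesis that the
# true mean is 500 ms; a larger one would leave that hypothesis standing.
```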

Draw Conclusions

After the data from an experiment is analyzed, the scientist examines the information and makes conclusions based on the findings. The scientist compares the results both to the original hypothesis and the conclusions of previous experiments by other researchers. When drawing conclusions, the scientist explains what the results mean and how to view them in the context of the scientific field or real-world environment, as well as making suggestions for future research.

Overview of the Research Process

Research is a rigorous problem-solving process whose ultimate goal is the discovery of new knowledge. Research may include the description of a new phenomenon, definition of a new relationship, development of a new model, or application of an existing principle or procedure to a new context. Research is systematic, logical, empirical, reductive, replicable and transmittable, and generalizable. Research can be classified according to a variety of dimensions: basic, applied, or translational; hypothesis generating or hypothesis testing; retrospective or prospective; longitudinal or cross-sectional; observational or experimental; and quantitative or qualitative. The ultimate success of a research project is heavily dependent on adequate planning.

September 8, 2021

Explaining How Research Works

We’ve heard “follow the science” a lot during the pandemic. But it seems science has taken us on a long and winding road filled with twists and turns, even changing directions at times. That’s led some people to feel they can’t trust science. But when what we know changes, it often means science is working.

Explaining the scientific process may be one way that science communicators can help maintain public trust in science. Placing research in the bigger context of its field and where it fits into the scientific process can help people better understand and interpret new findings as they emerge. A single study usually uncovers only a piece of a larger puzzle.

Questions about how the world works are often investigated on many different levels. For example, scientists can look at the different atoms in a molecule, cells in a tissue, or how different tissues or systems affect each other. Researchers often must choose one or a finite number of ways to investigate a question. It can take many different studies using different approaches to start piecing the whole picture together.

Sometimes it might seem like research results contradict each other. But often, studies are just looking at different aspects of the same problem. Researchers can also investigate a question using different techniques or timeframes. That may lead them to arrive at different conclusions from the same data.

Using the data available at the time of their study, scientists develop different explanations, or models. New information may mean that a novel model needs to be developed to account for it. The models that prevail are those that can withstand the test of time and incorporate new information. Science is a constantly evolving and self-correcting process.

Scientists gain more confidence about a model through the scientific process. They replicate each other’s work. They present at conferences. And papers undergo peer review, in which experts in the field review the work before it can be published in scientific journals. This helps ensure that the study is up to current scientific standards and maintains a level of integrity. Peer reviewers may find problems with the experiments or think different experiments are needed to justify the conclusions. They might even offer new ways to interpret the data.

It’s important for science communicators to consider which stage a study is at in the scientific process when deciding whether to cover it. Some studies are posted on preprint servers for other scientists to start weighing in on and haven’t yet been fully vetted. Results that haven't yet been subjected to scientific scrutiny should be reported on with care and context to avoid confusion or frustration from readers.

We’ve developed a one-page guide, "How Research Works: Understanding the Process of Science" to help communicators put the process of science into perspective. We hope it can serve as a useful resource to help explain why science changes—and why it’s important to expect that change. Please take a look and share your thoughts with us by sending an email to  [email protected].

Below are some additional resources:

  • Discoveries in Basic Science: A Perfectly Imperfect Process
  • When Clinical Research Is in the News
  • What is Basic Science and Why is it Important?
  • What is a Research Organism?
  • What Are Clinical Trials and Studies?
  • Basic Research – Digital Media Kit
  • Decoding Science: How Does Science Know What It Knows? (NAS)
  • Can Science Help People Make Decisions? (NAS)

Scientific Method

Science is an enormously successful human enterprise. The study of scientific method is the attempt to discern the activities by which that success is achieved. Among the activities often identified as characteristic of science are systematic observation and experimentation, inductive and deductive reasoning, and the formation and testing of hypotheses and theories. How these are carried out in detail can vary greatly, but characteristics like these have been looked to as a way of demarcating scientific activity from non-science, where only enterprises which employ some canonical form of scientific method or methods should be considered science (see also the entry on science and pseudo-science ). Others have questioned whether there is anything like a fixed toolkit of methods which is common across science and only science. Some reject privileging one view of method as part of rejecting broader views about the nature of science, such as naturalism (Dupré 2004); some reject any restriction in principle (pluralism).

Scientific method should be distinguished from the aims and products of science, such as knowledge, predictions, or control. Methods are the means by which those goals are achieved. Scientific method should also be distinguished from meta-methodology, which includes the values and justifications behind a particular characterization of scientific method (i.e., a methodology) — values such as objectivity, reproducibility, simplicity, or past successes. Methodological rules are proposed to govern method and it is a meta-methodological question whether methods obeying those rules satisfy given values. Finally, method is distinct, to some degree, from the detailed and contextual practices through which methods are implemented. The latter might range over: specific laboratory techniques; mathematical formalisms or other specialized languages used in descriptions and reasoning; technological or other material means; ways of communicating and sharing results, whether with other scientists or with the public at large; or the conventions, habits, enforced customs, and institutional controls over how and what science is carried out.

While it is important to recognize these distinctions, their boundaries are fuzzy. Hence, accounts of method cannot be entirely divorced from their methodological and meta-methodological motivations or justifications. Moreover, each aspect plays a crucial role in identifying methods. Disputes about method have therefore played out at the detail, rule, and meta-rule levels. Changes in beliefs about the certainty or fallibility of scientific knowledge, for instance (which is a meta-methodological consideration of what we can hope for methods to deliver), have meant different emphases on deductive and inductive reasoning, or on the relative importance attached to reasoning over observation (i.e., differences over particular methods). Beliefs about the role of science in society will affect the place one gives to values in scientific method.

The issue which has shaped debates over scientific method the most in the last half century is the question of how pluralist do we need to be about method? Unificationists continue to hold out for one method essential to science; nihilism is a form of radical pluralism, which considers the effectiveness of any methodological prescription to be so context sensitive as to render it not explanatory on its own. Some middle degree of pluralism regarding the methods embodied in scientific practice seems appropriate. But the details of scientific practice vary with time and place, from institution to institution, across scientists and their subjects of investigation. How significant are the variations for understanding science and its success? How much can method be abstracted from practice? This entry describes some of the attempts to characterize scientific method or methods, as well as arguments for a more context-sensitive approach to methods embedded in actual scientific practices.

1. Overview and organizing themes

This entry could have been given the title Scientific Methods and gone on to fill volumes, or it could have been extremely short, consisting of a brief summary rejection of the idea that there is any such thing as a unique Scientific Method at all. Both unhappy prospects are due to the fact that scientific activity varies so much across disciplines, times, places, and scientists that any account which manages to unify it all will either consist of overwhelming descriptive detail, or trivial generalizations.

The choice of scope for the present entry is more optimistic, taking a cue from the recent movement in philosophy of science toward a greater attention to practice: to what scientists actually do. This “turn to practice” can be seen as the latest form of studies of methods in science, insofar as it represents an attempt at understanding scientific activity, but through accounts that are neither meant to be universal and unified, nor singular and narrowly descriptive. To some extent, different scientists at different times and places can be said to be using the same method even though, in practice, the details are different.

Whether the context in which methods are carried out is relevant, or to what extent, will depend largely on what one takes the aims of science to be and what one’s own aims are. For most of the history of scientific methodology the assumption has been that the most important output of science is knowledge and so the aim of methodology should be to discover those methods by which scientific knowledge is generated.

Science was seen to embody the most successful form of reasoning (but which form?) to the most certain knowledge claims (but how certain?) on the basis of systematically collected evidence (but what counts as evidence, and should the evidence of the senses take precedence, or rational insight?) Section 2 surveys some of the history, pointing to two major themes. One theme is seeking the right balance between observation and reasoning (and the attendant forms of reasoning which employ them); the other is how certain scientific knowledge is or can be.

Section 3 turns to 20th century debates on scientific method. In the second half of the 20th century the epistemic privilege of science faced several challenges and many philosophers of science abandoned the reconstruction of the logic of scientific method. Views changed significantly regarding which functions of science ought to be captured and why. For some, the success of science was better identified with social or cultural features. Historical and sociological turns in the philosophy of science were made, with a demand that greater attention be paid to the non-epistemic aspects of science, such as sociological, institutional, material, and political factors. Even outside of those movements there was an increased specialization in the philosophy of science, with more and more focus on specific fields within science. The combined upshot was very few philosophers arguing any longer for a grand unified methodology of science. Sections 3 and 4 survey the main positions on scientific method in 20th century philosophy of science, focusing on where they differ in their preference for confirmation or falsification or for waiving the idea of a special scientific method altogether.

In recent decades, attention has primarily been paid to scientific activities traditionally falling under the rubric of method, such as experimental design and general laboratory practice, the use of statistics, the construction and use of models and diagrams, interdisciplinary collaboration, and science communication. Sections 4–6 attempt to construct a map of the current domains of the study of methods in science.

As these sections illustrate, the question of method is still central to the discourse about science. Scientific method remains a topic for education, for science policy, and for scientists. It arises in the public domain where the demarcation or status of science is at issue. Some philosophers have recently returned, therefore, to the question of what it is that makes science a unique cultural product. This entry will close with some of these recent attempts at discerning and encapsulating the activities by which scientific knowledge is achieved.

2. Historical review: Aristotle to Mill

Attempting a history of scientific method compounds the vast scope of the topic. This section briefly surveys the background to modern methodological debates. What can be called the classical view goes back to antiquity, and represents a point of departure for later divergences. [ 1 ]

We begin with a point made by Laudan (1968) in his historical survey of scientific method:

Perhaps the most serious inhibition to the emergence of the history of theories of scientific method as a respectable area of study has been the tendency to conflate it with the general history of epistemology, thereby assuming that the narrative categories and classificatory pigeon-holes applied to the latter are also basic to the former. (1968: 5)

To see knowledge about the natural world as falling under knowledge more generally is an understandable conflation. Histories of theories of method would naturally employ the same narrative categories and classificatory pigeon holes. An important theme of the history of epistemology, for example, is the unification of knowledge, a theme reflected in the question of the unification of method in science. Those who have identified differences in kinds of knowledge have often likewise identified different methods for achieving that kind of knowledge (see the entry on the unity of science ).

Different views on what is known, how it is known, and what can be known are connected. Plato distinguished the realms of things into the visible and the intelligible ( The Republic , 510a, in Cooper 1997). Only the latter, the Forms, could be objects of knowledge. The intelligible truths could be known with the certainty of geometry and deductive reasoning. What could be observed of the material world, however, was by definition imperfect and deceptive, not ideal. The Platonic way of knowledge therefore emphasized reasoning as a method, downplaying the importance of observation. Aristotle disagreed, locating the Forms in the natural world as the fundamental principles to be discovered through the inquiry into nature ( Metaphysics Z , in Barnes 1984).

Aristotle is recognized as giving the earliest systematic treatise on the nature of scientific inquiry in the western tradition, one which embraced observation and reasoning about the natural world. In the Prior and Posterior Analytics , Aristotle reflects first on the aims and then the methods of inquiry into nature. A number of features can be found which are still considered by most to be essential to science. For Aristotle, empiricism, careful observation (but passive observation, not controlled experiment), is the starting point. The aim is not merely recording of facts, though. For Aristotle, science ( epistêmê ) is a body of properly arranged knowledge or learning—the empirical facts, but also their ordering and display are of crucial importance. The aims of discovery, ordering, and display of facts partly determine the methods required of successful scientific inquiry. Also determinant is the nature of the knowledge being sought, and the explanatory causes proper to that kind of knowledge (see the discussion of the four causes in the entry on Aristotle on causality ).

In addition to careful observation, then, scientific method requires a logic as a system of reasoning for properly arranging, but also inferring beyond, what is known by observation. Methods of reasoning may include induction, prediction, or analogy, among others. Aristotle’s system (along with his catalogue of fallacious reasoning) was collected under the title the Organon. This title would be echoed in later works on scientific reasoning, such as Novum Organum by Francis Bacon, and Novum Organon Renovatum by William Whewell (see below). In Aristotle’s Organon reasoning is divided primarily into two forms, a rough division which persists into modern times. The division, known most commonly today as deductive versus inductive method, appears in other eras and methodologies as analysis/synthesis, non-ampliative/ampliative, or even confirmation/verification. The basic idea is there are two “directions” to proceed in our methods of inquiry: one away from what is observed, to the more fundamental, general, and encompassing principles; the other, from the fundamental and general to instances or implications of principles.

The basic aim and method of inquiry identified here can be seen as a theme running throughout the next two millennia of reflection on the correct way to seek after knowledge: carefully observe nature and then seek rules or principles which explain or predict its operation. The Aristotelian corpus provided the framework for a commentary tradition on scientific method independent of science itself (cosmos versus physics.) During the medieval period, figures such as Albertus Magnus (1206–1280), Thomas Aquinas (1225–1274), Robert Grosseteste (1175–1253), Roger Bacon (1214/1220–1292), William of Ockham (1287–1347), Andreas Vesalius (1514–1564), and Giacomo Zabarella (1533–1589) all worked to clarify the kind of knowledge obtainable by observation and induction, the source of justification of induction, and the best rules for its application. [ 2 ] Many of their contributions we now think of as essential to science (see also Laudan 1968). As Aristotle and Plato had employed a framework of reasoning either “to the forms” or “away from the forms”, medieval thinkers employed directions away from the phenomena or back to the phenomena. In analysis, a phenomenon was examined to discover its basic explanatory principles; in synthesis, explanations of a phenomenon were constructed from first principles.

During the Scientific Revolution these various strands of argument, experiment, and reason were forged into a dominant epistemic authority. The 16th–18th centuries were a period of not only dramatic advance in knowledge about the operation of the natural world—advances in mechanical, medical, biological, political, economic explanations—but also of self-awareness of the revolutionary changes taking place, and intense reflection on the source and legitimation of the method by which the advances were made. The struggle to establish the new authority included methodological moves. The Book of Nature, according to the metaphor of Galileo Galilei (1564–1642) or Francis Bacon (1561–1626), was written in the language of mathematics, of geometry and number. This motivated an emphasis on mathematical description and mechanical explanation as important aspects of scientific method. Through figures such as Henry More and Ralph Cudworth, a neo-Platonic emphasis on the importance of metaphysical reflection on nature behind appearances, particularly regarding the spiritual as a complement to the purely mechanical, remained an important methodological thread of the Scientific Revolution (see the entries on Cambridge platonists; Boyle; Henry More; Galileo).

In Novum Organum (1620), Bacon was critical of the Aristotelian method for leaping from particulars to universals too quickly. The syllogistic form of reasoning readily mixed those two types of propositions. Bacon aimed at the invention of new arts, principles, and directions. His method would be grounded in methodical collection of observations, coupled with correction of our senses (and particularly, directions for the avoidance of the Idols, as he called them, kinds of systematic errors to which naïve observers are prone.) The community of scientists could then climb, by a careful, gradual and unbroken ascent, to reliable general claims.

Bacon’s method has been criticized as impractical and too inflexible for the practicing scientist. Whewell would later criticize Bacon for paying too little attention to the practices of scientists. It is hard to find convincing examples of Bacon’s method being put into practice in the history of science, but there are a few who have been held up as real examples of 17th century scientific, inductive method, even if not in the rigid Baconian mold: figures such as Robert Boyle (1627–1691) and William Harvey (1578–1657) (see the entry on Bacon).

It is to Isaac Newton (1642–1727), however, that historians of science and methodologists have paid greatest attention. Given the enormous success of his Principia Mathematica and Opticks , this is understandable. The study of Newton’s method has had two main thrusts: the implicit method of the experiments and reasoning presented in the Opticks, and the explicit methodological rules given as the Rules for Philosophising (the Regulae) in Book III of the Principia . [ 3 ] Newton’s law of gravitation, the linchpin of his new cosmology, broke with explanatory conventions of natural philosophy, first for apparently proposing action at a distance, but more generally for not providing “true”, physical causes. The argument for his System of the World ( Principia , Book III) was based on phenomena, not reasoned first principles. This was viewed (mainly on the continent) as insufficient for proper natural philosophy. The Regulae counter this objection, re-defining the aims of natural philosophy by re-defining the method natural philosophers should follow. (See the entry on Newton’s philosophy .)

To his list of methodological prescriptions should be added Newton’s famous phrase “hypotheses non fingo” (commonly translated as “I frame no hypotheses”). The scientist was not to invent systems but infer explanations from observations, as Bacon had advocated. This would come to be known as inductivism. In the century after Newton, significant clarifications of the Newtonian method were made. Colin Maclaurin (1698–1746), for instance, reconstructed the essential structure of the method as having complementary analysis and synthesis phases, one proceeding away from the phenomena in generalization, the other from the general propositions to derive explanations of new phenomena. Denis Diderot (1713–1784) and editors of the Encyclopédie did much to consolidate and popularize Newtonianism, as did Francesco Algarotti (1712–1764). The emphasis was often the same, as much on the character of the scientist as on their process, a character which is still commonly assumed. The scientist is humble in the face of nature, not beholden to dogma, obeys only his eyes, and follows the truth wherever it leads. It was certainly Voltaire (1694–1778) and du Chatelet (1706–1749) who were most influential in propagating the latter vision of the scientist and their craft, with Newton as hero. Scientific method became a revolutionary force of the Enlightenment. (See also the entries on Newton, Leibniz, Descartes, Boyle, Hume, enlightenment, as well as Shank 2008 for a historical overview.)

Not all 18th century reflections on scientific method were so celebratory. Famous also are George Berkeley’s (1685–1753) attack on the mathematics of the new science, as well as the over-emphasis of Newtonians on observation; and David Hume’s (1711–1776) undermining of the warrant offered for scientific claims by inductive justification (see the entries on: George Berkeley; David Hume; Hume’s Newtonianism and Anti-Newtonianism). Hume’s problem of induction motivated Immanuel Kant (1724–1804) to seek new foundations for empirical method, though as an epistemic reconstruction, not as any set of practical guidelines for scientists. Both Hume and Kant influenced the methodological reflections of the next century, such as the debate between Mill and Whewell over the certainty of inductive inferences in science.

The debate between John Stuart Mill (1806–1873) and William Whewell (1794–1866) has become the canonical methodological debate of the 19th century. Although often characterized as a debate between inductivism and hypothetico-deductivism, the role of the two methods on each side is actually more complex. On the hypothetico-deductive account, scientists work to come up with hypotheses from which true observational consequences can be deduced—hence, hypothetico-deductive. Because Whewell emphasizes both hypotheses and deduction in his account of method, he can be seen as a convenient foil to the inductivism of Mill. However, equally if not more important to Whewell’s portrayal of scientific method is what he calls the “fundamental antithesis”. Knowledge is a product of the objective (what we see in the world around us) and subjective (the contributions of our mind to how we perceive and understand what we experience, which he called the Fundamental Ideas). Both elements are essential according to Whewell, and he was therefore critical of Kant for too much focus on the subjective, and John Locke (1632–1704) and Mill for too much focus on the senses. Whewell’s fundamental ideas can be discipline relative. An idea can be fundamental even if it is necessary for knowledge only within a given scientific discipline (e.g., chemical affinity for chemistry). This distinguishes fundamental ideas from the forms and categories of intuition of Kant. (See the entry on Whewell.)

Clarifying fundamental ideas would therefore be an essential part of scientific method and scientific progress. Whewell called this process “Discoverer’s Induction”. It was induction, following Bacon or Newton, but Whewell sought to revive Bacon’s account by emphasizing the role of ideas in the clear and careful formulation of inductive hypotheses. Whewell’s induction is not merely the collecting of objective facts. The subjective plays a role through what Whewell calls the Colligation of Facts, a creative act of the scientist, the invention of a theory. A theory is then confirmed by testing, where more facts are brought under the theory, called the Consilience of Inductions. Whewell felt that this was the method by which the true laws of nature could be discovered: clarification of fundamental concepts, clever invention of explanations, and careful testing. Mill, in his critique of Whewell, and others who have cast Whewell as a fore-runner of the hypothetico-deductivist view, seem to have under-estimated the importance of this discovery phase in Whewell’s understanding of method (Snyder 1997a,b, 1999). Down-playing the discovery phase would come to characterize methodology of the early 20th century (see section 3).

Mill, in his System of Logic, put forward a narrower view of induction as the essence of scientific method. For Mill, induction is the search first for regularities among events. Among those regularities, some will continue to hold for further observations, eventually gaining the status of laws. One can also look for regularities among the laws discovered in a domain, i.e., for a law of laws. Which “law law” will hold is time and discipline dependent and open to revision. One example is the Law of Universal Causation, and Mill put forward specific methods for identifying causes—now commonly known as Mill’s methods. These five methods look for circumstances which are common among the phenomena of interest, those which are absent when the phenomena are absent, or those for which both vary together. Mill’s methods are still seen as capturing basic intuitions about experimental methods for finding the relevant explanatory factors (System of Logic (1843); see the entry on Mill). The methods advocated by Whewell and Mill, in the end, look similar. Both involve inductive generalization to covering laws. They differ dramatically, however, with respect to the necessity of the knowledge arrived at; that is, at the meta-methodological level (see the entries on Whewell and Mill).

3. Logic of method and critical responses

The quantum and relativistic revolutions in physics in the early 20th century had a profound effect on methodology. Conceptual foundations of both theories were taken to show the defeasibility of even the most seemingly secure intuitions about space, time and bodies. Certainty of knowledge about the natural world was therefore recognized as unattainable. Instead a renewed empiricism was sought which rendered science fallible but still rationally justifiable.

Analyses of the reasoning of scientists emerged, according to which the aspects of scientific method which were of primary importance were the means of testing and confirming of theories. A distinction in methodology was made between the contexts of discovery and justification. The distinction could be used as a wedge between the particularities of where and how theories or hypotheses are arrived at, on the one hand, and the underlying reasoning scientists use (whether or not they are aware of it) when assessing theories and judging their adequacy on the basis of the available evidence, on the other. By and large, for most of the 20th century, philosophy of science focused on the second context, although philosophers differed on whether to focus on confirmation or refutation as well as on the many details of how confirmation or refutation could or could not be brought about. By the mid-20th century these attempts at defining the method of justification and the context distinction itself came under pressure. During the same period, philosophy of science developed rapidly, and from section 4 this entry will therefore shift from a primarily historical treatment of the scientific method towards a primarily thematic one.

Advances in logic and probability held out promise of the possibility of elaborate reconstructions of scientific theories and empirical method, the best example being Rudolf Carnap’s The Logical Structure of the World (1928). Carnap attempted to show that a scientific theory could be reconstructed as a formal axiomatic system—that is, a logic. That system could refer to the world because some of its basic sentences could be interpreted as observations or operations which one could perform to test them. The rest of the theoretical system, including sentences using theoretical or unobservable terms (like electron or force) would then either be meaningful because they could be reduced to observations, or they had purely logical meanings (called analytic, like mathematical identities). This has been referred to as the verifiability criterion of meaning. According to the criterion, any statement not either analytic or verifiable was strictly meaningless. Although the view was endorsed by Carnap in 1928, he would later come to see it as too restrictive (Carnap 1956). Another familiar version of this idea is the operationalism of Percy Williams Bridgman. In The Logic of Modern Physics (1927) Bridgman asserted that every physical concept could be defined in terms of the operations one would perform to verify the application of that concept. Making good on the operationalization of a concept even as simple as length, however, can easily become enormously complex (for measuring very small lengths, for instance) or impractical (measuring large distances like light years).

Carl Hempel’s (1950, 1951) criticisms of the verifiability criterion of meaning had enormous influence. He pointed out that universal generalizations, such as most scientific laws, were not strictly meaningful on the criterion. Verifiability and operationalism both seemed too restrictive to capture standard scientific aims and practice. The tenuous connection between these reconstructions and actual scientific practice was criticized in another way. In both approaches, scientific methods were recast in semantic and epistemological roles: measurements, for example, were looked to as ways of giving meanings to terms. The aim of the philosopher of science was not to understand the methods per se, but to use them to reconstruct theories, their meanings, and their relation to the world. When scientists perform these operations, however, they will not report that they are doing them to give meaning to terms in a formal axiomatic system. This disconnect between methodology and the details of actual scientific practice would seem to violate the empiricism the Logical Positivists and Bridgman were committed to. The view that methodology should correspond to practice (to some extent) has been called historicism, or intuitionism. We turn to these criticisms and responses in section 3.4. [ 4 ]

Positivism also had to contend with the recognition that a purely inductivist approach, along the lines of Bacon-Newton-Mill, was untenable. There was no pure observation, for starters. All observation was theory laden. Theory is required to make any observation; therefore not all theory can be derived from observation alone. (See the entry on theory and observation in science.) Even granting an observational basis, Hume had already pointed out that one could not deductively justify inductive conclusions without begging the question by presuming the success of the inductive method. Likewise, positivist attempts at analyzing how a generalization can be confirmed by observations of its instances were subject to a number of criticisms. Goodman (1965) and Hempel (1965) both point to paradoxes inherent in standard accounts of confirmation. Recent attempts at explaining how observations can serve to confirm a scientific theory are discussed in section 4 below.

The standard starting point for a non-inductive analysis of the logic of confirmation is known as the Hypothetico-Deductive (H-D) method. In its simplest form, a sentence of a theory which expresses some hypothesis is confirmed by its true consequences. As noted in section 2, this method had been advanced by Whewell in the 19th century, as well as Nicod (1924) and others in the 20th century. Often, Hempel’s (1966) description of the H-D method, illustrated by the case of Semmelweis’ inferential procedures in establishing the cause of childbed fever, has been presented as a key account of H-D as well as a foil for criticism of the H-D account of confirmation (see, for example, Lipton’s (2004) discussion of inference to the best explanation; also the entry on confirmation). Hempel described Semmelweis’ procedure as examining various hypotheses explaining the cause of childbed fever. Some hypotheses conflicted with observable facts and could be rejected as false immediately. Others needed to be tested experimentally by deducing which observable events should follow if the hypothesis were true (what Hempel called the test implications of the hypothesis), then conducting an experiment and observing whether or not the test implications occurred. If the experiment showed the test implication to be false, the hypothesis could be rejected. If the experiment showed the test implications to be true, however, this did not prove the hypothesis true. The confirmation of a test implication does not verify a hypothesis, though Hempel did allow that “it provides at least some support, some corroboration or confirmation for it” (Hempel 1966: 8). The degree of this support then depends on the quantity, variety and precision of the supporting evidence.
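
The logical structure Hempel describes can be made concrete in a small computational sketch. The following Python fragment is only an illustration: the function hd_test and the toy mortality figures are invented for the example and are not drawn from Hempel’s text or Semmelweis’s data. It displays the asymmetry the H-D account turns on: a failed test implication refutes the hypothesis, while a successful one lends it only some support.

```python
# A minimal sketch of hypothetico-deductive (H-D) testing, using invented
# numbers loosely inspired by the childbed fever case discussed above.

def hd_test(hypothesis_name: str, test_implication: str, observation: str) -> str:
    """Compare a deduced test implication with what was actually observed."""
    if observation != test_implication:
        # A false test implication deductively refutes the hypothesis.
        return f"'{hypothesis_name}' is rejected: its test implication failed."
    # A true test implication supports, but does not prove, the hypothesis.
    return f"'{hypothesis_name}' is supported (not proven) by this test."

# Hypothesis: cadaveric matter carried on doctors' hands causes childbed fever.
# Deduced test implication: mortality falls after chlorinated hand washing.
mortality_before, mortality_after = 0.12, 0.02        # hypothetical rates
implication = "mortality falls"
observed = "mortality falls" if mortality_after < mortality_before else "no change"

print(hd_test("cadaveric-matter hypothesis", implication, observed))
```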

Another approach that took off from the difficulties with inductive inference was Karl Popper’s critical rationalism or falsificationism (Popper 1959, 1963). Falsification is deductive and similar to H-D in that it involves scientists deducing observational consequences from the hypothesis under test. For Popper, however, the important point was not the degree of confirmation that successful prediction offered to a hypothesis. The crucial thing was the logical asymmetry between confirmation, based on inductive inference, and falsification, which can be based on a deductive inference. (This simple opposition was later questioned, by Lakatos, among others. See the entry on historicist theories of scientific rationality.)

Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent. Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing—but without implying that this is also a measure for the probability that it is true.

Popper was also motivated by his doubts about the scientific status of theories like the Marxist theory of history or psycho-analysis, and so wanted to demarcate between science and pseudo-science. Popper saw this as an importantly different distinction than demarcating science from metaphysics. The latter demarcation was the primary concern of many logical empiricists. Popper used the idea of falsification to draw a line instead between pseudo and proper science. Science was science because its method involved subjecting theories to rigorous tests which offered a high probability of failing and thus refuting the theory.

A commitment to the risk of failure was important. Avoiding falsification could be done all too easily. If a consequence of a theory is inconsistent with observations, an exception can be added by introducing auxiliary hypotheses designed explicitly to save the theory, so-called ad hoc modifications. This Popper saw done in pseudo-science where ad hoc theories appeared capable of explaining anything in their field of application. In contrast, science is risky. If observations showed the predictions from a theory to be wrong, the theory would be refuted. Hence, scientific hypotheses must be falsifiable. Not only must there exist some possible observation statement which could falsify the hypothesis or theory, were it observed (Popper called these the hypothesis’ potential falsifiers); it is also crucial to the Popperian scientific method that such falsifications be sincerely attempted on a regular basis.

The more potential falsifiers of a hypothesis, the more falsifiable it would be, and the more the hypothesis claimed. Conversely, hypotheses without falsifiers claimed very little or nothing at all. Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method. These would undermine the falsifiability of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. Popper’s own example was the observed motion of Uranus which originally did not agree with Newtonian predictions. The ad hoc hypothesis of an outer planet explained the disagreement and led to further falsifiable predictions. Popper sought to reconcile the view by blurring the distinction between falsifiable and not falsifiable, and speaking instead of degrees of testability (Popper 1985: 41f.).

From the 1960s on, sustained meta-methodological criticism emerged that drove philosophical focus away from scientific method. A brief look at those criticisms follows, with recommendations for further reading at the end of the entry.

Thomas Kuhn’s The Structure of Scientific Revolutions (1962) begins with a well-known shot across the bow for philosophers of science:

History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed. (1962: 1)

The image Kuhn thought needed transforming was the a-historical, rational reconstruction sought by many of the Logical Positivists, though Carnap and other positivists were actually quite sympathetic to Kuhn’s views. (See the entry on the Vienna Circle.) Kuhn shares with others of his contemporaries, such as Feyerabend and Lakatos, a commitment to a more empirical approach to philosophy of science. Namely, the history of science provides important data, and necessary checks, for philosophy of science, including any theory of scientific method.

The history of science reveals, according to Kuhn, that scientific development occurs in alternating phases. During normal science, the members of the scientific community adhere to the paradigm in place. Their commitment to the paradigm means a commitment to the puzzles to be solved and the acceptable ways of solving them. Confidence in the paradigm remains so long as steady progress is made in solving the shared puzzles. Method in this normal phase operates within a disciplinary matrix (Kuhn’s later concept of a paradigm) which includes standards for problem solving, and defines the range of problems to which the method should be applied. An important part of a disciplinary matrix is the set of values which provide the norms and aims for scientific method. The main values that Kuhn identifies are prediction, problem solving, simplicity, consistency, and plausibility.

An important by-product of normal science is the accumulation of puzzles which cannot be solved with resources of the current paradigm. Once accumulation of these anomalies has reached some critical mass, it can trigger a communal shift to a new paradigm and a new phase of normal science. Importantly, the values that provide the norms and aims for scientific method may have transformed in the meantime. Method may therefore be relative to discipline, time or place.

Feyerabend also identified the aims of science as progress, but argued that any methodological prescription would only stifle that progress (Feyerabend 1988). His arguments are grounded in re-examining accepted “myths” about the history of science. Heroes of science, like Galileo, are shown to be just as reliant on rhetoric and persuasion as they are on reason and demonstration. Others, like Aristotle, are shown to be far more reasonable and far-reaching in their outlooks than they are given credit for. As a consequence, the only rule that could provide what he took to be sufficient freedom was the vacuous “anything goes”. More generally, even the methodological restriction that science is the best way to pursue knowledge, and to increase knowledge, is too restrictive. Feyerabend suggested instead that science might, in fact, be a threat to a free society, because it and its myth had become so dominant (Feyerabend 1978).

An even more fundamental kind of criticism was offered by several sociologists of science from the 1970s onwards who rejected the methodology of providing philosophical accounts for the rational development of science and sociological accounts of the irrational mistakes. Instead, they adhered to a symmetry thesis on which any causal explanation of how scientific knowledge is established needs to be symmetrical in explaining truth and falsity, rationality and irrationality, success and mistakes, by the same causal factors (see, e.g., Barnes and Bloor 1982, Bloor 1991). Movements in the Sociology of Science, like the Strong Programme, or in the social dimensions and causes of knowledge more generally led to extended and close examination of detailed case studies in contemporary science and its history. (See the entries on the social dimensions of scientific knowledge and social epistemology .) Well-known examinations by Latour and Woolgar (1979/1986), Knorr-Cetina (1981), Pickering (1984), Shapin and Schaffer (1985) seem to bear out that it was social ideologies (on a macro-scale) or individual interactions and circumstances (on a micro-scale) which were the primary causal factors in determining which beliefs gained the status of scientific knowledge. As they saw it therefore, explanatory appeals to scientific method were not empirically grounded.

A late, and largely unexpected, criticism of scientific method came from within science itself. Beginning in the early 2000s, a number of scientists attempting to replicate the results of published experiments could not do so. There may be a close conceptual connection between reproducibility and method. For example, if reproducibility means that the same scientific methods ought to produce the same result, and all scientific results ought to be reproducible, then whatever it takes to reproduce a scientific result ought to be called scientific method. Space limits us to the observation that, insofar as reproducibility is a desired outcome of proper scientific method, it is not strictly a part of scientific method. (See the entry on reproducibility of scientific results.)

By the close of the 20th century the search for the scientific method was flagging. Nola and Sankey (2000b) could introduce their volume on method by remarking that “For some, the whole idea of a theory of scientific method is yester-year’s debate …”.

4. Statistical methods for hypothesis testing

Despite the many difficulties that philosophers encountered in trying to provide a clear methodology of confirmation (or refutation), important progress has nonetheless been made on understanding how observation can provide evidence for a given theory. Work in statistics has been crucial for understanding how theories can be tested empirically, and in recent decades a huge literature has developed that attempts to recast confirmation in Bayesian terms. Here these developments can be covered only briefly, and we refer to the entry on confirmation for further details and references.

Statistics has come to play an increasingly important role in the methodology of the experimental sciences from the 19th century onwards. At that time, statistics and probability theory took on a methodological role as an analysis of inductive inference, and attempts to ground the rationality of induction in the axioms of probability theory have continued throughout the 20th century and into the present. Developments in the theory of statistics itself, meanwhile, have had a direct and immense influence on the experimental method, including methods for measuring the uncertainty of observations such as the Method of Least Squares developed by Legendre and Gauss in the early 19th century, criteria for the rejection of outliers proposed by Peirce by the mid-19th century, and the significance tests developed by Gosset (a.k.a. “Student”), Fisher, Neyman & Pearson and others in the 1920s and 1930s (see, e.g., Swijtink 1987 for a brief historical overview; and also the entry on C.S. Peirce).
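
As a small illustration of how this statistical machinery enters experimental practice, the Method of Least Squares can be sketched in a few lines of Python. The data and variable names below are invented for the purpose of the example; the sketch only shows the core idea of choosing the line that minimizes the sum of squared residuals between observations and model.

```python
import numpy as np

# Minimal illustration of the Method of Least Squares: fit a straight line
# y = a*x + b to observations by minimizing the sum of squared residuals.
# The data points are invented for the example.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])      # roughly y = 2x, with noise

# Closed-form least-squares estimates of slope and intercept.
a = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - a * x.mean()

residuals = y - (a * x + b)
print(f"slope = {a:.3f}, intercept = {b:.3f}, "
      f"sum of squared residuals = {np.sum(residuals ** 2):.4f}")
```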

These developments within statistics then in turn led to a reflective discussion among both statisticians and philosophers of science on how to perceive the process of hypothesis testing: whether it was a rigorous statistical inference that could provide a numerical expression of the degree of confidence in the tested hypothesis, or if it should be seen as a decision between different courses of actions that also involved a value component. This led to a major controversy between Fisher on the one side and Neyman and Pearson on the other (see especially Fisher 1955, Neyman 1956 and Pearson 1955, and for analyses of the controversy, e.g., Howie 2002, Marks 2000, Lenhard 2006). On Fisher’s view, hypothesis testing was a methodology for when to accept or reject a statistical hypothesis, namely that a hypothesis should be rejected by evidence if this evidence would be unlikely relative to other possible outcomes, given the hypothesis were true. In contrast, on Neyman and Pearson’s view, the consequence of error also had to play a role when deciding between hypotheses. Introducing the distinction between the error of rejecting a true hypothesis (type I error) and accepting a false hypothesis (type II error), they argued that the consequences of each kind of error determine whether it is more important to avoid rejecting a true hypothesis or accepting a false one. Hence, Fisher aimed for a theory of inductive inference that enabled a numerical expression of confidence in a hypothesis. To him, the important point was the search for truth, not utility. In contrast, the Neyman-Pearson approach provided a strategy of inductive behaviour for deciding between different courses of action. Here, the important point was not whether a hypothesis was true, but whether one should act as if it was.
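
The contrast can be made concrete with a schematic example. The following Python sketch, which uses only the standard-library statistics module and entirely invented numbers, runs a one-sided z-test twice over: once in a Fisher-like spirit, reporting the p-value as graded evidence against the null hypothesis, and once in a Neyman-Pearson spirit, fixing the type I error rate in advance and treating the result as a decision, with the type II error rate computed for one specific alternative.

```python
from statistics import NormalDist

# Schematic contrast between Fisher-style evidence and a Neyman-Pearson
# decision rule, for a one-sided z-test of H0: mu = 0 against H1: mu > 0.
# All numbers are invented.
n, sample_mean, sigma = 25, 0.45, 1.0
z = sample_mean / (sigma / n ** 0.5)            # standardized test statistic

# Fisher: report the p-value and treat it as graded evidence against H0.
p_value = 1 - NormalDist().cdf(z)
print(f"Fisher: z = {z:.2f}, p = {p_value:.3f}")

# Neyman-Pearson: fix the type I error rate (alpha) in advance, then decide
# how to act; also consider the type II error rate (beta) at an alternative.
alpha = 0.05
z_critical = NormalDist().inv_cdf(1 - alpha)
decision = "reject H0" if z > z_critical else "do not reject H0"
mu_alternative = 0.5
beta = NormalDist().cdf(z_critical - mu_alternative / (sigma / n ** 0.5))
print(f"Neyman-Pearson: {decision} at alpha = {alpha}; "
      f"beta at mu = {mu_alternative} is {beta:.3f}")
```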

Similar discussions are found in the philosophical literature. On the one side, Churchman (1948) and Rudner (1953) argued that because scientific hypotheses can never be completely verified, a complete analysis of the methods of scientific inference includes ethical judgments in which the scientists must decide whether the evidence is sufficiently strong, or the probability sufficiently high, to warrant the acceptance of the hypothesis, which again will depend on the importance of making a mistake in accepting or rejecting the hypothesis. Others, such as Jeffrey (1956) and Levi (1960) disagreed and instead defended a value-neutral view of science on which scientists should bracket their attitudes, preferences, temperament, and values when assessing the correctness of their inferences. For more details on this value-free ideal in the philosophy of science and its historical development, see Douglas (2009) and Howard (2003). For a broad set of case studies examining the role of values in science, see e.g. Elliott & Richards 2017.

In recent decades, philosophical discussions of the evaluation of probabilistic hypotheses by statistical inference have largely focused on Bayesianism, which understands probability as a measure of a person’s degree of belief in an event, given the available information, and frequentism, which instead understands probability as a long-run frequency of a repeatable event. Hence, for Bayesians probabilities refer to a state of knowledge, whereas for frequentists probabilities refer to frequencies of events (see, e.g., Sober 2008, chapter 1 for a detailed introduction to Bayesianism and frequentism as well as to likelihoodism). Bayesianism aims at providing a quantifiable, algorithmic representation of belief revision, where belief revision is a function of prior beliefs (i.e., background knowledge) and incoming evidence. Bayesianism employs a rule based on Bayes’ theorem, a theorem of the probability calculus which relates conditional probabilities. The probability that a particular hypothesis is true is interpreted as a degree of belief, or credence, of the scientist. There will also be a probability and a degree of belief that a hypothesis will be true conditional on a piece of evidence (an observation, say) being true. Bayesianism prescribes that it is rational for the scientist to update their belief in the hypothesis to that conditional probability should it turn out that the evidence is, in fact, observed (see, e.g., Sprenger & Hartmann 2019 for a comprehensive treatment of Bayesian philosophy of science). Originating in the work of Neyman and Pearson, frequentism aims at providing the tools for reducing long-run error rates, such as the error-statistical approach developed by Mayo (1996) that focuses on how experimenters can avoid both type I and type II errors by building up a repertoire of procedures that detect errors if and only if they are present. Both Bayesianism and frequentism have developed over time, they are interpreted in different ways by their various proponents, and their relations to previous criticisms of attempts at defining scientific method are seen differently by proponents and critics. The literature, surveys, reviews and criticism in this area are vast and the reader is referred to the entries on Bayesian epistemology and confirmation.
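
The Bayesian updating rule just described can be illustrated with a minimal numerical sketch. All the numbers below are invented; the example only shows how, by Bayes’ theorem, a prior credence in a hypothesis H is revised into a posterior credence once a piece of evidence E is observed.

```python
# A minimal numerical sketch of Bayesian belief revision via Bayes' theorem:
# P(H | E) = P(E | H) * P(H) / P(E). All numbers are invented.

prior_h = 0.30            # prior degree of belief that hypothesis H is true
p_e_given_h = 0.80        # probability of evidence E if H is true
p_e_given_not_h = 0.10    # probability of E if H is false

# Total probability of observing E (law of total probability).
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Bayes' theorem: the rational post-observation credence in H.
posterior_h = p_e_given_h * prior_h / p_e
print(f"P(H) = {prior_h:.2f}  -->  P(H | E) = {posterior_h:.2f}")   # 0.30 --> 0.77
```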

5. Method in Practice

Attention to scientific practice, as we have seen, is not itself new. However, the turn to practice in the philosophy of science of late can be seen as a correction to the pessimism with respect to method in philosophy of science in later parts of the 20th century, and as an attempted reconciliation between sociological and rationalist explanations of scientific knowledge. Much of this work sees method as detailed and context-specific problem-solving procedures, and methodological analyses to be at the same time descriptive, critical and advisory (see Nickles 1987 for an exposition of this view). The following section contains a survey of some of these practice focuses. In this section we turn fully to topics rather than chronology.

5.1 Creative and exploratory practices

A problem with the distinction between the contexts of discovery and justification that figured so prominently in philosophy of science in the first half of the 20th century (see section 3) is that no such distinction can be clearly seen in scientific activity (see Arabatzis 2006). Thus, in recent decades, it has been recognized that study of conceptual innovation and change should not be confined to psychology and sociology of science; conceptual innovation and change are also important aspects of scientific practice which philosophy of science should address (see also the entry on scientific discovery). Looking for the practices that drive conceptual innovation has led philosophers to examine both the reasoning practices of scientists and the wide realm of experimental practices that are not directed narrowly at testing hypotheses, that is, exploratory experimentation.

Examining the reasoning practices of historical and contemporary scientists, Nersessian (2008) has argued that new scientific concepts are constructed as solutions to specific problems by systematic reasoning, and that analogy, visual representation and thought-experimentation are among the important reasoning practices employed. These ubiquitous forms of reasoning are reliable—but also fallible—methods of conceptual development and change. On her account, model-based reasoning consists of cycles of construction, simulation, evaluation and adaptation of models that serve as interim interpretations of the target problem to be solved. Often, this process will lead to modifications or extensions, and a new cycle of simulation and evaluation. However, Nersessian also emphasizes that

creative model-based reasoning cannot be applied as a simple recipe, is not always productive of solutions, and even its most exemplary usages can lead to incorrect solutions. (Nersessian 2008: 11)

Thus, while on the one hand she agrees with many previous philosophers that there is no logic of discovery, discoveries can derive from reasoned processes, such that a large and integral part of scientific practice is

the creation of concepts through which to comprehend, structure, and communicate about physical phenomena …. (Nersessian 1987: 11)

Similarly, work on heuristics for discovery and theory construction by scholars such as Darden (1991) and Bechtel & Richardson (1993) presents science as problem solving and investigates scientific problem solving as a special case of problem-solving in general. Drawing largely on cases from the biological sciences, much of their focus has been on reasoning strategies for the generation, evaluation, and revision of mechanistic explanations of complex systems.

Addressing another aspect of the context distinction, namely the traditional view that the primary role of experiments is to test theoretical hypotheses according to the H-D model, other philosophers of science have argued for additional roles that experiments can play. The notion of exploratory experimentation was introduced to describe experiments driven by the desire to obtain empirical regularities and to develop concepts and classifications in which these regularities can be described (Steinle 1997, 2002; Burian 1997; Waters 2007). However, the difference between theory driven experimentation and exploratory experimentation should not be seen as a sharp distinction. Theory driven experiments are not always directed at testing hypotheses, but may also be directed at various kinds of fact-gathering, such as determining numerical parameters. Vice versa, exploratory experiments are usually informed by theory in various ways and are therefore not theory-free. Instead, in exploratory experiments phenomena are investigated without first limiting the possible outcomes of the experiment on the basis of extant theory about the phenomena.

The development of high throughput instrumentation in molecular biology and neighbouring fields has given rise to a special type of exploratory experimentation that collects and analyses very large amounts of data, and these new ‘omics’ disciplines are often said to represent a break with the ideal of hypothesis-driven science (Burian 2007; Elliott 2007; Waters 2007; O’Malley 2007) and instead described as data-driven research (Leonelli 2012; Strasser 2012) or as a special kind of “convenience experimentation” in which many experiments are done simply because they are extraordinarily convenient to perform (Krohs 2012).

5.2 Computer methods and ‘new ways’ of doing science

The field of omics just described is possible because of the ability of computers to process, in a reasonable amount of time, the huge quantities of data required. Computers allow for more elaborate experimentation (higher speed, better filtering, more variables, sophisticated coordination and control), but also, through modelling and simulations, might constitute a form of experimentation themselves. Here, too, we can pose a version of the general question of method versus practice: does the practice of using computers fundamentally change scientific method, or merely provide a more efficient means of implementing standard methods?

Because computers can be used to automate measurements, quantifications, calculations, and statistical analyses where, for practical reasons, these operations cannot be otherwise carried out, many of the steps involved in reaching a conclusion on the basis of an experiment are now made inside a “black box”, without the direct involvement or awareness of a human. This has epistemological implications, regarding what we can know, and how we can know it. To have confidence in the results, computer methods are therefore subjected to tests of verification and validation.

The distinction between verification and validation is easiest to characterize in the case of computer simulations. In a typical computer simulation scenario computers are used to numerically integrate differential equations for which no analytic solution is available. The equations are part of the model the scientist uses to represent a phenomenon or system under investigation. Verifying a computer simulation means checking that the equations of the model are being correctly approximated. Validating a simulation means checking that the equations of the model are adequate for the inferences one wants to make on the basis of that model.
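
The distinction can be made tangible with a toy sketch in Python; the model, step sizes and ‘measured’ value below are all invented for illustration. The sketch integrates a simple decay equation with Euler’s method: checking that the numerical result converges to the known analytic solution stands in for verification, while comparing the model’s output against a hypothetical measurement stands in for validation.

```python
import math

# Toy illustration of the verification/validation distinction for a simulation.
# Model: exponential decay, dx/dt = -k*x, integrated with Euler's method.
# (An analytic solution exists here only so that verification is easy to show;
# in realistic simulations no such solution is available.)

def euler_decay(x0: float, k: float, dt: float, t_end: float) -> float:
    """Integrate dx/dt = -k*x from 0 to t_end with explicit Euler steps."""
    x = x0
    for _ in range(int(round(t_end / dt))):
        x += dt * (-k * x)
    return x

x0, k, t_end = 1.0, 0.5, 2.0

# Verification: are the model equations being approximated correctly?
# Check convergence toward the analytic solution x(t) = x0 * exp(-k*t)
# as the step size shrinks.
exact = x0 * math.exp(-k * t_end)
for dt in (0.1, 0.01, 0.001):
    err = abs(euler_decay(x0, k, dt, t_end) - exact)
    print(f"dt = {dt}: numerical error = {err:.6f}")

# Validation: is the model adequate for the target system? Compare the
# simulation against (here, invented) measurements of the phenomenon itself.
measured_value = 0.41                     # hypothetical observation at t_end
discrepancy = abs(euler_decay(x0, k, 0.001, t_end) - measured_value)
print(f"model-data discrepancy at t = {t_end}: {discrepancy:.3f}")
```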

A number of issues related to computer simulations have been raised. The identification of validation and verification as the testing methods has been criticized. Oreskes et al. (1994) raise concerns that “validation”, because it suggests deductive inference, might lead to over-confidence in the results of simulations. The distinction itself is probably too clean, since actual practice in the testing of simulations mixes and moves back and forth between the two (Weissart 1997; Parker 2008a; Winsberg 2010). Computer simulations do seem to have a non-inductive character, given that the principles by which they operate are built in by the programmers, and any results of the simulation follow from those in-built principles in such a way that those results could, in principle, be deduced from the program code and its inputs. The status of simulations as experiments has therefore been examined (Kaufmann and Smarr 1993; Humphreys 1995; Hughes 1999; Norton and Suppe 2001). This literature considers the epistemology of these experiments: what we can learn by simulation, and also the kinds of justifications which can be given in applying that knowledge to the “real” world (Mayo 1996; Parker 2008b). As pointed out, part of the advantage of computer simulation derives from the fact that huge numbers of calculations can be carried out without requiring direct observation by the experimenter/simulator. At the same time, many of these calculations are approximations to the calculations which would be performed first-hand in an ideal situation. Both factors introduce uncertainties into the inferences drawn from what is observed in the simulation.

For many of the reasons described above, computer simulations do not seem to belong clearly to either the experimental or theoretical domain. Rather, they seem to crucially involve aspects of both. This has led some authors, such as Fox Keller (2003: 200), to argue that we ought to consider computer simulation a “qualitatively different way of doing science”. The literature in general tends to follow Kaufmann and Smarr (1993) in referring to computer simulation as a “third way” for scientific methodology (theoretical reasoning and experimental practice are the first two ways). It should also be noted that the debates around these issues have tended to focus on the form of computer simulation typical in the physical sciences, where models are based on dynamical equations. Other forms of simulation might not have the same problems, or have problems of their own (see the entry on computer simulations in science).

In recent years, the rapid development of machine learning techniques has prompted some scholars to suggest that the scientific method has become “obsolete” (Anderson 2008, Carrol and Goodstein 2009). This has resulted in an intense debate on the relative merit of data-driven and hypothesis-driven research (for samples, see e.g. Mazzocchi 2015 or Succi and Coveney 2018). For a detailed treatment of this topic, we refer to the entry on scientific research and big data.

6. Discourse on scientific method

Despite philosophical disagreements, the idea of the scientific method still figures prominently in contemporary discourse on many different topics, both within science and in society at large. Often, reference to scientific method is used in ways that either convey the legend of a single, universal method characteristic of all science, or grant a particular method or set of methods privileged status as a special ‘gold standard’, often with reference to particular philosophers to vindicate the claims. Discourse on scientific method also typically arises when there is a need to distinguish between science and other activities, or for justifying the special status conveyed to science. In these areas, the philosophical attempts at identifying a set of methods characteristic for scientific endeavors are closely related to the philosophy of science’s classical problem of demarcation (see the entry on science and pseudo-science) and to the philosophical analysis of the social dimension of scientific knowledge and the role of science in democratic society.

One of the settings in which the legend of a single, universal scientific method has been particularly strong is science education (see, e.g., Bauer 1992; McComas 1996; Wivagg & Allchin 2002). [ 5 ] Often, ‘the scientific method’ is presented in textbooks and educational web pages as a fixed four- or five-step procedure starting from observations and description of a phenomenon and progressing over formulation of a hypothesis which explains the phenomenon, designing and conducting experiments to test the hypothesis, analyzing the results, and ending with drawing a conclusion. Such references to a universal scientific method can be found in educational material at all levels of science education (Blachowicz 2009), and numerous studies have shown that the idea of a general and universal scientific method often forms part of both students’ and teachers’ conception of science (see, e.g., Aikenhead 1987; Osborne et al. 2003). In response, it has been argued that science education needs to focus more on teaching about the nature of science, although views have differed on whether this is best done through student-led investigations, contemporary cases, or historical cases (Allchin, Andersen & Nielsen 2014).

Although occasionally phrased with reference to the H-D method, important historical roots of the legend in science education of a single, universal scientific method are the American philosopher and psychologist Dewey’s account of inquiry in How We Think (1910) and the British mathematician Karl Pearson’s account of science in Grammar of Science (1892). On Dewey’s account, inquiry is divided into the five steps of

(i) a felt difficulty, (ii) its location and definition, (iii) suggestion of a possible solution, (iv) development by reasoning of the bearing of the suggestions, (v) further observation and experiment leading to its acceptance or rejection. (Dewey 1910: 72)

Similarly, on Pearson’s account, scientific investigations start with measurement of data and observation of their correlation and sequence from which scientific laws can be discovered with the aid of creative imagination. These laws have to be subject to criticism, and their final acceptance will have equal validity for “all normally constituted minds”. Both Dewey’s and Pearson’s accounts should be seen as generalized abstractions of inquiry and not restricted to the realm of science—although both Dewey and Pearson referred to their respective accounts as ‘the scientific method’.

Occasionally, scientists make sweeping statements about a simple and distinct scientific method, as exemplified by Feynman’s simplified version of a conjectures and refutations method presented, for example, in the last of his 1964 Cornell Messenger lectures. [ 6 ] However, just as often scientists have come to the same conclusion as recent philosophy of science: that there is no unique, easily described scientific method. For example, the physicist and Nobel Laureate Weinberg described in the paper “The Methods of Science … And Those By Which We Live” (1995) how

The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. (1995: 8)

Interview studies with scientists on their conception of method show that scientists often find it hard to figure out whether available evidence confirms their hypothesis, and that there are no direct translations between general ideas about method and specific strategies to guide how research is conducted (Schickore & Hangel 2019, Hangel & Schickore 2017).

Reference to the scientific method has also often been used to argue for the scientific nature or special status of a particular activity. Philosophical positions that argue for a simple and unique scientific method as a criterion of demarcation, such as Popperian falsification, have often attracted practitioners who felt that they had a need to defend their domain of practice. For example, references to conjectures and refutation as the scientific method are abundant in much of the literature on complementary and alternative medicine (CAM)—alongside the competing position that CAM, as an alternative to conventional biomedicine, needs to develop its own methodology different from that of science.

Also within mainstream science, reference to the scientific method is used in arguments regarding the internal hierarchy of disciplines and domains. A frequently seen argument is that research based on the H-D method is superior to research based on induction from observations because in deductive inferences the conclusion follows necessarily from the premises. (See, e.g., Parascandola 1998 for an analysis of how this argument has been made to downgrade epidemiology compared to the laboratory sciences.) Similarly, based on an examination of the practices of major funding institutions such as the National Institutes of Health (NIH), the National Science Foundation (NSF) and the Biotechnology and Biological Sciences Research Council (BBSRC) in the UK, O’Malley et al. (2009) have argued that funding agencies seem to have a tendency to adhere to the view that the primary activity of science is to test hypotheses, while descriptive and exploratory research is seen as merely preparatory activities that are valuable only insofar as they fuel hypothesis-driven research.

In some areas of science, scholarly publications are structured in a way that may convey the impression of a neat and linear process of inquiry from stating a question, devising the methods by which to answer it, collecting the data, to drawing a conclusion from the analysis of data. For example, the codified format of publications in most biomedical journals known as the IMRAD format (Introduction, Methods, Results, And Discussion) is explicitly described by the journal editors as “not an arbitrary publication format but rather a direct reflection of the process of scientific discovery” (see the so-called “Vancouver Recommendations”, ICMJE 2013: 11). However, scientific publications do not in general reflect the process by which the reported scientific results were produced. For example, under the provocative title “Is the scientific paper a fraud?”, Medawar argued that scientific papers generally misrepresent how the results have been produced (Medawar 1963/1996). Similar views have been advanced by philosophers, historians and sociologists of science (Gilbert 1976; Holmes 1987; Knorr-Cetina 1981; Schickore 2008; Suppe 1998) who have argued that scientists’ experimental practices are messy and often do not follow any recognizable pattern. Publications of research results, they argue, are retrospective reconstructions of these activities that often do not preserve the temporal order or the logic of these activities, but are instead often constructed in order to screen off potential criticism (see Schickore 2008 for a review of this work).

Philosophical positions on the scientific method have also made it into the courtroom, especially in the US, where judges have drawn on philosophy of science in deciding when to confer special status on scientific expert testimony. A key case is Daubert v. Merrell Dow Pharmaceuticals (92–102, 509 U.S. 579, 1993). In this case, the Supreme Court argued in its 1993 ruling that trial judges must ensure that expert testimony is reliable, and that in doing this the court must look at the expert’s methodology to determine whether the proffered evidence is actually scientific knowledge. Further, referring to the works of Popper and Hempel, the court stated that

ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge … is whether it can be (and has been) tested. (Justice Blackmun, Daubert v. Merrell Dow Pharmaceuticals; see Other Internet Resources for a link to the opinion)

But as argued by Haack (2005a,b, 2010) and by Foster & Huber (1999), by equating the question of whether a piece of testimony is reliable with the question of whether it is scientific, as indicated by a special methodology, the court produced an inconsistent mixture of Popper’s and Hempel’s philosophies, which has led to considerable confusion in subsequent case rulings that drew on the Daubert case (see Haack 2010 for a detailed exposition).

The difficulties around identifying the methods of science are also reflected in the difficulties of identifying scientific misconduct in the form of improper application of the method or methods of science. One of the first and most influential attempts at defining misconduct in science was the US definition from 1989 that defined misconduct as

fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community. (Code of Federal Regulations, Part 50, Subpart A, August 8, 1989; italics added)

However, the “other practices that seriously deviate” clause was heavily criticized because it could be used to suppress creative or novel science. For example, the National Academy of Sciences stated in its report Responsible Science (1992) that it

wishes to discourage the possibility that a misconduct complaint could be lodged against scientists based solely on their use of novel or unorthodox research methods. (NAS: 27)

This clause was therefore later removed from the definition. For an entry into the key philosophical literature on conduct in science, see Shamoo & Resnik (2009).

The question of the source of the success of science has been at the core of philosophy since the beginning of modern science. If viewed as a matter of epistemology more generally, scientific method is a part of the entire history of philosophy. Over that time, science and whatever methods its practitioners may employ have changed dramatically. Today, many philosophers have taken up the banners of pluralism or of practice to focus on what are, in effect, fine-grained and contextually limited examinations of scientific method. Others hope to shift perspectives in order to provide a renewed general account of what characterizes the activity we call science.

One such perspective has been offered recently by Hoyningen-Huene (2008, 2013), who argues from the history of philosophy of science that after three lengthy phases of characterizing science by its method, we are now in a phase where the belief in the existence of a positive scientific method has eroded and what has been left to characterize science is only its fallibility. First was a phase from Plato and Aristotle up until the 17th century where the specificity of scientific knowledge was seen in its absolute certainty established by proof from evident axioms; next was a phase up to the mid-19th century in which the means to establish the certainty of scientific knowledge had been generalized to include inductive procedures as well. In the third phase, which lasted until the last decades of the 20th century, it was recognized that empirical knowledge was fallible, but it was still granted a special status due to its distinctive mode of production. But now in the fourth phase, according to Hoyningen-Huene, historical and philosophical studies have shown how “scientific methods with the characteristics as posited in the second and third phase do not exist” (2008: 168) and there is no longer any consensus among philosophers and historians of science about the nature of science. For Hoyningen-Huene, this is too negative a stance, and he therefore raises the question of the nature of science anew. His own answer to this question is that “scientific knowledge differs from other kinds of knowledge, especially everyday knowledge, primarily by being more systematic” (Hoyningen-Huene 2013: 14). Systematicity can have several different dimensions: among them are more systematic descriptions, explanations, predictions, defense of knowledge claims, epistemic connectedness, ideal of completeness, knowledge generation, representation of knowledge and critical discourse. Hence, what characterizes science is the greater care in excluding possible alternative explanations, the more detailed elaboration with respect to data on which predictions are based, the greater care in detecting and eliminating sources of error, the more articulate connections to other pieces of knowledge, etc. On this position, what characterizes science is not that the methods employed are unique to science, but that the methods are more carefully employed.

Another, similar approach has been offered by Haack (2003). Like Hoyningen-Huene, she sets off from a dissatisfaction with the recent clash between what she calls Old Deferentialism and New Cynicism. The Old Deferentialist position is that science progressed inductively by accumulating true theories confirmed by empirical evidence, or deductively by testing conjectures against basic statements; the New Cynics’ position is that science has no epistemic authority and no uniquely rational method and is merely politics. Haack insists that, contrary to the views of the New Cynics, there are objective epistemic standards, and there is something epistemologically special about science, even though the Old Deferentialists pictured this in the wrong way. Instead, she offers a new Critical Commonsensist account on which standards of good, strong, supportive evidence and well-conducted, honest, thorough and imaginative inquiry are not exclusive to the sciences, but are the standards by which we judge all inquirers. In this sense, science does not differ in kind from other kinds of inquiry, but it may differ in the degree to which it requires broad and detailed background knowledge and a familiarity with a technical vocabulary that only specialists may possess.

  • Aikenhead, G.S., 1987, “High-school graduates’ beliefs about science-technology-society. III. Characteristics and limitations of scientific knowledge”, Science Education , 71(4): 459–487.
  • Allchin, D., H.M. Andersen and K. Nielsen, 2014, “Complementary Approaches to Teaching Nature of Science: Integrating Student Inquiry, Historical Cases, and Contemporary Cases in Classroom Practice”, Science Education , 98: 461–486.
  • Anderson, C., 2008, “The end of theory: The data deluge makes the scientific method obsolete”, Wired magazine , 16(7): 16–07
  • Arabatzis, T., 2006, “On the inextricability of the context of discovery and the context of justification”, in Revisiting Discovery and Justification , J. Schickore and F. Steinle (eds.), Dordrecht: Springer, pp. 215–230.
  • Barnes, J. (ed.), 1984, The Complete Works of Aristotle, Vols I and II , Princeton: Princeton University Press.
  • Barnes, B. and D. Bloor, 1982, “Relativism, Rationalism, and the Sociology of Knowledge”, in Rationality and Relativism , M. Hollis and S. Lukes (eds.), Cambridge: MIT Press, pp. 1–20.
  • Bauer, H.H., 1992, Scientific Literacy and the Myth of the Scientific Method , Urbana: University of Illinois Press.
  • Bechtel, W. and R.C. Richardson, 1993, Discovering complexity , Princeton, NJ: Princeton University Press.
  • Berkeley, G., 1734, The Analyst in De Motu and The Analyst: A Modern Edition with Introductions and Commentary , D. Jesseph (trans. and ed.), Dordrecht: Kluwer Academic Publishers, 1992.
  • Blachowicz, J., 2009, “How science textbooks treat scientific method: A philosopher’s perspective”, The British Journal for the Philosophy of Science , 60(2): 303–344.
  • Bloor, D., 1991, Knowledge and Social Imagery , Chicago: University of Chicago Press, 2nd edition.
  • Boyle, R., 1682, New experiments physico-mechanical, touching the air , Printed by Miles Flesher for Richard Davis, bookseller in Oxford.
  • Bridgman, P.W., 1927, The Logic of Modern Physics , New York: Macmillan.
  • –––, 1956, “The Methodological Character of Theoretical Concepts”, in The Foundations of Science and the Concepts of Science and Psychology , Herbert Feigl and Michael Scriven (eds.), Minnesota: University of Minneapolis Press, pp. 38–76.
  • Burian, R., 1997, “Exploratory Experimentation and the Role of Histochemical Techniques in the Work of Jean Brachet, 1938–1952”, History and Philosophy of the Life Sciences , 19(1): 27–45.
  • –––, 2007, “On microRNA and the need for exploratory experimentation in post-genomic molecular biology”, History and Philosophy of the Life Sciences , 29(3): 285–311.
  • Carnap, R., 1928, Der logische Aufbau der Welt , Berlin: Bernary, transl. by R.A. George, The Logical Structure of the World , Berkeley: University of California Press, 1967.
  • –––, 1956, “The methodological character of theoretical concepts”, Minnesota studies in the philosophy of science , 1: 38–76.
  • Carrol, S., and D. Goodstein, 2009, “Defining the scientific method”, Nature Methods , 6: 237.
  • Churchman, C.W., 1948, “Science, Pragmatics, Induction”, Philosophy of Science , 15(3): 249–268.
  • Cooper, J. (ed.), 1997, Plato: Complete Works , Indianapolis: Hackett.
  • Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press
  • Dewey, J., 1910, How we think , New York: Dover Publications (reprinted 1997).
  • Douglas, H., 2009, Science, Policy, and the Value-Free Ideal , Pittsburgh: University of Pittsburgh Press.
  • Dupré, J., 2004, “Miracle of Monism ”, in Naturalism in Question , Mario De Caro and David Macarthur (eds.), Cambridge, MA: Harvard University Press, pp. 36–58.
  • Elliott, K.C., 2007, “Varieties of exploratory experimentation in nanotoxicology”, History and Philosophy of the Life Sciences , 29(3): 311–334.
  • Elliott, K. C., and T. Richards (eds.), 2017, Exploring inductive risk: Case studies of values in science , Oxford: Oxford University Press.
  • Falcon, Andrea, 2005, Aristotle and the science of nature: Unity without uniformity , Cambridge: Cambridge University Press.
  • Feyerabend, P., 1978, Science in a Free Society , London: New Left Books
  • –––, 1988, Against Method , London: Verso, 2nd edition.
  • Fisher, R.A., 1955, “Statistical Methods and Scientific Induction”, Journal of The Royal Statistical Society. Series B (Methodological) , 17(1): 69–78.
  • Foster, K. and P.W. Huber, 1999, Judging Science. Scientific Knowledge and the Federal Courts , Cambridge: MIT Press.
  • Fox Keller, E., 2003, “Models, Simulation, and ‘computer experiments’”, in The Philosophy of Scientific Experimentation , H. Radder (ed.), Pittsburgh: Pittsburgh University Press, 198–215.
  • Gilbert, G., 1976, “The transformation of research findings into scientific knowledge”, Social Studies of Science , 6: 281–306.
  • Gimbel, S., 2011, Exploring the Scientific Method , Chicago: University of Chicago Press.
  • Goodman, N., 1965, Fact, Fiction, and Forecast , Indianapolis: Bobbs-Merrill.
  • Haack, S., 1995, “Science is neither sacred nor a confidence trick”, Foundations of Science , 1(3): 323–335.
  • –––, 2003, Defending science—within reason , Amherst: Prometheus.
  • –––, 2005a, “Disentangling Daubert: an epistemological study in theory and practice”, Journal of Philosophy, Science and Law , 5, available online. doi:10.5840/jpsl2005513
  • –––, 2005b, “Trial and error: The Supreme Court’s philosophy of science”, American Journal of Public Health , 95: S66-S73.
  • –––, 2010, “Federal Philosophy of Science: A Deconstruction-and a Reconstruction”, NYUJL & Liberty , 5: 394.
  • Hangel, N. and J. Schickore, 2017, “Scientists’ conceptions of good research practice”, Perspectives on Science , 25(6): 766–791
  • Harper, W.L., 2011, Isaac Newton’s Scientific Method: Turning Data into Evidence about Gravity and Cosmology , Oxford: Oxford University Press.
  • Hempel, C., 1950, “Problems and Changes in the Empiricist Criterion of Meaning”, Revue Internationale de Philosophie , 41(11): 41–63.
  • –––, 1951, “The Concept of Cognitive Significance: A Reconsideration”, Proceedings of the American Academy of Arts and Sciences , 80(1): 61–77.
  • –––, 1965, Aspects of scientific explanation and other essays in the philosophy of science , New York–London: Free Press.
  • –––, 1966, Philosophy of Natural Science , Englewood Cliffs: Prentice-Hall.
  • Holmes, F.L., 1987, “Scientific writing and scientific discovery”, Isis , 78(2): 220–235.
  • Howard, D., 2003, “Two left turns make a right: On the curious political career of North American philosophy of science at midcentury”, in Logical Empiricism in North America , G.L. Hardcastle & A.W. Richardson (eds.), Minneapolis: University of Minnesota Press, pp. 25–93.
  • Hoyningen-Huene, P., 2008, “Systematicity: The nature of science”, Philosophia , 36(2): 167–180.
  • –––, 2013, Systematicity. The Nature of Science , Oxford: Oxford University Press.
  • Howie, D., 2002, Interpreting probability: Controversies and developments in the early twentieth century , Cambridge: Cambridge University Press.
  • Hughes, R., 1999, “The Ising Model, Computer Simulation, and Universal Physics”, in Models as Mediators , M. Morgan and M. Morrison (eds.), Cambridge: Cambridge University Press, pp. 97–145
  • Hume, D., 1739, A Treatise of Human Nature , D. Fate Norton and M.J. Norton (eds.), Oxford: Oxford University Press, 2000.
  • Humphreys, P., 1995, “Computational science and scientific method”, Minds and Machines , 5(1): 499–512.
  • ICMJE, 2013, “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals”, International Committee of Medical Journal Editors, available online, accessed August 13, 2014.
  • Jeffrey, R.C., 1956, “Valuation and Acceptance of Scientific Hypotheses”, Philosophy of Science , 23(3): 237–246.
  • Kaufmann, W.J., and L.L. Smarr, 1993, Supercomputing and the Transformation of Science , New York: Scientific American Library.
  • Knorr-Cetina, K., 1981, The Manufacture of Knowledge , Oxford: Pergamon Press.
  • Krohs, U., 2012, “Convenience experimentation”, Studies in History and Philosophy of Biological and Biomedical Sciences , 43: 52–57.
  • Kuhn, T.S., 1962, The Structure of Scientific Revolutions , Chicago: University of Chicago Press
  • Latour, B. and S. Woolgar, 1986, Laboratory Life: The Construction of Scientific Facts , Princeton: Princeton University Press, 2nd edition.
  • Laudan, L., 1968, “Theories of scientific method from Plato to Mach”, History of Science , 7(1): 1–63.
  • Lenhard, J., 2006, “Models and statistical inference: The controversy between Fisher and Neyman-Pearson”, The British Journal for the Philosophy of Science , 57(1): 69–91.
  • Leonelli, S., 2012, “Making Sense of Data-Driven Research in the Biological and the Biomedical Sciences”, Studies in the History and Philosophy of the Biological and Biomedical Sciences , 43(1): 1–3.
  • Levi, I., 1960, “Must the scientist make value judgments?”, Philosophy of Science , 57(11): 345–357
  • Lindley, D., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press.
  • Lipton, P., 2004, Inference to the Best Explanation , London: Routledge, 2nd edition.
  • Marks, H.M., 2000, The progress of experiment: science and therapeutic reform in the United States, 1900–1990 , Cambridge: Cambridge University Press.
  • Mazzochi, F., 2015, “Could Big Data be the end of theory in science?”, EMBO reports , 16: 1250–1255.
  • Mayo, D.G., 1996, Error and the Growth of Experimental Knowledge , Chicago: University of Chicago Press.
  • McComas, W.F., 1996, “Ten myths of science: Reexamining what we think we know about the nature of science”, School Science and Mathematics , 96(1): 10–16.
  • Medawar, P.B., 1963/1996, “Is the scientific paper a fraud?”, in The Strange Case of the Spotted Mouse and Other Classic Essays on Science , Oxford: Oxford University Press, 33–39.
  • Mill, J.S., 1963, Collected Works of John Stuart Mill , J. M. Robson (ed.), Toronto: University of Toronto Press
  • NAS, 1992, Responsible Science: Ensuring the integrity of the research process , Washington DC: National Academy Press.
  • Nersessian, N.J., 1987, “A cognitive-historical approach to meaning in scientific theories”, in The process of science , N. Nersessian (ed.), Berlin: Springer, pp. 161–177.
  • –––, 2008, Creating Scientific Concepts , Cambridge: MIT Press.
  • Newton, I., 1726, Philosophiae naturalis Principia Mathematica (3rd edition), in The Principia: Mathematical Principles of Natural Philosophy: A New Translation , I.B. Cohen and A. Whitman (trans.), Berkeley: University of California Press, 1999.
  • –––, 1704, Opticks or A Treatise of the Reflections, Refractions, Inflections & Colors of Light , New York: Dover Publications, 1952.
  • Neyman, J., 1956, “Note on an Article by Sir Ronald Fisher”, Journal of the Royal Statistical Society. Series B (Methodological) , 18: 288–294.
  • Nickles, T., 1987, “Methodology, heuristics, and rationality”, in Rational changes in science: Essays on Scientific Reasoning , J.C. Pitt (ed.), Berlin: Springer, pp. 103–132.
  • Nicod, J., 1924, Le problème logique de l’induction , Paris: Alcan. (Engl. transl. “The Logical Problem of Induction”, in Foundations of Geometry and Induction , London: Routledge, 2000.)
  • Nola, R. and H. Sankey, 2000a, “A selective survey of theories of scientific method”, in Nola and Sankey 2000b: 1–65.
  • –––, 2000b, After Popper, Kuhn and Feyerabend. Recent Issues in Theories of Scientific Method , London: Springer.
  • –––, 2007, Theories of Scientific Method , Stocksfield: Acumen.
  • Norton, S., and F. Suppe, 2001, “Why atmospheric modeling is good science”, in Changing the Atmosphere: Expert Knowledge and Environmental Governance , C. Miller and P. Edwards (eds.), Cambridge, MA: MIT Press, 88–133.
  • O’Malley, M., 2007, “Exploratory experimentation and scientific practice: Metagenomics and the proteorhodopsin case”, History and Philosophy of the Life Sciences , 29(3): 337–360.
  • O’Malley, M., C. Haufe, K. Elliot, and R. Burian, 2009, “Philosophies of Funding”, Cell , 138: 611–615.
  • Oreskes, N., K. Shrader-Frechette, and K. Belitz, 1994, “Verification, Validation and Confirmation of Numerical Models in the Earth Sciences”, Science , 263(5147): 641–646.
  • Osborne, J., S. Simon, and S. Collins, 2003, “Attitudes towards science: a review of the literature and its implications”, International Journal of Science Education , 25(9): 1049–1079.
  • Parascandola, M., 1998, “Epidemiology—2nd-Rate Science”, Public Health Reports , 113(4): 312–320.
  • Parker, W., 2008a, “Franklin, Holmes and the Epistemology of Computer Simulation”, International Studies in the Philosophy of Science , 22(2): 165–83.
  • –––, 2008b, “Computer Simulation through an Error-Statistical Lens”, Synthese , 163(3): 371–84.
  • Pearson, K. 1892, The Grammar of Science , London: J.M. Dents and Sons, 1951
  • Pearson, E.S., 1955, “Statistical Concepts in Their Relation to Reality”, Journal of the Royal Statistical Society , B, 17: 204–207.
  • Pickering, A., 1984, Constructing Quarks: A Sociological History of Particle Physics , Edinburgh: Edinburgh University Press.
  • Popper, K.R., 1959, The Logic of Scientific Discovery , London: Routledge, 2002
  • –––, 1963, Conjectures and Refutations , London: Routledge, 2002.
  • –––, 1985, Unended Quest: An Intellectual Autobiography , La Salle: Open Court Publishing Co..
  • Rudner, R., 1953, “The Scientist Qua Scientist Making Value Judgments”, Philosophy of Science , 20(1): 1–6.
  • Rudolph, J.L., 2005, “Epistemology for the masses: The origin of ‘The Scientific Method’ in American Schools”, History of Education Quarterly , 45(3): 341–376
  • Schickore, J., 2008, “Doing science, writing science”, Philosophy of Science , 75: 323–343.
  • Schickore, J. and N. Hangel, 2019, “‘It might be this, it should be that…’ uncertainty and doubt in day-to-day science practice”, European Journal for Philosophy of Science , 9(2): 31. doi:10.1007/s13194-019-0253-9
  • Shamoo, A.E. and D.B. Resnik, 2009, Responsible Conduct of Research , Oxford: Oxford University Press.
  • Shank, J.B., 2008, The Newton Wars and the Beginning of the French Enlightenment , Chicago: The University of Chicago Press.
  • Shapin, S. and S. Schaffer, 1985, Leviathan and the air-pump , Princeton: Princeton University Press.
  • Smith, G.E., 2002, “The Methodology of the Principia”, in The Cambridge Companion to Newton , I.B. Cohen and G.E. Smith (eds.), Cambridge: Cambridge University Press, 138–173.
  • Snyder, L.J., 1997a, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
  • –––, 1997b, “The Mill-Whewell Debate: Much Ado About Induction”, Perspectives on Science , 5: 159–198.
  • –––, 1999, “Renovating the Novum Organum: Bacon, Whewell and Induction”, Studies in History and Philosophy of Science , 30: 531–557.
  • Sober, E., 2008, Evidence and Evolution. The logic behind the science , Cambridge: Cambridge University Press
  • Sprenger, J. and S. Hartmann, 2019, Bayesian philosophy of science , Oxford: Oxford University Press.
  • Steinle, F., 1997, “Entering New Fields: Exploratory Uses of Experimentation”, Philosophy of Science (Proceedings), 64: S65–S74.
  • –––, 2002, “Experiments in History and Philosophy of Science”, Perspectives on Science , 10(4): 408–432.
  • Strasser, B.J., 2012, “Data-driven sciences: From wonder cabinets to electronic databases”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 85–87.
  • Succi, S. and P.V. Coveney, 2018, “Big data: the end of the scientific method?”, Philosophical Transactions of the Royal Society A , 377: 20180145. doi:10.1098/rsta.2018.0145
  • Suppe, F., 1998, “The Structure of a Scientific Paper”, Philosophy of Science , 65(3): 381–405.
  • Swijtink, Z.G., 1987, “The objectification of observation: Measurement and statistical methods in the nineteenth century”, in The probabilistic revolution. Ideas in History, Vol. 1 , L. Kruger (ed.), Cambridge MA: MIT Press, pp. 261–285.
  • Waters, C.K., 2007, “The nature and context of exploratory experimentation: An introduction to three case studies of exploratory research”, History and Philosophy of the Life Sciences , 29(3): 275–284.
  • Weinberg, S., 1995, “The methods of science… and those by which we live”, Academic Questions , 8(2): 7–13.
  • Weissert, T., 1997, The Genesis of Simulation in Dynamics: Pursuing the Fermi-Pasta-Ulam Problem , New York: Springer Verlag.
  • Harvey, W., 1628, Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus , in On the Motion of the Heart and Blood in Animals , R. Willis (trans.), Buffalo: Prometheus Books, 1993.
  • Winsberg, E., 2010, Science in the Age of Computer Simulation , Chicago: University of Chicago Press.
  • Wivagg, D. & D. Allchin, 2002, “The Dogma of the Scientific Method”, The American Biology Teacher , 64(9): 645–646
Other Internet Resources
  • Blackmun opinion , in Daubert v. Merrell Dow Pharmaceuticals (92–102), 509 U.S. 579 (1993).
  • Scientific Method at philpapers. Darrell Rowbottom (ed.).
  • Recent Articles | Scientific Method | The Scientist Magazine




3 The research process

In Chapter 1, we saw that scientific research is the process of acquiring scientific knowledge using the scientific method. But how is such research conducted? This chapter delves into the process of scientific research, and the assumptions and outcomes of the research process.

Paradigms of social research

Our design and conduct of research is shaped by our mental models, or frames of reference that we use to organise our reasoning and observations. These mental models or frames (belief systems) are called paradigms. The word ‘paradigm’ was popularised by Thomas Kuhn (1962) [1] in his book The structure of scientific revolutions, where he examined the history of the natural sciences to identify patterns of activities that shape the progress of science. Similar ideas are applicable to social sciences as well, where a social reality can be viewed by different people in different ways, which may constrain their thinking and reasoning about the observed phenomenon. For instance, conservatives and liberals tend to have very different perceptions of the role of government in people’s lives, and hence, have different opinions on how to solve social problems. Conservatives may believe that lowering taxes is the best way to stimulate a stagnant economy because it increases people’s disposable income and spending, which in turn expands business output and employment. In contrast, liberals may believe that governments should invest more directly in job creation programs such as public works and infrastructure projects, which will increase employment and people’s ability to consume and drive the economy. Likewise, Western societies place greater emphasis on individual rights, such as one’s right to privacy, right of free speech, and right to bear arms. In contrast, Asian societies tend to balance the rights of individuals against the rights of families, organisations, and the government, and therefore tend to be more communal and less individualistic in their policies. Such differences in perspective often lead Westerners to criticise Asian governments for being autocratic, while Asians criticise Western societies for being greedy, having high crime rates, and creating a ‘cult of the individual’. Our personal paradigms are like ‘coloured glasses’ that govern how we view the world and how we structure our thoughts about what we see in the world.

Paradigms are often hard to recognise, because they are implicit, assumed, and taken for granted. However, recognising these paradigms is key to making sense of and reconciling differences in people’s perceptions of the same social phenomenon. For instance, why do liberals believe that the best way to improve secondary education is to hire more teachers, while conservatives believe that privatising education (using such means as school vouchers) is more effective in achieving the same goal? Conservatives place more faith in competitive markets (i.e., in free competition between schools competing for education dollars), while liberals believe more in labour (i.e., in having more teachers and schools). Likewise, in social science research, to understand why a certain technology was successfully implemented in one organisation, but failed miserably in another, a researcher looking at the world through a ‘rational lens’ will look for rational explanations of the problem, such as inadequate technology or poor fit between technology and the task context where it is being utilised. Another researcher looking at the same problem through a ‘social lens’ may seek out social deficiencies such as inadequate user training or lack of management support. Those seeing it through a ‘political lens’ will look for instances of organisational politics that may subvert the technology implementation process. Hence, subconscious paradigms often constrain the concepts that researchers attempt to measure, their observations, and their subsequent interpretations of a phenomenon. However, given the complex nature of social phenomena, it is possible that all of the above paradigms are partially correct, and that a fuller understanding of the problem may require an understanding and application of multiple paradigms.

Two popular paradigms today among social science researchers are positivism and post-positivism. Positivism, based on the works of French philosopher Auguste Comte (1798–1857), was the dominant scientific paradigm until the mid-twentieth century. It holds that science or knowledge creation should be restricted to what can be observed and measured. Positivism tends to rely exclusively on theories that can be directly tested. Though positivism was originally an attempt to separate scientific inquiry from religion (where the precepts could not be objectively observed), positivism led to empiricism or a blind faith in observed data and a rejection of any attempt to extend or reason beyond observable facts. Since human thoughts and emotions could not be directly measured, they were not considered to be legitimate topics for scientific research. Frustrations with the strictly empirical nature of positivist philosophy led to the development of post-positivism (or postmodernism) during the mid-late twentieth century. Post-positivism argues that one can make reasonable inferences about a phenomenon by combining empirical observations with logical reasoning. Post-positivists view science as not certain but probabilistic (i.e., based on many contingencies), and often seek to explore these contingencies to understand social reality better. The post-positivist camp has further fragmented into subjectivists, who view the world as a subjective construction of our subjective minds rather than as an objective reality, and critical realists, who believe that there is an external reality that is independent of a person’s thinking but we can never know such reality with any degree of certainty.

Burrell and Morgan (1979), [2] in their seminal book Sociological paradigms and organisational analysis, suggested that the way social science researchers view and study social phenomena is shaped by two fundamental sets of philosophical assumptions: ontology and epistemology. Ontology refers to our assumptions about how we see the world (e.g., does the world consist mostly of social order or constant change?). Epistemology refers to our assumptions about the best way to study the world (e.g., should we use an objective or subjective approach to study social reality?). Using these two sets of assumptions, we can categorise social science research as belonging to one of four categories (see Figure 3.1).

If researchers view the world as consisting mostly of social order (ontology) and hence seek to study patterns of ordered events or behaviours, and believe that the best way to study such a world is using an objective approach (epistemology) that is independent of the person conducting the observation or interpretation, such as by using standardised data collection tools like surveys, then they are adopting a paradigm of functionalism. However, if they believe that the best way to study social order is through the subjective interpretation of participants, such as by interviewing different participants and reconciling differences among their responses using their own subjective perspectives, then they are employing an interpretivism paradigm. If researchers believe that the world consists of radical change and seek to understand or enact change using an objectivist approach, then they are employing a radical structuralism paradigm. If they wish to understand social change using the subjective perspectives of the participants involved, then they are following a radical humanism paradigm.

Figure 3.1. Four paradigms of social science research

To date, the majority of social science research has emulated the natural sciences, and followed the functionalist paradigm. Functionalists believe that social order or patterns can be understood in terms of their functional components, and therefore attempt to break down a problem into small components and study one or more components in detail using objectivist techniques such as surveys and experimental research. However, with the emergence of post-positivist thinking, a small but growing number of social science researchers are attempting to understand social order using subjectivist techniques such as interviews and ethnographic studies. Radical humanism and radical structuralism continue to represent a negligible proportion of social science research, because scientists are primarily concerned with understanding generalisable patterns of behaviour, events, or phenomena, rather than idiosyncratic or changing events. Nevertheless, if you wish to study social change, such as why democratic movements are increasingly emerging in Middle Eastern countries, or why this movement was successful in Tunisia, took a longer path to success in Libya, and is still not successful in Syria, then perhaps radical humanism is the right approach for such a study. Social and organisational phenomena generally consist of elements of both order and change. For instance, organisational success depends on formalised business processes, work procedures, and job responsibilities, while being simultaneously constrained by a constantly changing mix of competitors, competing products, suppliers, and customer base in the business environment. Hence, a holistic and more complete understanding of social phenomena, such as why some organisations are more successful than others, requires an appreciation and application of a multi-paradigmatic approach to research.

Overview of the research process

So how do our mental paradigms shape social science research? At its core, all scientific research is an iterative process of observation, rationalisation, and validation. In the observation phase, we observe a natural or social phenomenon, event, or behaviour that interests us. In the rationalisation phase, we try to make sense of the observed phenomenon, event, or behaviour by logically connecting the different pieces of the puzzle that we observe, which in some cases, may lead to the construction of a theory. Finally, in the validation phase, we test our theories using a scientific method through a process of data collection and analysis, and in doing so, possibly modify or extend our initial theory. However, research designs vary based on whether the researcher starts at observation and attempts to rationalise the observations (inductive research), or whether the researcher starts at an ex ante rationalisation or a theory and attempts to validate the theory (deductive research). Hence, the observation-rationalisation-validation cycle is very similar to the induction-deduction cycle of research discussed in Chapter 1.

Most traditional research tends to be deductive and functionalistic in nature. Figure 3.2 provides a schematic view of such a research project. This figure depicts a series of activities to be performed in functionalist research, categorised into three phases: exploration, research design, and research execution. Note that this generalised design is not a roadmap or flowchart for all research. It applies only to functionalistic research, and it can and should be modified to fit the needs of a specific project.

Figure 3.2. Functionalistic research process

The first phase of research is exploration. This phase includes exploring and selecting research questions for further investigation, examining the published literature in the area of inquiry to understand the current state of knowledge in that area, and identifying theories that may help answer the research questions of interest.

The first step in the exploration phase is identifying one or more research questions dealing with a specific behaviour, event, or phenomenon of interest. Research questions are specific questions about a behaviour, event, or phenomenon of interest that you wish to seek answers for in your research. Examples include determining which factors motivate consumers to purchase goods and services online without knowing the vendors of these goods or services, how we can make high school students more creative, and why some people commit terrorist acts. Research questions can delve into issues of what, why, how, when, and so forth. More interesting research questions are those that appeal to a broader population (e.g., ‘how can firms innovate?’ is a more interesting research question than ‘how can Chinese firms innovate in the service sector?’), address real and complex problems (in contrast to hypothetical or ‘toy’ problems), and where the answers are not obvious. Narrowly focused research questions (often with a binary yes/no answer) tend to be less useful, less interesting, and less suited to capturing the subtle nuances of social phenomena. Uninteresting research questions generally lead to uninteresting and unpublishable research findings.

The next step is to conduct a literature review of the domain of interest. The purpose of a literature review is three-fold: one, to survey the current state of knowledge in the area of inquiry, two, to identify key authors, articles, theories, and findings in that area, and three, to identify gaps in knowledge in that research area. Literature review is commonly done today using computerised keyword searches in online databases. Keywords can be combined using Boolean operators such as ‘and’ and ‘or’ to narrow down or expand the search results. Once a shortlist of relevant articles is generated from the keyword search, the researcher must then manually browse through each article, or at least its abstract, to determine the suitability of that article for a detailed review. Literature reviews should be reasonably complete, and not restricted to a few journals, a few years, or a specific methodology. Reviewed articles may be summarised in the form of tables, and can be further structured using organising frameworks such as a concept matrix. A well-conducted literature review should indicate whether the initial research questions have already been addressed in the literature (which would obviate the need to study them again), whether there are newer or more interesting research questions available, and whether the original research questions should be modified or changed in light of the findings of the literature review. The review can also provide some intuitions or potential answers to the questions of interest and/or help identify theories that have previously been used to address similar questions.
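To make the keyword-screening step concrete, here is a minimal, illustrative Python sketch (not part of the original chapter; the article records, keywords, and Boolean criteria are invented) of how a downloaded list of titles and abstracts might be filtered with include/exclude terms before each abstract is reviewed manually:

```python
# Illustrative sketch only: screen a (hypothetical) list of article records
# against Boolean keyword criteria before reading the shortlisted abstracts.

articles = [
    {"title": "Online trust and purchase intention",
     "abstract": "A survey of consumer trust in unfamiliar online vendors."},
    {"title": "Classroom creativity interventions",
     "abstract": "A field experiment on creativity training in high schools."},
]

include_terms = {"trust", "online", "consumer"}   # combined with OR
exclude_terms = {"meta-review"}                   # combined with NOT

def matches(record):
    text = (record["title"] + " " + record["abstract"]).lower()
    has_include = any(term in text for term in include_terms)
    has_exclude = any(term in text for term in exclude_terms)
    return has_include and not has_exclude        # include-terms AND NOT exclude-terms

shortlist = [a["title"] for a in articles if matches(a)]
print(shortlist)   # titles that still require a manual abstract check
```

A real search would run inside the database's own query interface; the point is only that Boolean operators narrow or expand what reaches the manual-review stage.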

Since functionalist (deductive) research involves theory-testing, the third step is to identify one or more theories that can help address the desired research questions. While the literature review may uncover a wide range of concepts or constructs potentially related to the phenomenon of interest, a theory will help identify which of these constructs is logically relevant to the target phenomenon and how. Forgoing theories may result in measuring a wide range of less relevant, marginally relevant, or irrelevant constructs, while also reducing the chances of obtaining results that are meaningful rather than due to pure chance. In functionalist research, theories can be used as the logical basis for postulating hypotheses for empirical testing. Obviously, not all theories are well-suited for studying all social phenomena. Theories must be carefully selected based on their fit with the target problem and the extent to which their assumptions are consistent with those of the target problem. We will examine theories and the process of theorising in detail in the next chapter.

The next phase in the research process is research design. This process is concerned with creating a blueprint of the actions to take in order to satisfactorily answer the research questions identified in the exploration phase. This includes selecting a research method, operationalising constructs of interest, and devising an appropriate sampling strategy.

Operationalisation is the process of designing precise measures for abstract theoretical constructs. This is a major problem in social science research, given that many of the constructs, such as prejudice, alienation, and liberalism, are hard to define, let alone measure accurately. Operationalisation starts with specifying an ‘operational definition’ (or ‘conceptualisation’) of the constructs of interest. Next, the researcher can search the literature to see if there are existing pre-validated measures matching their operational definition that can be used directly or modified to measure their constructs of interest. If such measures are not available, or if existing measures are poor or reflect a different conceptualisation than that intended by the researcher, new instruments may have to be designed for measuring those constructs. This means specifying exactly how the desired construct will be measured (e.g., how many items, what items, and so forth). This can easily be a long and laborious process, with multiple rounds of pre-tests and modifications before the newly designed instrument can be accepted as ‘scientifically valid’. We will discuss operationalisation of constructs in a future chapter on measurement.
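As a concrete (and entirely hypothetical) illustration of where this process ends up, the sketch below shows a ‘job satisfaction’ construct operationalised as three 5-point Likert items, one negatively worded, combined into a composite score per respondent; the construct, item names, and responses are invented for illustration:

```python
# Illustrative sketch only: a hypothetical construct measured with three
# 5-point Likert items; the negatively worded item is reverse-scored before
# the items are averaged into a composite score.

respondents = [
    {"sat1": 4, "sat2": 5, "sat3_neg": 2},   # sat3_neg: "I often think about quitting"
    {"sat1": 2, "sat2": 3, "sat3_neg": 4},
]

def composite_satisfaction(resp, scale_max=5):
    reversed_item = (scale_max + 1) - resp["sat3_neg"]   # 1<->5, 2<->4, 3 stays 3
    items = [resp["sat1"], resp["sat2"], reversed_item]
    return sum(items) / len(items)

for r in respondents:
    print(round(composite_satisfaction(r), 2))
```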

Simultaneously with operationalisation, the researcher must also decide what research method they wish to employ for collecting data to address their research questions of interest. Such methods may include quantitative methods such as experiments or survey research or qualitative methods such as case research or action research, or possibly a combination of both. If an experiment is desired, then what is the experimental design? If this is a survey, do you plan a mail survey, telephone survey, web survey, or a combination? For complex, uncertain, and multifaceted social phenomena, multi-method approaches may be more suitable, which may help leverage the unique strengths of each research method and generate insights that may not be obtained using a single method.

Researchers must also carefully choose the target population from which they wish to collect data, and a sampling strategy to select a sample from that population. For instance, should they survey individuals or firms or workgroups within firms? What types of individuals or firms do they wish to target? Sampling strategy is closely related to the unit of analysis in a research problem. While selecting a sample, reasonable care should be taken to avoid a biased sample (e.g., sample based on convenience) that may generate biased observations. Sampling is covered in depth in a later chapter.
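The sketch below (with an invented sampling frame) illustrates the basic idea of a probability-based draw: every unit in the frame has an equal, known chance of selection, unlike a convenience sample of whoever happens to be easiest to reach:

```python
# Illustrative sketch only: a simple random sample from a hypothetical
# sampling frame of 1,000 firms in the target population.
import random

sampling_frame = [f"firm_{i:04d}" for i in range(1, 1001)]

random.seed(42)                                 # fixed seed so the draw is reproducible
sample = random.sample(sampling_frame, k=50)    # each firm has an equal chance of selection

print(len(sample), sample[:5])
```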

At this stage, it is often a good idea to write a research proposal detailing all of the decisions made in the preceding stages of the research process and the rationale behind each decision. This multi-part proposal should address what research questions you wish to study and why, the prior state of knowledge in this area, theories you wish to employ along with hypotheses to be tested, how you intend to measure constructs, what research method is to be employed and why, and desired sampling strategy. Funding agencies typically require such a proposal in order to select the best proposals for funding. Even if funding is not sought for a research project, a proposal may serve as a useful vehicle for seeking feedback from other researchers and identifying potential problems with the research project (e.g., whether some important constructs were missing from the study) before starting data collection. This initial feedback is invaluable because it is often too late to correct critical problems after data is collected in a research study.

Having decided who to study (subjects), what to measure (concepts), and how to collect data (research method), the researcher is now ready to proceed to the research execution phase. This includes pilot testing the measurement instruments, data collection, and data analysis.

Pilot testing is an often overlooked but extremely important part of the research process. It helps detect potential problems in your research design and/or instrumentation (e.g., whether the questions asked are intelligible to the targeted sample), and ensures that the measurement instruments used in the study are reliable and valid measures of the constructs of interest. The pilot sample is usually a small subset of the target population. After successful pilot testing, the researcher may then proceed with data collection using the sampled population. The data collected may be quantitative or qualitative, depending on the research method employed.
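One routine pilot-stage check on a multi-item scale is an internal-consistency estimate such as Cronbach's alpha. The sketch below, using made-up pilot responses, shows the standard calculation; it is illustrative only and not taken from the chapter:

```python
# Illustrative sketch only: Cronbach's alpha for one construct, computed from
# a small, made-up pilot sample (rows = respondents, columns = scale items).
from statistics import variance

pilot = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [4, 4, 5],
]

def cronbach_alpha(rows):
    k = len(rows[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*rows)]  # variance of each item
    total_var = variance([sum(r) for r in rows])       # variance of the summed scale
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(round(cronbach_alpha(pilot), 2))   # roughly 0.7 or above is a commonly used rule of thumb
```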

Following data collection, the data is analysed and interpreted for the purpose of drawing conclusions regarding the research questions of interest. Depending on the type of data collected (quantitative or qualitative), data analysis may be quantitative (e.g., employ statistical techniques such as regression or structural equation modelling) or qualitative (e.g., coding or content analysis).
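As a toy illustration of the quantitative route, the sketch below fits an ordinary least squares line to invented data; a real analysis would add significance tests, diagnostics, and usually several predictors or a full structural model:

```python
# Illustrative sketch only: ordinary least squares fit of y = a + b*x to toy
# data, to show the basic shape of testing a hypothesised relationship.

x = [1, 2, 3, 4, 5, 6]                 # hypothetical predictor (e.g., hours of training)
y = [2.1, 2.9, 3.8, 5.2, 5.9, 7.1]     # hypothetical outcome (e.g., task performance)

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
sxx = sum((xi - mean_x) ** 2 for xi in x)

slope = sxy / sxx
intercept = mean_y - slope * mean_x
print(f"y = {intercept:.2f} + {slope:.2f}x")   # a positive slope is consistent with the hypothesis
```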

The final phase of research involves preparing the final research report documenting the entire research process and its findings in the form of a research paper, dissertation, or monograph. This report should outline in detail all the choices made during the research process (e.g., theory used, constructs selected, measures used, research methods, sampling, etc.) and why, as well as the outcomes of each phase of the research process. The research process must be described in sufficient detail so as to allow other researchers to replicate your study, test the findings, or assess whether the inferences derived are scientifically acceptable. Of course, having a ready research proposal will greatly simplify and quicken the process of writing the finished report. Note that research is of no value unless the research process and outcomes are documented for future generations—such documentation is essential for the incremental progress of science.

Common mistakes in research

The research process is fraught with problems and pitfalls, and novice researchers often find, after investing substantial amounts of time and effort into a research project, that their research questions were not sufficiently answered, or that the findings were not interesting enough, or that the research was not of ‘acceptable’ scientific quality. Such problems typically result in research papers being rejected by journals. Some of the more frequent mistakes are described below.

Insufficiently motivated research questions. Oftentimes, we choose our ‘pet’ problems that are interesting to us but not to the scientific community at large, i.e., they do not generate new knowledge or insight about the phenomenon being investigated. Because the research process involves a significant investment of time and effort on the researcher’s part, the researcher must be certain, and be able to convince others, that the research questions they seek to answer deal with real, and not hypothetical, problems that affect a substantial portion of a population and have not been adequately addressed in prior research.

Pursuing research fads. Another common mistake is pursuing ‘popular’ topics with limited shelf life. A typical example is studying technologies or practices that are popular today. Because research takes several years to complete and publish, it is possible that popular interest in these fads may die down by the time the research is completed and submitted for publication. A better strategy may be to study ‘timeless’ topics that have always persisted through the years.

Unresearchable problems. Some research problems may not be answered adequately based on observed evidence alone, or using currently accepted methods and procedures. Such problems are best avoided. However, some unresearchable, ambiguously defined problems may be modified or fine-tuned into well-defined and useful researchable problems.

Favoured research methods. Many researchers have a tendency to recast a research problem so that it is amenable to their favourite research method (e.g., survey research). This is an unfortunate trend. Research methods should be chosen to best fit a research problem, and not the other way around.

Blind data mining. Some researchers have the tendency to collect data first (using instruments that are already available), and then figure out what to do with it. Note that data collection is only one step in a long and elaborate process of planning, designing, and executing research. In fact, a series of other activities are needed in a research process prior to data collection. If researchers jump into data collection without such elaborate planning, the data collected will likely be irrelevant, imperfect, or useless, and their data collection efforts may be entirely wasted. An abundance of data cannot make up for deficits in research planning and design, and particularly, for the lack of interesting research questions.

  • Kuhn, T. (1962). The structure of scientific revolutions . Chicago: University of Chicago Press. ↵
  • Burrell, G. & Morgan, G. (1979). Sociological paradigms and organisational analysis: elements of the sociology of corporate life . London: Heinemann Educational. ↵



Research Process

  • Select a Topic
  • Find Background Info
  • Focus Topic
  • List Keywords
  • Search for Sources
  • Evaluate & Integrate Sources
  • Cite and Track Sources

What is Scientific Research?


Some people use the term research loosely, for example:

  • People will say they are researching different online websites to find the best place to buy a new appliance or locate a lawn care service.
  • TV news may talk about conducting research when they conduct a viewer poll on a current-event topic such as an upcoming election.
  • Undergraduate students working on a term paper or project may say they are researching the internet to find information.
  • Private sector companies may say they are conducting research to find a solution for a supply chain holdup.

However, none of the above is considered “scientific research” unless:

  • The research contributes to a body of science by providing new information through ethical study design or
  • The research follows the scientific method, an iterative process of observation and inquiry.

The Scientific Method

  • Make an observation: notice a phenomenon in your life or in society or find a gap in the already published literature.
  • Ask a question about what you have observed.
  • Hypothesize about a potential answer or explanation.
  • Make predictions about what will happen if your hypothesis is correct.
  • Design an experiment or study that will test your prediction.
  • Test the prediction by conducting an experiment or study; report the outcomes of your study.
  • Iterate! Was your prediction correct? Was the outcome unexpected? Did it lead to new observations?

The scientific method is not separate from the Research Process as described in the rest of this guide; in fact, the Research Process is directly related to the observation stage of the scientific method. Understanding what other scientists and researchers have already studied will help you focus your area of study and build on their knowledge.

Designing your experiment or study is important for both natural and social scientists. Sage Research Methods (SRM) has an excellent "Project Planner" that guides you through the basic stages of research design. SRM also has excellent explanations of qualitative and quantitative research methods for the social sciences.

For the natural sciences, Springer Nature Experiments and Protocol Exchange have guidance on quantitative research methods.


Books, journals, reference books, videos, podcasts, data-sets, and case studies on social science research methods.

Sage Research Methods includes over 2,000 books, reference books, journal articles, videos, datasets, and case studies on all aspects of social science research methodology. Browse the methods map or the list of methods to identify a social science method to pursue further. Includes a project planning tool and the "Which Stats Test" tool to identify the best statistical method for your project. Includes the notable "little green book" series (Quantitative Applications in the Social Sciences) and the "little blue book" series (Qualitative Research Methods).

Platform connecting researchers with protocols and methods.

Springer Nature Experiments has been designed to help users/researchers find and evaluate relevant protocols and methods across the whole Springer Nature protocols and methods portfolio using one search. This database includes:

  • Nature Protocols
  • Nature Reviews Methods Primers
  • Nature Methods
  • Springer Protocols

Open access for all users

Open repository for sharing scientific research protocols. These protocols are posted directly on the Protocol Exchange by authors and are made freely available to the scientific community for use and comment.

Includes these topics:

  • Biochemistry
  • Biological techniques
  • Chemical biology
  • Chemical engineering
  • Cheminformatics
  • Climate science
  • Computational biology and bioinformatics
  • Drug discovery
  • Electronics
  • Energy sciences
  • Environmental sciences
  • Materials science
  • Molecular biology
  • Molecular medicine
  • Neuroscience
  • Organic chemistry
  • Planetary science

Qualitative research is primarily exploratory. It is used to gain an understanding of underlying reasons, opinions, and motivations. Qualitative research is also used to uncover trends in thought and opinions and to dive deeper into a problem by studying an individual or a group.

Qualitative methods usually use unstructured or semi-structured techniques. The sample size is typically smaller than in quantitative research.

Example: interviews and focus groups.

Quantitative research is characterized by the gathering of data with the aim of testing a hypothesis. The data generated are numerical or, if not numerical, can be transformed into usable statistics.

Quantitative data collection methods are more structured than qualitative data collection methods and sample sizes are usually larger.

Example: survey

Note: The above descriptions of qualitative and quantitative research apply mainly to research in the social sciences rather than the natural sciences, since most natural sciences rely on quantitative methods for their experiments.

Qualitative research approaches the world in its natural setting, in a way that reveals its particularities, rather than studying it in a controlled setting. It aims to understand, describe, and sometimes explain social phenomena in a number of different ways:

  • Experiences of individuals or groups
  • Interactions and communications
  • Documents (texts, images, film, sounds, or digital documents) and similar traces of experiences or interactions

Qualitative researchers seek to understand how people conceptualize the world around them, what they are doing, how they are doing it, or what is happening to them, in terms that are significant and that offer meaningful insight.

Qualitative researchers develop and refine concepts (or hypotheses, if they are used) in the process of doing research and collecting data. Cases, with their history and complexity, are an important context for understanding the issue being studied. A major part of qualitative research is based on text and writing, from field notes and transcripts to descriptions and interpretations, and finally to the presentation of the findings and of the research as a whole.



Research Process – Steps, Examples and Tips

Research Process

Definition:

Research Process is a systematic and structured approach that involves the collection, analysis, and interpretation of data or information to answer a specific research question or solve a particular problem.

Research Process Steps

Research Process Steps are as follows:

Identify the Research Question or Problem

This is the first step in the research process. It involves identifying a problem or question that needs to be addressed. The research question should be specific, relevant, and focused on a particular area of interest.

Conduct a Literature Review

Once the research question has been identified, the next step is to conduct a literature review. This involves reviewing existing research and literature on the topic to identify any gaps in knowledge or areas where further research is needed. A literature review helps to provide a theoretical framework for the research and also ensures that the research is not duplicating previous work.

Formulate a Hypothesis or Research Objectives

Based on the research question and literature review, the researcher can formulate a hypothesis or research objectives. A hypothesis is a statement that can be tested to determine its validity, while research objectives are specific goals that the researcher aims to achieve through the research.

Design a Research Plan and Methodology

This step involves designing a research plan and methodology that will enable the researcher to collect and analyze data to test the hypothesis or achieve the research objectives. The research plan should include details on the sample size, data collection methods, and data analysis techniques that will be used.

Collect and Analyze Data

This step involves collecting and analyzing data according to the research plan and methodology. Data can be collected through various methods, including surveys, interviews, observations, or experiments. The data analysis process involves cleaning and organizing the data, applying statistical and analytical techniques to the data, and interpreting the results.
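To make this step concrete, here is a minimal Python sketch, using invented respondent records and hypothetical field names, of the collect, clean, and analyze flow described above:

```python
# Minimal sketch (hypothetical data and field names) of the
# collect -> clean -> analyze flow described above.

# "Collected" survey responses: each dict is one respondent.
responses = [
    {"id": 1, "age": 29, "satisfaction": 4},
    {"id": 2, "age": 35, "satisfaction": 5},
    {"id": 3, "age": None, "satisfaction": 3},   # missing value
    {"id": 4, "age": 42, "satisfaction": None},  # missing value
]

# Clean and organize: keep only complete records.
complete = [r for r in responses if None not in r.values()]

# Analyze: a simple descriptive statistic (mean satisfaction).
mean_satisfaction = sum(r["satisfaction"] for r in complete) / len(complete)

# Interpret and report: a summary that could feed into the write-up.
print(f"{len(complete)} of {len(responses)} responses were complete.")
print(f"Mean satisfaction among complete responses: {mean_satisfaction:.2f}")
```

Real projects would of course use more data and more careful handling of missing values; the point here is only the shape of the collect-clean-analyze sequence.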

Interpret the Findings and Draw Conclusions

After analyzing the data, the researcher must interpret the findings and draw conclusions. This involves assessing the validity and reliability of the results and determining whether the hypothesis was supported or not. The researcher must also consider any limitations of the research and discuss the implications of the findings.

Communicate the Results

Finally, the researcher must communicate the results of the research through a research report, presentation, or publication. The research report should provide a detailed account of the research process, including the research question, literature review, research methodology, data analysis, findings, and conclusions. The report should also include recommendations for further research in the area.

Review and Revise

The research process is an iterative one, and it is important to review and revise the research plan and methodology as necessary. Researchers should assess the quality of their data and methods, reflect on their findings, and consider areas for improvement.

Ethical Considerations

Throughout the research process, ethical considerations must be taken into account. This includes ensuring that the research design protects the welfare of research participants, obtaining informed consent, maintaining confidentiality and privacy, and avoiding any potential harm to participants or their communities.

Dissemination and Application

The final step in the research process is to disseminate the findings and apply the research to real-world settings. Researchers can share their findings through academic publications, presentations at conferences, or media coverage. The research can be used to inform policy decisions, develop interventions, or improve practice in the relevant field.

Research Process Example

Following is a Research Process Example:

Research Question: What are the effects of a plant-based diet on athletic performance in high school athletes?

Step 1: Background Research. Conduct a literature review to gain a better understanding of the existing research on the topic. Read academic articles and research studies related to plant-based diets, athletic performance, and high school athletes.

Step 2: Develop a Hypothesis. Based on the literature review, develop a hypothesis that a plant-based diet positively affects athletic performance in high school athletes.

Step 3: Design the Study. Design a study to test the hypothesis. Decide on the study population, sample size, and research methods. For this study, you could use a survey to collect data on dietary habits and athletic performance from a sample of high school athletes who follow a plant-based diet and a sample of high school athletes who do not follow a plant-based diet.

Step 4: Collect Data. Distribute the survey to the selected sample and collect data on dietary habits and athletic performance.

Step 5: Analyze Data. Use statistical analysis to compare the data from the two samples and determine if there is a significant difference in athletic performance between those who follow a plant-based diet and those who do not.
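As an illustration only (not part of the original example), the sketch below shows, in Python, one common way to carry out the two-sample comparison described in Step 5; the performance scores are made up, and an independent-samples t-test is just one plausible choice of test:

```python
# Hypothetical, made-up data: performance scores for athletes on a
# plant-based diet vs. athletes who are not on one.
from scipy import stats

plant_based = [72, 75, 70, 78, 74, 71, 77, 73]
non_plant_based = [70, 69, 74, 68, 72, 71, 70, 69]

# Welch's t-test (does not assume equal variances) comparing group means.
t_stat, p_value = stats.ttest_ind(plant_based, non_plant_based, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No statistically significant difference detected.")
```

Whether a t-test is appropriate depends on the design and the data; with non-normal or ordinal performance measures, a non-parametric test might be preferred.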

Step 6: Interpret Results. Interpret the results of the analysis in the context of the research question and hypothesis. Discuss any limitations or potential biases in the study design.

Step 7: Draw Conclusions. Based on the results, draw conclusions about whether a plant-based diet has a significant effect on athletic performance in high school athletes. If the hypothesis is supported by the data, discuss potential implications and future research directions.

Step 8: Communicate Findings. Communicate the findings of the study in a clear and concise manner. Use appropriate language, visuals, and formats to ensure that the findings are understood and valued.

Applications of Research Process

The research process has numerous applications across a wide range of fields and industries. Some examples of applications of the research process include:

  • Scientific research: The research process is widely used in scientific research to investigate phenomena in the natural world and develop new theories or technologies. This includes fields such as biology, chemistry, physics, and environmental science.
  • Social sciences : The research process is commonly used in social sciences to study human behavior, social structures, and institutions. This includes fields such as sociology, psychology, anthropology, and economics.
  • Education: The research process is used in education to study learning processes, curriculum design, and teaching methodologies. This includes research on student achievement, teacher effectiveness, and educational policy.
  • Healthcare: The research process is used in healthcare to investigate medical conditions, develop new treatments, and evaluate healthcare interventions. This includes fields such as medicine, nursing, and public health.
  • Business and industry: The research process is used in business and industry to study consumer behavior and market trends and to develop new products or services. This includes market research, product development, and customer satisfaction research.
  • Government and policy: The research process is used in government and policy to evaluate the effectiveness of policies and programs and to inform policy decisions. This includes research on social welfare, crime prevention, and environmental policy.

Purpose of Research Process

The purpose of the research process is to systematically and scientifically investigate a problem or question in order to generate new knowledge or solve a problem. The research process enables researchers to:

  • Identify gaps in existing knowledge: By conducting a thorough literature review, researchers can identify gaps in existing knowledge and develop research questions that address these gaps.
  • Collect and analyze data : The research process provides a structured approach to collecting and analyzing data. Researchers can use a variety of research methods, including surveys, experiments, and interviews, to collect data that is valid and reliable.
  • Test hypotheses : The research process allows researchers to test hypotheses and make evidence-based conclusions. Through the systematic analysis of data, researchers can draw conclusions about the relationships between variables and develop new theories or models.
  • Solve problems: The research process can be used to solve practical problems and improve real-world outcomes. For example, researchers can develop interventions to address health or social problems, evaluate the effectiveness of policies or programs, and improve organizational processes.
  • Generate new knowledge : The research process is a key way to generate new knowledge and advance understanding in a given field. By conducting rigorous and well-designed research, researchers can make significant contributions to their field and help to shape future research.

Tips for Research Process

Here are some tips for the research process:

  • Start with a clear research question : A well-defined research question is the foundation of a successful research project. It should be specific, relevant, and achievable within the given time frame and resources.
  • Conduct a thorough literature review: A comprehensive literature review will help you to identify gaps in existing knowledge, build on previous research, and avoid duplication. It will also provide a theoretical framework for your research.
  • Choose appropriate research methods: Select research methods that are appropriate for your research question, objectives, and sample size. Ensure that your methods are valid, reliable, and ethical.
  • Be organized and systematic: Keep detailed notes throughout the research process, including your research plan, methodology, data collection, and analysis. This will help you to stay organized and ensure that you don’t miss any important details.
  • Analyze data rigorously: Use appropriate statistical and analytical techniques to analyze your data. Ensure that your analysis is valid, reliable, and transparent.
  • Interpret results carefully: Interpret your results in the context of your research question and objectives. Consider any limitations or potential biases in your research design, and be cautious in drawing conclusions.
  • Communicate effectively: Communicate your research findings clearly and effectively to your target audience. Use appropriate language, visuals, and formats to ensure that your findings are understood and valued.
  • Collaborate and seek feedback : Collaborate with other researchers, experts, or stakeholders in your field. Seek feedback on your research design, methods, and findings to ensure that they are relevant, meaningful, and impactful.



Steps in the Scientific Process  

Diagram of scientific process, with arrows pointing to and from each step: 1) Observe nature; 2) Ask questions and develop hypothesis; 3) Plan and conduct an investigation; 4) Analyze and interpret data; 5) Construct explanations from evidence; 6) Communicate conclusions

Science is something students can do! The scientific process is about asking questions, pursuing answers and using evidence to support results. With that in mind, students do not have to be professionals to start conducting science in their own community. They can collect and analyze new data or perform calculations on existing data with new perspectives to gain a better understanding of the world around them. 

One of the most important aspects of science is communication. Scientists must effectively explain their findings to the general public and be open to feedback from their peers. When this whole process is followed thoroughly, everyone benefits from their hard work and perseverance.

If students are having trouble knowing where to start in this whole endeavor, they should try looking at the world around them. When something catches their attention, they should think about what kinds of questions they could ask about how or why that thing is the way it is. Then, if they simply follow GLOBE's research flowchart above, students will be conducting science before they know it!  


Research Process: 8 Steps in Research Process

The research process starts with identifying a research problem and conducting a literature review to understand its context. The researcher sets research questions, objectives, and hypotheses based on the research problem.

A research study design is then formed, a sample is selected, and data are collected; after the collected data are processed and analyzed, the research findings are presented in a research report.

What is the Research Process?

There are a variety of approaches to research in any field of investigation, irrespective of whether it is applied research or basic research. Each research study will be unique in some ways because of the particular time, setting, environment, and place it is being undertaken.

Nevertheless, all research endeavors share a common goal of furthering our understanding of the problem, and thus, all traverse through certain primary stages, forming a process called the research process.

Understanding the research process is necessary to effectively carry out research and sequence the stages inherent in the process.

How Does the Research Process Work?


The eight-step research process is, in essence, part and parcel of a research proposal. It is an outline of the commitment that you intend to follow in executing a research study.

A close examination of these stages reveals that each of them, by and large, depends upon the others.

One cannot analyze data (step 7) without first collecting data (step 6), and one cannot write a report (step 8) without first collecting and analyzing data (step 7).

Research, then, is a system of interdependent, related stages. Violating this sequence can cause irreparable harm to the study.

It is also true that several alternatives are available to the researcher during each stage stated above. A research process can be compared with a route map.

The map analogy is useful for the researcher because several alternatives exist at each stage of the research process.

Choosing the best alternative at each stage, given the constraints of time, money, and human resources, is the researcher's primary goal.

Before explaining the stages of the research process, we first explain the term 'iterative', which sits at the center of most schematic diagrams of the process.

The key to a successful research project ultimately lies in iteration: the process of returning again and again to the identification of the research problems, methodology, data collection, etc., which leads to new ideas, revisions, and improvements.

By discussing the research project with advisers and peers, one will often find that new research questions need to be added, variables to be omitted, added or redefined, and other changes to be made. As a proposed study is examined and reexamined from different perspectives, it may begin to transform and take a different shape.

This is expected and is an essential component of a good research study.

Besides, examining study methods and data collected from different viewpoints is important to ensure a comprehensive approach to the research question.

In conclusion, there is seldom any single strategy or formula for developing a successful research study, but it is essential to realize that the research process is cyclical and iterative.

What is the primary purpose of the research process?

The research process aims to identify a research problem, understand its context through a literature review, set research questions and objectives, design a research study, select a sample, collect data, analyze the data, and present the findings in a research report.

Why is the research design important in the research process?

The research design is the blueprint for fulfilling objectives and answering research questions. It specifies the methods and procedures for collecting, processing, and analyzing data, ensuring the study is structured and systematic.

8 Steps of the Research Process

Step #1: Identifying the Research Problem

The first and foremost task in the entire process of scientific research is to identify a research problem .

A well-identified problem will lead the researcher to accomplish all-important phases of the research process, from setting objectives to selecting the research methodology .

But the core question is whether all problems require research.

We have countless problems around us, but not all of those we encounter qualify as research problems, and thus not all need to be researched.

Keeping this point in mind, we must draw a line between research and non-research problems.

Intuitively, researchable problems are those that can be thoroughly verified and investigated through the collection and analysis of data. In contrast, non-research problems do not need to go through these processes.

Researchers need to be able to identify both:

Non-Research Problems


A non-research problem does not require any research to arrive at a solution. Intuitively, a non-researchable problem consists of vague details and cannot be resolved through research.

It is a managerial or built-in problem that may be solved at the administrative or management level. The answer to any question raised in a non-research setting is almost always obvious.

A cholera outbreak following a severe flood, for example, is a common phenomenon in many communities. The reason for this is known. It is thus not a research problem.

Similarly, the reasons for the sudden rise in prices of many essential commodities following the announcement of the budget by the Finance Minister need no investigation. Hence it is not a problem that needs research.

How is a research problem different from a non-research problem?

A research problem is a perceived difficulty that requires thorough verification and investigation through data analysis and collection. In contrast, a non-research problem does not require research for a solution, as the answer is often obvious or already known.

Non-Research Problem Examples

A recent survey in Town A found that 1,000 women were continuous users of contraceptive pills.

But last month’s service statistics indicate that none of these women were using contraceptive pills (Fisher et al. 1991:4).

The discrepancy is that all 1,000 women should have been using the pill, but none is doing so. The question is: why does this discrepancy exist?

Well, the fact is that a monsoon flood prevented all new supplies of pills from reaching Town A, and all old supplies have been exhausted. Thus, although the problem situation exists, the reason for the problem is already known.

Therefore, assuming all the facts are correct, there is no reason to research the factors associated with pill discontinuation among women. This is, thus, a non-research problem.

A pilot survey by university students revealed that in one rural town the goiter prevalence among school children is as high as 80%, while in a neighboring rural town it is only 30%. Why the discrepancy?

Upon inquiry, it was found that some three years back, UNICEF had launched a lipiodol injection program in the neighboring town.

This acted as a preventive measure against goiter. The reason for the discrepancy is known; hence, we do not consider the problem a research problem.

A hospital treated a large number of cholera cases with penicillin, but the treatment with penicillin was not found to be effective. Do we need research to know the reason?

Here again, there is a single known reason: Vibrio cholerae is not sensitive to penicillin, so penicillin is not the drug of choice for this disease.

In this case, too, as the reasons are known, it is unwise to undertake any study to find out why penicillin does not improve the condition of cholera patients. This is also a non-research problem.

In the tea marketing system, buying and selling tea starts with bidders. Blenders purchase open tea from the bidders. Over the years, marketing cost has been the highest for bidders and the lowest for blenders. What makes this difference?

The bidders pay exorbitantly higher transport costs, which constitute about 30% of their total cost.

Blenders have significantly fewer marketing functions involving transportation, so their marketing cost remains minimal.

Hence no research is needed to identify the factors that make this difference.

Here are some of the problems we frequently encounter, which may well be considered non-research problems:

  • Rises in the price of warm clothes during winter;
  • Preferring admission to public universities over private universities;
  • Crisis of accommodations in sea resorts during summer
  • Traffic jams in the city street after office hours;
  • High sales in department stores after an offer of a discount.

Research Problem

In contrast to a non-research problem, a research problem is of primary concern to a researcher.

A research problem is a perceived difficulty, a feeling of discomfort, or a discrepancy between a common belief and reality.

As noted by Fisher et al. (1993), a problem will qualify as a potential research problem when the following three conditions exist:

  • There should be a perceived discrepancy between “what it is” and “what it should have been.” This implies that there should be a difference between “what exists” and the “ideal or planned situation”;
  • A question about “why” the discrepancy exists. This implies that the reason(s) for this discrepancy is unclear to the researcher (so that it makes sense to develop a research question); and
  • There should be at least two possible answers or solutions to the questions or problems.

The third point is important. If there is only one possible and plausible answer to the question about the discrepancy, then a research situation does not exist.

It is a non-research problem that can be tackled at the managerial or administrative level.

Research Problem Examples

Research Problem – Example #1

While visiting a rural area, the UNICEF team observed that some villages have female school attendance rates as high as 75%, while some have as low as 10%, although all villages should have a nearly equal attendance rate. What factors are associated with this discrepancy?

We may enumerate several reasons for this:

  • Villages differ in their socio-economic background.
  • In some villages, the Muslim population constitutes a large proportion of the total population. Religion might play a vital role.
  • Schools are far away from some villages. The distance thus may make this difference.

Because there is more than one answer to the problem, it is considered a research problem, and a study can be undertaken to find a solution.

Research Problem – Example #2

The Government has been making all-out efforts to ensure a regular flow of credit in rural areas at a concession rate through liberal lending policy and establishing many bank branches in rural areas.

Knowledgeable sources indicate that expected development in rural areas has not yet been achieved, mainly because of improper credit utilization.

More than one reason is suspected for such misuse or misdirection.

These include, among others:

  • Diversion of credit money to some unproductive sectors
  • Transfer of credit money to other people like money lenders, who exploit the rural people with this money
  • Lack of knowledge of proper utilization of the credit.

Here, too, there is more than one possible reason for the misuse of loans. We thus consider this a researchable problem.

Research Problem – Example #3

Let's look at a news headline: Stock exchange observes the steepest-ever fall in stock prices; several injured as retail investors clash with police, vehicles ransacked.

The investors' demonstration, protest, and clash with the police pose a problem. Still, it is certainly not a research problem, since there is only one known reason for it: the stock exchange experienced the steepest-ever fall in stock prices. But what caused this unprecedented fall in the share market?

Experts felt that no single reason could be attributed to the problem. It is a mix of several factors and is a research problem. The following were assumed to be some of the possible reasons:

  • The merchant banking system;
  • Liquidity shortage because of the hike in the rate of cash reserve requirement (CRR);
  • IMF’s warnings and prescriptions on the commercial banks’ exposure to the stock market;
  • Increase in supply of new shares;
  • Manipulation of share prices;
  • Lack of knowledge of the investors on the company’s fundamentals.

The choice of a research problem is not as easy as it appears. The researcher is generally guided by their:

  • own intellectual orientation,
  • level of training,
  • experience,
  • knowledge of the subject matter, and
  • intellectual curiosity.

Theoretical and practical considerations also play a vital role in choosing a research problem. Societal needs also guide in choosing a research problem.

Once we have chosen a research problem, a few more related steps must be followed before a decision is taken to undertake a research study.

These include, among others, the following:

  • Statement of the problem.
  • Justifying the problem.
  • Analyzing the problem.

A detailed exposition of these issues is undertaken in chapter ten while discussing the proposal development.

A clear and well-defined problem statement is considered the foundation for developing the research proposal.

It enables the researcher to systematically point out why the proposed research on the problem should be undertaken and what he hopes to achieve with the study’s findings.

A well-defined statement of the problem will lead the researcher to formulate the research objectives, understand the background of the study, and choose a proper research methodology.

Once the problem situation has been identified and clearly stated, it is important to justify the importance of the problem.

In justifying the problem, we ask such questions as why the problem is important to study, how large and widespread it is, and whether others can be convinced of its importance.

Answers to the above questions should be reviewed and presented in one or two paragraphs that justify the importance of the problem.

As a first step in analyzing the problem, critical attention should be given to accommodating the viewpoints of managers, users, and researchers through thorough discussion.

The next step is identifying the factors that may have contributed to the perceived problems.

Issues of Research Problem Identification

There are several ways to identify, define, and analyze a problem, obtain insights, and get a clearer idea about these issues. Exploratory research is one of the ways of accomplishing this.

The purpose of the exploratory research process is to progressively narrow the scope of the topic and transform the undefined problems into defined ones, incorporating specific research objectives.

The exploratory study entails a few basic strategies for gaining insights into the problem. It is accomplished through such efforts as:

Pilot Survey

A pilot survey collects proxy data from the ultimate subjects of the study to serve as a guide for the large study. A pilot study generates primary data, usually for qualitative analysis.

This characteristic distinguishes a pilot survey from secondary data analysis, which gathers background information.

Case Studies

Case studies are quite helpful in diagnosing a problem and paving the way to defining the problem. It investigates one or a few situations identical to the researcher’s problem.

Focus Group Interviews

Focus group interviews, an unstructured free-flowing interview with a small group of people, may also be conducted to understand and define a research problem .

Experience Survey

An experience survey is another strategy for dealing with the problem of identifying and defining the research problem.

It is an exploratory research endeavor in which individuals who are knowledgeable about and experienced in a particular research problem are closely consulted to understand the problem.

These persons are sometimes known as key informants, and an interview with them is popularly known as the Key Informant Interview (KII).

Step #2: Reviewing the Literature

A review of relevant literature is an integral part of the research process. It enables the researcher to formulate the problem in terms of those specific aspects of the general area of interest that have not been researched so far.

Such a review provides exposure to a larger body of knowledge and equips him with enhanced knowledge to efficiently follow the research process.

Through a proper review of the literature, the researcher may develop coherence between the results of their study and those of others.

A review of previous documents on similar or related phenomena is essential even for beginning researchers.

Ignoring the existing literature may lead to wasted effort on the part of the researchers.

Why spend time merely repeating what other investigators have already done?

If the researcher is aware of earlier studies of the topic or related topics, they will be in a much better position to assess the significance of their own work and to convince others that it is important.

A confident and expert researcher is also better placed to question others' methodology, their choice of data, and the quality of the inferences drawn from their results.

In sum, we enumerate the following arguments in favor of reviewing the literature:

  • It avoids duplication of the work that has been done in the recent past.
  • It helps the researcher discover what others have learned and reported on the problem.
  • It enables the researcher to become familiar with the methodology followed by others.
  • It allows the researcher to understand what concepts and theories are relevant to his area of investigation.
  • It helps the researcher to understand if there are any significant controversies, contradictions, and inconsistencies in the findings.
  • It allows the researcher to understand if there are any unanswered research questions.
  • It might help the researcher to develop an analytical framework.
  • It will help the researcher consider including variables in his research that he might not have thought about.

Why is reviewing literature crucial in the research process?

Reviewing literature helps avoid duplicating previous work, discovers what others have learned about the problem, familiarizes the researcher with relevant concepts and theories, and ensures a comprehensive approach to the research question.

What is the significance of reviewing literature in the research process?

Reviewing relevant literature helps formulate the problem, understand the background of the study, choose a proper research methodology, and develop coherence between the study’s results and previous findings.

Step #3: Setting Research Questions, Objectives, and Hypotheses

After discovering and defining the research problem, researchers should make a formal statement of the problem leading to research objectives .

An objective will precisely say what should be researched, delineate the type of information that should be collected, and provide a framework for the scope of the study. A well-formulated, testable research hypothesis is the best expression of a research objective.

A hypothesis is an unproven statement or proposition that can be refuted or supported by empirical data. Hypothetical statements assert a possible answer to a research question.

Step #4: Choosing the Study Design

The research design is the blueprint or framework for fulfilling objectives and answering research questions .

It is a master plan specifying the methods and procedures for collecting, processing, and analyzing the collected data. There are four basic research designs that a researcher can use to conduct a study:

  • survey,
  • experiment,
  • secondary data study, and
  • observational study.

The choice of research design from among the above four depends primarily on four factors:

  • The type of problem,
  • The objectives of the study,
  • The existing state of knowledge about the problem being studied, and
  • The resources available for the study.

Step #5: Deciding on the Sample Design

Sampling is an important and separate step in the research process. The basic idea of sampling is to use a relatively small number of items or portions of a universe (called a population), known as a sample, to draw conclusions about the whole population.

It contrasts with the process of complete enumeration, in which every member of the population is included.

Such a complete enumeration is referred to as a census.

A population is the total collection of elements about which we wish to make some inference or generalization.

A sample is a part of the population, carefully selected to represent that population. If certain statistical procedures are followed in selecting the sample, it should have the same characteristics as the population. These procedures are embedded in the sample design.

Sample design refers to the methods followed in selecting a sample from the population and to the estimating technique, that is, the formula for computing the sample statistics.
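For instance, under simple random sampling, the usual estimators and the standard error of the sample mean take the familiar textbook forms shown below (these are general statistical results, not formulas specific to this text):

```latex
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2,
\qquad
\mathrm{SE}(\bar{x}) = \frac{s}{\sqrt{n}}\,\sqrt{1 - \frac{n}{N}}
```

Here n is the sample size and N is the population size; the factor under the second square root is the finite population correction, which is commonly dropped when n is small relative to N.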

The fundamental question is, then, how to select a sample.

To answer this question, we must have acquaintance with the sampling methods.

These methods are basically of two types:

  • probability sampling, and
  • non-probability sampling.

Probability sampling ensures every unit has a known nonzero probability of selection within the target population.

If there is no feasible alternative, a non-probability sampling method may be employed.

The basis of such selection depends entirely on the researcher's discretion. This approach is variously called judgment sampling, convenience sampling, accidental sampling, or purposive sampling.

The most widely used probability sampling methods are simple random sampling, stratified random sampling, cluster sampling, and systematic sampling. They have been classified by their representation basis and unit selection techniques.

Two other variations of the sampling methods that are in great use are multistage sampling and probability proportional to size (PPS) sampling.

Multistage sampling is most commonly used in drawing samples from very large and diverse populations.

The PPS sampling is a variation of multistage sampling in which the probability of selecting a cluster is proportional to its size, and an equal number of elements are sampled within each cluster.
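As a small illustration of two of the methods named above, here is a rough Python sketch of simple random sampling and proportionally allocated stratified sampling; the sampling frame of household IDs and the region tags are invented for the example:

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: 1,000 household IDs, each tagged with a region.
frame = [{"id": i, "region": "urban" if i % 3 == 0 else "rural"}
         for i in range(1000)]

# Simple random sampling: every unit has the same known probability of selection.
srs_sample = random.sample(frame, k=50)

# Stratified random sampling: sample within each stratum (here, region),
# allocating the sample to strata in proportion to their size.
def stratified_sample(units, stratum_key, total_n):
    strata = {}
    for unit in units:
        strata.setdefault(unit[stratum_key], []).append(unit)
    sample = []
    for members in strata.values():
        n_h = round(total_n * len(members) / len(units))  # proportional allocation
        sample.extend(random.sample(members, k=n_h))
    return sample

stratified = stratified_sample(frame, "region", total_n=50)
print(len(srs_sample), len(stratified))
```

Cluster, systematic, multistage, and PPS sampling follow the same logic of known selection probabilities but differ in how the units are grouped and drawn.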

Step #6: Collecting Data from the Research Sample

Data gathering may range from simple observation to a large-scale survey in any defined population. There are many ways to collect data. The approach selected depends on the objectives of the study, the research design, and the availability of time, money, and personnel.

With the variation in the type of data (qualitative or quantitative) to be collected, the method of data collection also varies.

The most common means for collecting quantitative data is the structured interview.

Studies that obtain data by interviewing respondents are called surveys. Data can also be collected by using self-administered questionnaires. Telephone interviewing is another way in which data may be collected.

Other means of data collection include secondary sources, such as the census, vital registration records, official documents, previous surveys, etc.

Qualitative data are collected mainly through in-depth interviews, focus group discussions, Key Informant Interviews (KII), and observational studies.

Step #7: Processing and Analyzing the Collected Research Data

Data processing generally begins with the editing and coding of data. Data are edited to ensure consistency across respondents and to locate omissions, if any.

In survey data, editing reduces errors in the recording, improves legibility, and clarifies unclear and inappropriate responses. In addition to editing, the data also need coding.

Because it is impractical to place raw data into a report, alphanumeric codes are used to reduce the responses to a more manageable form for storage and future processing.

This coding process facilitates the processing of the data. The personal computer offers an excellent opportunity for data editing and coding processes.
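A minimal sketch of what editing and coding might look like on a personal computer, in Python with pandas and hypothetical variable names:

```python
import pandas as pd

# Hypothetical raw survey responses, as they might look after data entry.
raw = pd.DataFrame({
    "respondent": [101, 102, 103, 104],
    "education":  ["primary", "secondary", None, "tertiary"],  # one omission
})

# Editing: locate omissions so they can be checked against the questionnaires.
print("Missing education responses:", raw["education"].isna().sum())

# Coding: map verbatim responses to compact numeric codes for storage
# and later processing.
education_codes = {"primary": 1, "secondary": 2, "tertiary": 3}
raw["education_code"] = raw["education"].map(education_codes)

print(raw)
```

The codes themselves (1, 2, 3) are arbitrary labels chosen for this sketch; in practice they would be documented in a codebook.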

Data analysis usually involves reducing accumulated data to a manageable size, developing summaries, searching for patterns, and applying statistical techniques for understanding and interpreting the findings in light of the research questions.

Further, based on his analysis, the researcher determines if his findings are consistent with the formulated hypotheses and theories.

The techniques used in analyzing data may range from simple graphical techniques to very complex multivariate analyses depending on the study’s objectives, the research design employed, and the nature of the data collected.

As in the case of data collection methods, an analytical technique appropriate in one situation may not be suitable for another.
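As one simple example of searching for patterns in coded survey data, a cross-tabulation with a chi-square test of association might look like the following Python sketch; the data and variable names are invented:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Invented, coded survey data: region of residence and use of a family
# planning method.
data = pd.DataFrame({
    "region":      ["urban", "urban", "rural", "rural",
                    "urban", "rural", "rural", "urban"],
    "uses_method": ["yes", "no", "no", "no",
                    "yes", "yes", "no", "yes"],
})

# Cross-tabulate the two variables to look for a pattern.
table = pd.crosstab(data["region"], data["uses_method"])
print(table)

# Chi-square test of association between region and method use.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```

With a sample this tiny the test has no real power; the example only shows the mechanics of moving from coded data to a simple analytical technique.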

Step #8: Writing the Research Report – Developing the Research Proposal, Writing the Report, Disseminating and Utilizing Results

The entire task of a research study is accumulated in a document called a proposal or research proposal.

A research proposal is a work plan, prospectus, outline, offer, and statement of intent or commitment from an individual researcher or an organization to produce a product or render a service to a potential client or sponsor.

The proposal is prepared following the sequence presented in the research process. It tells us what will be done, how, where, and for whom.

It must also show the benefit of doing it. It always includes an explanation of the purpose of the study (the research objectives) or a definition of the problem.

It systematically outlines the particular research methodology and details the procedures utilized at each stage of the research process.

The end goal of a scientific study is to interpret the results and draw conclusions.

To this end, it is necessary to prepare a report and transmit the findings and recommendations to administrators, policymakers, and program managers to make a decision.

There are various kinds of research reports: term papers, dissertations, journal articles, papers for presentation at professional conferences and seminars, books, theses, and so on. The results of a research investigation prepared in any form are of little utility if they are not communicated to others.

The primary purpose of a dissemination strategy is to identify the most effective media channels to reach different audience groups with study findings most relevant to their needs.

The dissemination may be made through a conference, a seminar, a report, or an oral or poster presentation.

The style and organization of the report will differ according to the target audience, the occasion, and the purpose of the research. Reports should be developed from the client’s perspective.

A report is an excellent means that helps to establish the researcher’s credibility. At a bare minimum, a research report should contain sections on:

  • An executive summary;
  • Background of the problem;
  • Literature review;
  • Methodology;
  • Discussion;
  • Conclusions and
  • Recommendations.

The study results can also be disseminated through peer-reviewed journals published by academic institutions and reputed publishers both at home and abroad. The report should be properly evaluated.

These journals have their own formats and editorial policies. Contributors can submit their manuscripts, adhering to these policies and formats, for possible publication.

There are now ample opportunities for researchers to publish their work online.

Researchers have conducted many interesting studies whose results never reach actual settings. Ideally, the concluding step of a scientific study is to plan for its utilization in the real world.

Although researchers are often not in a position to implement a plan for utilizing research findings, they can contribute by including in their research reports a few recommendations regarding how the study results could be utilized for policy formulation and program intervention.

Why is the dissemination of research findings important?

Dissemination of research findings is crucial because the results of a research investigation have little utility if not communicated to others. Dissemination ensures that the findings reach relevant stakeholders, policymakers, and program managers to inform decisions.

How should a research report be structured?

A research report should contain sections on an executive summary, background of the problem, literature review, methodology, findings, discussion, conclusions, and recommendations.

Why is it essential to consider the target audience when preparing a research report?

The style and organization of a research report should differ based on the target audience, occasion, and research purpose. Tailoring the report to the audience ensures that the findings are communicated effectively and are relevant to their needs.



The new science of death: ‘There’s something happening in the brain that makes no sense’

New research into the dying brain suggests the line between life and death may be less distinct than previously thought

Patient One was 24 years old and pregnant with her third child when she was taken off life support. It was 2014. A couple of years earlier, she had been diagnosed with a disorder that caused an irregular heartbeat, and during her two previous pregnancies she had suffered seizures and faintings. Four weeks into her third pregnancy, she collapsed on the floor of her home. Her mother, who was with her, called 911. By the time an ambulance arrived, Patient One had been unconscious for more than 10 minutes. Paramedics found that her heart had stopped.

After being driven to a hospital where she couldn’t be treated, Patient One was taken to the emergency department at the University of Michigan. There, medical staff had to shock her chest three times with a defibrillator before they could restart her heart. She was placed on an external ventilator and pacemaker, and transferred to the neurointensive care unit, where doctors monitored her brain activity. She was unresponsive to external stimuli, and had a massive swelling in her brain. After she lay in a deep coma for three days, her family decided it was best to take her off life support. It was at that point – after her oxygen was turned off and nurses pulled the breathing tube from her throat – that Patient One became one of the most intriguing scientific subjects in recent history.

For several years, Jimo Borjigin, a professor of neurology at the University of Michigan, had been troubled by the question of what happens to us when we die. She had read about the near-death experiences of certain cardiac-arrest survivors who had undergone extraordinary psychic journeys before being resuscitated. Sometimes, these people reported travelling outside of their bodies towards overwhelming sources of light where they were greeted by dead relatives. Others spoke of coming to a new understanding of their lives, or encountering beings of profound goodness. Borjigin didn’t believe the content of those stories was true – she didn’t think the souls of dying people actually travelled to an afterworld – but she suspected something very real was happening in those patients’ brains. In her own laboratory, she had discovered that rats undergo a dramatic storm of many neurotransmitters, including serotonin and dopamine, after their hearts stop and their brains lose oxygen. She wondered if humans’ near-death experiences might spring from a similar phenomenon, and if it was occurring even in people who couldn’t be revived.

Dying seemed like such an important area of research – we all do it, after all – that Borjigin assumed other scientists had already developed a thorough understanding of what happens to the brain in the process of death. But when she looked at the scientific literature, she found little enlightenment. “To die is such an essential part of life,” she told me recently. “But we knew almost nothing about the dying brain.” So she decided to go back and figure out what had happened inside the brains of people who died at the University of Michigan neurointensive care unit. Among them was Patient One.

At the time Borjigin began her research into Patient One, the scientific understanding of death had reached an impasse. Since the 1960s, advances in resuscitation had helped to revive thousands of people who might otherwise have died. About 10% or 20% of those people brought with them stories of near-death experiences in which they felt their souls or selves departing from their bodies. A handful of those patients even claimed to witness, from above, doctors’ attempts to resuscitate them. According to several international surveys and studies, one in 10 people claims to have had a near-death experience involving cardiac arrest, or a similar experience in circumstances where they may have come close to death. That’s roughly 800 million souls worldwide who may have dipped a toe in the afterlife.

As remarkable as these near-death experiences sounded, they were consistent enough that some scientists began to believe there was truth to them: maybe people really did have minds or souls that existed separately from their living bodies. In the 1970s, a small network of cardiologists, psychiatrists, medical sociologists and social psychologists in North America and Europe began investigating whether near-death experiences proved that dying is not the end of being, and that consciousness can exist independently of the brain. The field of near-death studies was born.

Over the next 30 years, researchers collected thousands of case reports of people who had had near-death experiences. Meanwhile, new technologies and techniques were helping doctors revive more and more people who, in earlier periods of history, would have almost certainly been permanently deceased. “We are now at the point where we have both the tools and the means to scientifically answer the age-old question: What happens when we die?” wrote Sam Parnia, an accomplished resuscitation specialist and one of the world’s leading experts on near-death experiences, in 2006. Parnia himself was devising an international study to test whether patients could have conscious awareness even after they were found clinically dead.

But by 2015, experiments such as Parnia’s had yielded ambiguous results, and the field of near-death studies was not much closer to understanding death than it had been when it was founded four decades earlier. That’s when Borjigin, together with several colleagues, took the first close look at the record of electrical activity in the brain of Patient One after she was taken off life support. What they discovered – in results reported for the first time last year – was almost entirely unexpected, and has the potential to rewrite our understanding of death.

“I believe what we found is only the tip of a vast iceberg,” Borjigin told me. “What’s still beneath the surface is a full account of how dying actually takes place. Because there’s something happening in there, in the brain, that makes no sense.”

For all that science has learned about the workings of life, death remains among the most intractable of mysteries. “At times I have been tempted to believe that the creator has eternally intended this department of nature to remain baffling, to prompt our curiosities and hopes and suspicions all in equal measure,” the philosopher William James wrote in 1909.

The first time that the question Borjigin began asking in 2015 was posed – about what happens to the brain during death – was a quarter of a millennium earlier. Around 1740, a French military physician reviewed the case of a famous apothecary who, after a “malign fever” and several blood-lettings, fell unconscious and thought he had travelled to the Kingdom of the Blessed. The physician speculated that the apothecary’s experience had been caused by a surge of blood to the brain. But between that early report and the mid-20th century, scientific interest in near-death experiences remained sporadic.

In 1892, the Swiss climber and geologist Albert Heim collected the first systematic accounts of near-death experiences from 30 fellow climbers who had suffered near-fatal falls. In many cases, the climbers underwent a sudden review of their entire past, heard beautiful music, and “fell in a superbly blue heaven containing roseate cloudlets”, Heim wrote. “Then consciousness was painlessly extinguished, usually at the moment of impact.” There were a few more attempts to do research in the early 20th century, but little progress was made in understanding near-death experiences scientifically. Then, in 1975, an American medical student named Raymond Moody published a book called Life After Life.


In his book, Moody distilled the reports of 150 people who had had intense, life-altering experiences in the moments surrounding a cardiac arrest. Although the reports varied, he found that they often shared one or more common features or themes. The narrative arc of the most detailed of those reports – departing the body and travelling through a long tunnel, having an out-of-body experience, encountering spirits and a being of light, one’s whole life flashing before one’s eyes, and returning to the body from some outer limit – became so canonical that the art critic Robert Hughes could refer to it years later as “the familiar kitsch of near-death experience”. Moody’s book became an international bestseller.

In 1976, the New York Times reported on the burgeoning scientific interest in “life after death” and the “emerging field of thanatology”. The following year, Moody and several fellow thanatologists founded an organisation that became the International Association for Near-Death Studies. In 1981, they printed the inaugural issue of Vital Signs, a magazine for the general reader that was largely devoted to stories of near-death experiences. The following year they began producing the field’s first peer-reviewed journal, which became the Journal of Near-Death Studies. The field was growing, and taking on the trappings of scientific respectability. Reviewing its rise in 1988, the British Journal of Psychiatry captured the field’s animating spirit: “A grand hope has been expressed that, through NDE research, new insights can be gained into the ageless mystery of human mortality and its ultimate significance, and that, for the first time, empirical perspectives on the nature of death may be achieved.”

But near-death studies was already splitting into several schools of belief, whose tensions continue to this day. One influential camp was made up of spiritualists, some of them evangelical Christians, who were convinced that near-death experiences were genuine sojourns in the land of the dead and divine. As researchers, the spiritualists’ aim was to collect as many reports of near-death experience as possible, and to proselytise society about the reality of life after death. Moody was their most important spokesman; he eventually claimed to have had multiple past lives and built a “psychomanteum” in rural Alabama where people could attempt to summon the spirits of the dead by gazing into a dimly lit mirror.

The second, and largest, faction of near-death researchers were the parapsychologists, those interested in phenomena that seemed to undermine the scientific orthodoxy that the mind could not exist independently of the brain. These researchers, who were by and large trained scientists following well established research methods, tended to believe that near-death experiences offered evidence that consciousness could persist after the death of the individual. Many of them were physicians and psychiatrists who had been deeply affected after hearing the near-death stories of patients they had treated in the ICU. Their aim was to find ways to test their theories of consciousness empirically, and to turn near-death studies into a legitimate scientific endeavour.

Finally, there emerged the smallest contingent of near-death researchers, who could be labelled the physicalists. These were scientists, many of whom studied the brain, who were committed to a strictly biological account of near-death experiences. Like dreams, the physicalists argued, near-death experiences might reveal psychological truths, but they did so through hallucinatory fictions that emerged from the workings of the body and the brain. (Indeed, many of the states reported by near-death experiencers can apparently be achieved by taking a hero’s dose of ketamine.) Their basic premise was: no functioning brain means no consciousness, and certainly no life after death. Their task, which Borjigin took up in 2015, was to discover what was happening during near-death experiences on a fundamentally physical level.

Slowly, the spiritualists left the field of research for the loftier domains of Christian talk radio, and the parapsychologists and physicalists started bringing near-death studies closer to the scientific mainstream. Between 1975, when Moody published Life After Life, and 1984, only 17 articles in the PubMed database of scientific publications mentioned near-death experiences. In the following decade, there were 62. In the most recent 10-year span, there were 221. Those articles have appeared everywhere from the Canadian Urological Association Journal to the esteemed pages of The Lancet.

Today, there is a widespread sense throughout the community of near-death researchers that we are on the verge of great discoveries. Charlotte Martial, a neuroscientist at the University of Liège in Belgium who has done some of the best physicalist work on near-death experiences, hopes we will soon develop a new understanding of the relationship between the internal experience of consciousness and its outward manifestations, for example in coma patients. “We really are in a crucial moment where we have to disentangle consciousness from responsiveness, and maybe question every state that we consider unconscious,” she told me. Parnia, the resuscitation specialist, who studies the physical processes of dying but is also sympathetic to a parapsychological theory of consciousness, has a radically different take on what we are poised to find out. “I think in 50 or 100 years time we will have discovered the entity that is consciousness,” he told me. “It will be taken for granted that it wasn’t produced by the brain, and it doesn’t die when you die.”

If the field of near-death studies is at the threshold of new discoveries about consciousness and death, it is in large part because of a revolution in our ability to resuscitate people who have suffered cardiac arrest. Lance Becker has been a leader in resuscitation science for more than 30 years. When he was a young doctor attempting to revive people through CPR in the mid-1980s, senior physicians would often step in to declare his patients dead. “At a certain point, they would just say, ‘OK, that’s enough. Let’s stop. This is unsuccessful. Time of death: 1.37pm,’” he recalled recently. “And that would be the last thing. And one of the things running through my head as a young doctor was, ‘Well, what really happened at 1.37?’”

In a medical setting, “clinical death” is said to occur at the moment the heart stops pumping blood, and the pulse stops. This is widely known as cardiac arrest. (It is different from a heart attack, in which there is a blockage in a heart that’s still pumping.) Loss of oxygen to the brain and other organs generally follows within seconds or minutes, although the complete cessation of activity in the heart and brain – which is often called “flatlining” or, in the case of the latter, “brain death” – may not occur for many minutes or even hours.

For almost all people at all times in history, cardiac arrest was basically the end of the line. That began to change in 1960, when the combination of mouth-to-mouth ventilation, chest compressions and external defibrillation known as cardiopulmonary resuscitation, or CPR, was formalised. Shortly thereafter, a massive campaign was launched to educate clinicians and the public on CPR’s basic techniques, and soon people were being revived in previously unthinkable, if still modest, numbers.

As more and more people were resuscitated, scientists learned that, even in its acute final stages, death is not a point, but a process. After cardiac arrest, blood and oxygen stop circulating through the body, cells begin to break down, and normal electrical activity in the brain gets disrupted. But the organs don’t fail irreversibly right away, and the brain doesn’t necessarily cease functioning altogether. There is often still the possibility of a return to life. In some cases, cell death can be stopped or significantly slowed, the heart can be restarted, and brain function can be restored. In other words, the process of death can be reversed.

It is no longer unheard of for people to be revived even six hours after being declared clinically dead. In 2011, Japanese doctors reported the case of a young woman who was found in a forest one morning after an overdose stopped her heart the previous night; using advanced technology to circulate blood and oxygen through her body, the doctors were able to revive her more than six hours later, and she was able to walk out of the hospital after three weeks of care. In 2019, a British woman named Audrey Schoeman who was caught in a snowstorm spent six hours in cardiac arrest before doctors brought her back to life with no evident brain damage.

“I don’t think there’s ever been a more exciting time for the field,” Becker told me. “We’re discovering new drugs, we’re discovering new devices, and we’re discovering new things about the brain.”

The brain – that’s the tricky part. In January 2021, as the Covid-19 pandemic was surging toward what would become its deadliest week on record, Netflix released a documentary series called Surviving Death. In the first episode, some of near-death studies’ most prominent parapsychologists presented the core of their arguments for why they believe near-death experiences show that consciousness exists independently of the brain. “When the heart stops, within 20 seconds or so, you get flatlining, which means no brain activity,” Bruce Greyson, an emeritus professor of psychiatry at the University of Virginia and one of the founding members of the International Association for Near-Death Studies, says in the documentary. “And yet,” he goes on to claim, “people have near-death experiences when they’ve been (quote) ‘flatlined’ for longer than that.”

That is a key tenet of the parapsychologists’ arguments: if there is consciousness without brain activity, then consciousness must dwell somewhere beyond the brain. Some of the parapsychologists speculate that it is a “non-local” force that pervades the universe, like electromagnetism. This force is received by the brain, but is not generated by it, the way a television receives a broadcast.

In order for this argument to hold, something else has to be true: near-death experiences have to happen during death, after the brain shuts down. To prove this, parapsychologists point to a number of rare but astounding cases known as “veridical” near-death experiences, in which patients seem to report details from the operating room that they might have known only if they had conscious awareness during the time that they were clinically dead. Dozens of such reports exist. One of the most famous is about a woman who apparently travelled so far outside her body that she was able to spot a shoe on a window ledge in another part of the hospital where she went into cardiac arrest; the shoe was later reportedly found by a nurse.


At the very least, Parnia and his colleagues have written, such phenomena are “inexplicable through current neuroscientific models”. Unfortunately for the parapsychologists, however, none of the reports of post-death awareness holds up to strict scientific scrutiny. “There are many claims of this kind, but in my long decades of research into out-of-body and near-death experiences I never met any convincing evidence that this is true,” Sue Blackmore, a well-known researcher into parapsychology who had her own near-death experience as a young woman in 1970, has written.

The case of the shoe, Blackmore pointed out, relied solely on the report of the nurse who claimed to have found it. That is far from the standard of proof the scientific community would require to accept a claim as radical as the idea that consciousness can travel beyond the body and exist after death. In other cases, there’s not enough evidence to prove that the experiences reported by cardiac arrest survivors happened when their brains were shut down, as opposed to in the period before or after they supposedly “flatlined”. “So far, there is no sufficiently rigorous, convincing empirical evidence that people can observe their surroundings during a near-death experience,” Charlotte Martial, the University of Liège neuroscientist, told me.

The parapsychologists tend to push back by arguing that even if each of the cases of veridical near-death experiences leaves room for scientific doubt, surely the accumulation of dozens of these reports must count for something. But that argument can be turned on its head: if there are so many genuine instances of consciousness surviving death, then why should it have so far proven impossible to catch one empirically?

Perhaps the story to be written about near-death experiences is not that they prove consciousness is radically different from what we thought it was. Instead, it is that the process of dying is far stranger than scientists ever suspected. The spiritualists and parapsychologists are right to insist that something deeply weird is happening to people when they die, but they are wrong to assume it is happening in the next life rather than this one. At least, that is the implication of what Jimo Borjigin found when she investigated the case of Patient One.

In the moments after Patient One was taken off oxygen, there was a surge of activity in her dying brain. Areas that had been nearly silent while she was on life support suddenly thrummed with high-frequency electrical signals called gamma waves. In particular, the parts of the brain that scientists consider a “hot zone” for consciousness became dramatically alive. In one section, the signals remained detectable for more than six minutes. In another, they were 11 to 12 times higher than they had been before Patient One’s ventilator was removed.

“As she died, Patient One’s brain was functioning in a kind of hyperdrive,” Borjigin told me. For about two minutes after her oxygen was cut off, there was an intense synchronisation of her brain waves, a state associated with many cognitive functions, including heightened attention and memory. The synchronisation dampened for about 18 seconds, then intensified again for more than four minutes. It faded for a minute, then came back for a third time.

In those same periods of dying, different parts of Patient One’s brain were suddenly in close communication with each other. The most intense connections started immediately after her oxygen stopped, and lasted for nearly four minutes. There was another burst of connectivity more than five minutes and 20 seconds after she was taken off life support. In particular, areas of her brain associated with processing conscious experience – areas that are active when we move through the waking world, and when we have vivid dreams – were communicating with those involved in memory formation. So were parts of the brain associated with empathy. Even as she slipped irrevocably deeper into death, something that looked astonishingly like life was taking place over several minutes in Patient One’s brain.
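To make the kind of analysis described here concrete, the sketch below shows one conventional way to quantify gamma-band power and inter-channel coupling in a multichannel EEG recording. It is a minimal illustration, not Borjigin's actual pipeline: the sampling rate, the 25-100 Hz gamma band, the synthetic channel data, and the use of Welch power spectra and magnitude-squared coherence are all assumptions chosen for the example.

```python
# Minimal sketch: gamma-band power and inter-channel coherence from EEG.
# Assumptions (not from the study): channels are 1-D NumPy arrays sampled at
# fs Hz; the 25-100 Hz band, window lengths, and test data are illustrative.
import numpy as np
from scipy.signal import welch, coherence

fs = 500  # assumed sampling rate in Hz

def gamma_band_power(signal, fs, band=(25.0, 100.0)):
    """Integrated spectral power in the gamma band for one channel."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def gamma_coherence(sig_a, sig_b, fs, band=(25.0, 100.0)):
    """Mean magnitude-squared coherence between two channels in the gamma band."""
    freqs, coh = coherence(sig_a, sig_b, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return coh[mask].mean()

# Synthetic stand-in for two EEG channels sharing a 40 Hz gamma component.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)                       # one minute of data
shared_gamma = np.sin(2 * np.pi * 40 * t)
ch1 = shared_gamma + 0.5 * rng.standard_normal(t.size)
ch2 = shared_gamma + 0.5 * rng.standard_normal(t.size)

print("gamma power, channel 1:", gamma_band_power(ch1, fs))
print("gamma coherence, ch1-ch2:", gamma_coherence(ch1, ch2, fs))
```

In a real analysis, measures like these would be computed in sliding windows across the recording, which is what would let changes in power and coupling be tracked minute by minute after life support is withdrawn.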


Those glimmers and flashes of something like life contradict the expectations of almost everyone working in the field of resuscitation science and near-death studies. The predominant belief – expressed by Greyson, the psychiatrist and co-founder of the International Association for Near-Death Studies, in the Netflix series Surviving Death – was that as soon as oxygen stops going to the brain, neurological activity falls precipitously. Although a few earlier instances of brain waves had been reported in dying human brains, nothing as detailed and complex as what occurred in Patient One had ever been detected.

Given the levels of activity and connectivity in particular regions of her dying brain, Borjigin believes it’s likely that Patient One had a profound near-death experience with many of its major features: out-of-body sensations, visions of light, feelings of joy or serenity, and moral re-evaluations of one’s life. Of course, Patient One did not recover, so no one can prove that the extraordinary happenings in her dying brain had experiential counterparts. Greyson and one of the other grandees of near-death studies, a Dutch cardiologist named Pim van Lommel, have asserted that Patient One’s brain activity can shed no light on near-death experiences because her heart hadn’t fully flatlined, but that is a self-defeating argument: there is no rigorous empirical evidence that near-death experiences occur in people whose hearts have completely stopped.

At the very least, Patient One’s brain activity – and the activity in the dying brain of another patient Borjigin studied, a 77-year-old woman known as Patient Three – seems to close the door on the argument that the brain always and nearly immediately ceases to function in a coherent manner in the moments after clinical death. “The brain, contrary to everybody’s belief, is actually super active during cardiac arrest,” Borjigin said. Death may be far more alive than we ever thought possible.

Borjigin believes that understanding the dying brain is one of the “holy grails” of neuroscience. “The brain is so resilient, the heart is so resilient, that it takes years of abuse to kill them,” she pointed out. “Why then, without oxygen, can a perfectly healthy person die within 30 minutes, irreversibly?” Although most people would take that result for granted, Borjigin thinks that, on a physical level, it actually makes little sense.

Borjigin hopes that understanding the neurophysiology of death can help us to reverse it. She already has brain activity data from dozens of deceased patients that she is waiting to analyse. But because of the paranormal stigma associated with near-death studies, she says, few research agencies want to grant her funding. “Consciousness is almost a dirty word amongst funders,” she added. “Hardcore scientists think research into it should belong to maybe theology, philosophy, but not in hardcore science. Other people ask, ‘What’s the use? The patients are gonna die anyway, so why study that process? There’s nothing you can do about it.’”

Evidence is already emerging that even total brain death may someday be reversible. In 2019, scientists at Yale University harvested the brains of pigs that had been decapitated in a commercial slaughterhouse four hours earlier. Then they perfused the brains for six hours with a special cocktail of drugs and synthetic blood. Astoundingly, some of the cells in the brains began to show metabolic activity again, and some of the synapses even began firing. The pigs’ brain scans didn’t show the widespread electrical activity that we typically associate with sentience or consciousness. But the fact that there was any activity at all suggests the frontiers of life may one day extend much, much farther into the realms of death than most scientists currently imagine.

Other serious avenues of research into near-death experience are ongoing. Martial and her colleagues at the University of Liège are working on many issues relating to near-death experiences. One is whether people with a history of trauma, or with more creative minds, tend to have such experiences at higher rates than the general population. Another is on the evolutionary biology of near-death experiences. Why, evolutionarily speaking, should we have such experiences at all? Martial and her colleagues speculate that it may be a form of the phenomenon known as thanatosis, in which creatures throughout the animal kingdom feign death to escape mortal dangers. Other researchers have proposed that the surge of electrical activity in the moments after cardiac arrest is just the final seizure of a dying brain, or have hypothesised that it’s a last-ditch attempt by the brain to restart itself, like jump-starting the engine on a car.

Meanwhile, in parts of the culture where enthusiasm is reserved not for scientific discovery in this world, but for absolution or benediction in the next, the spiritualists, along with sundry other kooks and grifters, are busily peddling their tales of the afterlife. Forget the proverbial tunnel of light: in America in particular, a pipeline of money has been discovered from death’s door, through Christian media, to the New York Times bestseller list and thence to the fawning, gullible armchairs of the nation’s daytime talk shows. First stop, paradise; next stop, Dr Oz.

But there is something that binds many of these people – the physicalists, the parapsychologists, the spiritualists – together. It is the hope that by transcending the current limits of science and of our bodies, we will achieve not a deeper understanding of death, but a longer and more profound experience of life. That, perhaps, is the real attraction of the near-death experience: it shows us what is possible not in the next world, but in this one.

ScienceDaily

Researchers developed a new method for detecting heart failure with a smartphone

The new technology, which was created at the University of Turku and developed by the company CardioSignal, uses a smartphone to analyse heart movement and detect heart failure. The study involved five organisations from Finland and the United States.

Heart failure is a condition affecting tens of millions of people worldwide, in which the heart is unable to perform its normal function of pumping blood to the body. It is a serious condition that develops as a result of a number of cardiovascular diseases and its symptoms may require repeated hospitalisation.

Heart failure is challenging to diagnose because its symptoms, such as shortness of breath, abnormal fatigue on exertion, and swelling, can be caused by a number of conditions. There is no simple test available to detect it and diagnostics relies on an examination by a doctor, blood tests, and sophisticated imaging, such as an ultrasound scan of the heart.

Gyrocardiography is a non-invasive technique for measuring cardiac vibrations on the chest. The smartphone's built-in motion sensors can detect and record these vibrations, including those that doctors cannot hear with a stethoscope. The method has been developed over the last 10 years by researchers at the University of Turku and CardioSignal.

The researchers' latest study on using smartphone motion sensors to detect heart failure was carried out at the Turku and Helsinki University Hospitals in Finland and Stanford University Hospital in the US. Approximately 1,000 people took part in the study, of whom around 200 were patients suffering from heart failure. The study compared the data provided by the motion sensors in the heart failure patients and patients without heart disease.

"The results we obtained with this new method are promising and may in the future make it easier to detect heart failure," says Cardiologist Antti Saraste, one of the two main authors of the research article and the Professor of Cardiovascular Medicine at the University of Turku, Finland.

Precise detection uncovers heart failure

The researchers found that heart failure is associated with typical changes in the motion sensor data collected by a smartphone. On the basis of this data, the researchers were able to identify the majority of patients with heart failure.

The analysis of the movements detected by the gyroscope and accelerometer is so accurate that in the future it could provide healthcare professionals with a quick and easy way to detect heart failure.
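As a rough illustration of how motion-sensor data of this kind can be processed, the sketch below band-pass filters a gyroscope trace to isolate cardiac vibrations and extracts simple beat-timing features. It is only a sketch of the general signal-processing idea, not CardioSignal's algorithm: the sampling rate, the 4-40 Hz band, the peak-detection thresholds, and the synthetic test signal are all illustrative assumptions.

```python
# Minimal sketch of gyrocardiography-style preprocessing: band-pass filter a
# smartphone motion-sensor trace and extract simple beat-to-beat features.
# Assumptions (not from the study): a single-axis gyroscope signal at 200 Hz,
# a 4-40 Hz band of interest, and envelope-peak beat detection.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 200  # assumed sensor sampling rate in Hz

def bandpass(signal, fs, low=4.0, high=40.0, order=4):
    """Zero-phase band-pass filter isolating cardiac-vibration frequencies."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def beat_features(gyro_signal, fs):
    """Return mean beat interval (s) and mean peak amplitude of the vibrations."""
    filtered = bandpass(gyro_signal, fs)
    envelope = np.abs(filtered)
    # Require peaks at least 0.4 s apart (slower than 150 bpm), an illustrative bound.
    peaks, _ = find_peaks(envelope, distance=int(0.4 * fs),
                          height=envelope.mean() + envelope.std())
    intervals = np.diff(peaks) / fs
    mean_interval = intervals.mean() if intervals.size else np.nan
    mean_amplitude = envelope[peaks].mean() if peaks.size else np.nan
    return mean_interval, mean_amplitude

# Synthetic stand-in: short 20 Hz vibration bursts about once per second, in noise.
rng = np.random.default_rng(1)
t = np.arange(0, 30, 1 / fs)
beats = np.sin(2 * np.pi * 20 * t) * (np.sin(2 * np.pi * 1 * t) > 0.95)
trace = beats + 0.2 * rng.standard_normal(t.size)

interval, amplitude = beat_features(trace, fs)
print(f"mean beat interval: {interval:.2f} s, mean peak amplitude: {amplitude:.3f}")
```

A classifier for heart failure would then be trained on many such features from patients and controls, but that modelling step is outside the scope of this sketch.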

"Primary healthcare has very limited tools for detecting heart failure. We can create completely new treatment options for remote monitoring of at-risk groups and for monitoring already diagnosed patients after hospitalisation," says CardioSignal's founding member and CEO, Cardiologist Juuso Blomster.

As in several other European countries, heart failure affects around 1-2% of the population in Finland, but it is much more common in older adults, affecting around one in ten people aged 70. Detecting heart failure is important as effective treatment can help to alleviate its symptoms. Accurate diagnosis and timely access to treatment can also reduce healthcare costs, which are driven up by emergency room visits and hospital stays, especially during exacerbations.

The joint research projects between CardioSignal and the University of Turku aim to promote people's health and reduce healthcare costs through innovation, improved disease diagnostics, and prevention of serious complications.

Story Source:

Materials provided by the University of Turku.

Journal Reference:

  • Francois Haddad, Antti Saraste, Kristiina M. Santalahti, Mikko Pänkäälä, Matti Kaisti, Riina Kandolin, Piia Simonen, Wail Nammas, Kamal Jafarian Dehkordi, Tero Koivisto, Juhani Knuuti, Kenneth W. Mahaffey, Juuso I. Blomster. Smartphone-Based Recognition of Heart Failure by Means of Microelectromechanical Sensors. JACC: Heart Failure, 2024; DOI: 10.1016/j.jchf.2024.01.022

NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Academy of Sciences (US), National Academy of Engineering (US) and Institute of Medicine (US) Panel on Scientific Responsibility and the Conduct of Research. Responsible Science: Ensuring the Integrity of the Research Process: Volume I. Washington (DC): National Academies Press (US); 1992.


Responsible Science: Ensuring the Integrity of the Research Process: Volume I.


2 Scientific Principles and Research Practices

Until the past decade, scientists, research institutions, and government agencies relied solely on a system of self-regulation based on shared ethical principles and generally accepted research practices to ensure integrity in the research process. Among the very basic principles that guide scientists, as well as many other scholars, are those expressed as respect for the integrity of knowledge, collegiality, honesty, objectivity, and openness. These principles are at work in the fundamental elements of the scientific method, such as formulating a hypothesis, designing an experiment to test the hypothesis, and collecting and interpreting data. In addition, more particular principles characteristic of specific scientific disciplines influence the methods of observation; the acquisition, storage, management, and sharing of data; the communication of scientific knowledge and information; and the training of younger scientists. 1 How these principles are applied varies considerably among the several scientific disciplines, different research organizations, and individual investigators.

The basic and particular principles that guide scientific research practices exist primarily in an unwritten code of ethics. Although some have proposed that these principles should be written down and formalized, 2 the principles and traditions of science are, for the most part, conveyed to successive generations of scientists through example, discussion, and informal education. As was pointed out in an early Academy report on responsible conduct of research in the health sciences, “a variety of informal and formal practices and procedures currently exist in the academic research environment to assure and maintain the high quality of research conduct” (IOM, 1989a, p. 18).

Physicist Richard Feynman invoked the informal approach to communicating the basic principles of science in his 1974 commencement address at the California Institute of Technology (Feynman, 1985):

[There is an] idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. . . . It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it; other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. In summary, the idea is to try to give all the information to help others to judge the value of your contribution, not just the information that leads to judgment in one particular direction or another. (pp. 311-312)

Many scholars have noted the implicit nature and informal character of the processes that often guide scientific research practices and inference. 3 Research in well-established fields of scientific knowledge, guided by commonly accepted theoretical paradigms and experimental methods, involves few disagreements about what is recognized as sound scientific evidence. Even in a revolutionary scientific field like molecular biology, students and trainees have learned the basic principles governing judgments made in such standardized procedures as cloning a new gene and determining its sequence.

In evaluating practices that guide research endeavors, it is important to consider the individual character of scientific fields. Research fields that yield highly replicable results, such as ordinary organic chemical structures, are quite different from fields such as cellular immunology, which are in a much earlier stage of development and accumulate much erroneous or uninterpretable material before the pieces fit together coherently. When a research field is too new or too fragmented to support consensual paradigms or established methods, different scientific practices can emerge.

THE NATURE OF SCIENCE

In broadest terms, scientists seek a systematic organization of knowledge about the universe and its parts. This knowledge is based on explanatory principles whose verifiable consequences can be tested by independent observers. Science encompasses a large body of evidence collected by repeated observations and experiments. Although its goal is to approach true explanations as closely as possible, its investigators claim no final or permanent explanatory truths. Science changes. It evolves. Verifiable facts always take precedence. . . .

Scientists operate within a system designed for continuous testing, where corrections and new findings are announced in refereed scientific publications. The task of systematizing and extending the understanding of the universe is advanced by eliminating disproved ideas and by formulating new tests of others until one emerges as the most probable explanation for any given observed phenomenon. This is called the scientific method.

An idea that has not yet been sufficiently tested is called a hypothesis. Different hypotheses are sometimes advanced to explain the same factual evidence. Rigor in the testing of hypotheses is the heart of science. If no verifiable tests can be formulated, the idea is called an ad hoc hypothesis—one that is not fruitful; such hypotheses fail to stimulate research and are unlikely to advance scientific knowledge.

A fruitful hypothesis may develop into a theory after substantial observational or experimental support has accumulated. When a hypothesis has survived repeated opportunities for disproof and when competing hypotheses have been eliminated as a result of failure to produce the predicted consequences, that hypothesis may become the accepted theory explaining the original facts.

Scientific theories are also predictive. They allow us to anticipate yet unknown phenomena and thus to focus research on more narrowly defined areas. If the results of testing agree with predictions from a theory, the theory is provisionally corroborated. If not, it is proved false and must be either abandoned or modified to account for the inconsistency.

Scientific theories, therefore, are accepted only provisionally. It is always possible that a theory that has withstood previous testing may eventually be disproved. But as theories survive more tests, they are regarded with higher levels of confidence. . . .

In science, then, facts are determined by observation or measurement of natural or experimental phenomena. A hypothesis is a proposed explanation of those facts. A theory is a hypothesis that has gained wide acceptance because it has survived rigorous investigation of its predictions. . . .

. . . science accommodates, indeed welcomes, new discoveries: its theories change and its activities broaden as new facts come to light or new potentials are recognized. Examples of events changing scientific thought are legion. . . . Truly scientific understanding cannot be attained or even pursued effectively when explanations not derived from or tested by the scientific method are accepted.

SOURCE: National Academy of Sciences and National Research Council (1984), pp. 8-11.
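The excerpt above hinges on comparing a theory's predictions with observation. The sketch below illustrates that comparison in its simplest statistical form; the predicted value, the measurements, and the 0.05 significance criterion are hypothetical choices for illustration and are not part of the Academy text.

```python
# Minimal sketch of checking a theory's prediction against repeated measurements.
# The predicted value, the data, and the 0.05 threshold are illustrative only.
import numpy as np
from scipy import stats

predicted_value = 9.81                                # value the theory predicts
measurements = np.array([9.79, 9.83, 9.80, 9.82, 9.78, 9.84])  # hypothetical data

# One-sample t-test: are the measurements consistent with the predicted value?
t_stat, p_value = stats.ttest_1samp(measurements, predicted_value)

if p_value < 0.05:
    print(f"p = {p_value:.3f}: data disagree with the prediction; revise or reject the theory")
else:
    print(f"p = {p_value:.3f}: data are consistent; the theory is provisionally corroborated")
```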

A well-established discipline can also experience profound changes during periods of new conceptual insights. In these moments, when scientists must cope with shifting concepts, the matter of what counts as scientific evidence can be subject to dispute. Historian Jan Sapp has described the complex interplay between theory and observation that characterizes the operation of scientific judgment in the selection of research data during revolutionary periods of paradigmatic shift (Sapp, 1990, p. 113):

What “liberties” scientists are allowed in selecting positive data and omitting conflicting or “messy” data from their reports is not defined by any timeless method. It is a matter of negotiation. It is learned, acquired socially; scientists make judgments about what fellow scientists might expect in order to be convincing. What counts as good evidence may be more or less well-defined after a new discipline or specialty is formed; however, at revolutionary stages in science, when new theories and techniques are being put forward, when standards have yet to be negotiated, scientists are less certain as to what others may require of them to be deemed competent and convincing.

Explicit statements of the values and traditions that guide research practice have evolved through the disciplines and have been given in textbooks on scientific methodologies. 4 In the past few decades, many scientific and engineering societies representing individual disciplines have also adopted codes of ethics (see Volume II of this report for examples), 5 and more recently, a few research institutions have developed guidelines for the conduct of research (see Chapter 6 ).

But the responsibilities of the research community and research institutions in assuring individual compliance with scientific principles, traditions, and codes of ethics are not well defined. In recent years, the absence of formal statements by research institutions of the principles that should guide research conducted by their members has prompted criticism that scientists and their institutions lack a clearly identifiable means to ensure the integrity of the research process.

FACTORS AFFECTING THE DEVELOPMENT OF RESEARCH PRACTICES

In all of science, but with unequal emphasis in the several disciplines, inquiry proceeds based on observation and experimentation, the exercising of informed judgment, and the development of theory. Research practices are influenced by a variety of factors, including:

  • The general norms of science;
  • The nature of particular scientific disciplines and the traditions of organizing a specific body of scientific knowledge;
  • The example of individual scientists, particularly those who hold positions of authority or respect based on scientific achievements;
  • The policies and procedures of research institutions and funding agencies; and
  • Socially determined expectations.

The first three factors have been important in the evolution of modern science. The latter two have acquired more importance in recent times.

Norms of Science

As members of a professional group, scientists share a set of common values, aspirations, training, and work experiences. 6 Scientists are distinguished from other groups by their beliefs about the kinds of relationships that should exist among them, about the obligations incurred by members of their profession, and about their role in society. A set of general norms is embedded in the methods and the disciplines of science; these norms guide individual scientists in the organization and performance of their research efforts and also provide a basis for nonscientists to understand and evaluate the performance of scientists.

But there is uncertainty about the extent to which individual scientists adhere to such norms. Most social scientists conclude that all behavior is influenced to some degree by norms that reflect socially or morally supported patterns of preference when alternative courses of action are possible. However, perfect conformity with any relevant set of norms is always lacking for a variety of reasons: the existence of competing norms, constraints, and obstacles in organizational or group settings, and personality factors. The strength of these influences, and the circumstances that may affect them, are not well understood.

In a classic statement of the importance of scientific norms, Robert Merton specified four norms as essential for the effective functioning of science: communism (by which Merton meant the communal sharing of ideas and findings), universalism, disinterestedness, and organized skepticism (Merton, 1973). Neither Merton nor other sociologists of science have provided solid empirical evidence for the degree of influence of these norms in a representative sample of scientists. In opposition to Merton, a British sociologist of science, Michael Mulkay, has argued that these norms are “ideological” covers for self-interested behavior that reflects status and politics (Mulkay, 1975). And the British physicist and sociologist of science John Ziman, in an article synthesizing critiques of Merton's formulation, has specified a set of structural factors in the bureaucratic and corporate research environment that impede the realization of that particular set of norms: the proprietary nature of research, the local importance and funding of research, the authoritarian role of the research manager, commissioned research, and the required expertise in understanding how to use modern instruments (Ziman, 1990).

It is clear that the specific influence of norms on the development of scientific research practices is simply not known and that further study of key determinants is required, both theoretically and empirically. Commonsense views, ideologies, and anecdotes will not support a conclusive appraisal.

Individual Scientific Disciplines

Science comprises individual disciplines that reflect historical developments and the organization of natural and social phenomena for study. Social scientists may have methods for recording research data that differ from the methods of biologists, and scientists who depend on complex instrumentation may have authorship practices different from those of scientists who work in small groups or carry out field studies. Even within a discipline, experimentalists engage in research practices that differ from the procedures followed by theorists.

Disciplines are the “building blocks of science,” and they “designate the theories, problems, procedures, and solutions that are prescribed, proscribed, permitted, and preferred” (Zuckerman, 1988a, p. 520). The disciplines have traditionally provided the vital connections between scientific knowledge and its social organization. Scientific societies and scientific journals, some of which have tens of thousands of members and readers, and the peer review processes used by journals and research sponsors are visible forms of the social organization of the disciplines.

The power of the disciplines to shape research practices and standards is derived from their ability to provide a common frame of reference in evaluating the significance of new discoveries and theories in science. It is the members of a discipline, for example, who determine what is “good biology” or “good physics” by examining the implications of new research results. The disciplines' abilities to influence research standards are affected by the subjective quality of peer review and the extent to which factors other than disciplinary quality may affect judgments about scientific achievements. Disciplinary departments rely primarily on informal social and professional controls to promote responsible behavior and to penalize deviant behavior. These controls, such as social ostracism, the denial of letters of support for future employment, and the withholding of research resources, can deter and penalize unprofessional behavior within research institutions. 7

Many scientific societies representing individual disciplines have adopted explicit standards in the form of codes of ethics or guidelines governing, for example, the editorial practices of their journals and other publications. 8 Many societies have also established procedures for enforcing their standards. In the past decade, the societies' codes of ethics—which historically have been exhortations to uphold high standards of professional behavior—have incorporated specific guidelines relevant to authorship practices, data management, training and mentoring, conflict of interest, reporting research findings, treatment of confidential or proprietary information, and addressing error or misconduct.

The Role of Individual Scientists and Research Teams

The methods by which individual scientists and students are socialized in the principles and traditions of science are poorly understood. The principles of science and the practices of the disciplines are transmitted by scientists in classroom settings and, perhaps more importantly, in research groups and teams. The social setting of the research group is a strong and valuable characteristic of American science and education. The dynamics of research groups can foster—or inhibit—innovation, creativity, education, and collaboration.

One author of a historical study of research groups in the chemical and biochemical sciences has observed that the laboratory director or group leader is the primary determinant of a group's practices (Fruton, 1990). Individuals in positions of authority are visible and are also influential in determining funding and other support for the career paths of their associates and students. Research directors and department chairs, by virtue of personal example, thus can reinforce, or weaken, the power of disciplinary standards and scientific norms to affect research practices.

To the extent that the behavior of senior scientists conforms with general expectations for appropriate scientific and disciplinary practice, the research system is coherent and mutually reinforcing. When the behavior of research directors or department chairs diverges from expectations for good practice, however, the expected norms of science become ambiguous, and their effects are thus weakened. Thus personal example and the perceived behavior of role models and leaders in the research community can be powerful stimuli in shaping the research practices of colleagues, associates, and students.

The role of individuals in influencing research practices can vary by research field, institution, or time. The standards and expectations for behavior exemplified by scientists who are highly regarded for their technical competence or creative insight may have greater influence than the standards of others. Individual and group behaviors may also be more influential in times of uncertainty and change in science, especially when new scientific theories, paradigms, or institutional relationships are being established.

Institutional Policies

Universities, independent institutes, and government and industrial research organizations create the environment in which research is done. As the recipients of federal funds and the institutional sponsors of research activities, administrative officers must comply with regulatory and legal requirements that accompany public support. They are required, for example, “to foster a research environment that discourages misconduct in all research and that deals forthrightly with possible misconduct” (DHHS, 1989a, p. 32451).

Academic institutions traditionally have relied on their faculty to ensure that appropriate scientific and disciplinary standards are maintained. A few universities and other research institutions have also adopted policies or guidelines to clarify the principles that their members are expected to observe in the conduct of scientific research. 9 In addition, as a result of several highly publicized incidents of misconduct in science and the subsequent enactment of governmental regulations, most major research institutions have now adopted policies and procedures for handling allegations of misconduct in science.

Institutional policies governing research practices can have a powerful effect on research practices if they are commensurate with the norms that apply to a wide spectrum of research investigators. In particular, the process of adopting and implementing strong institutional policies can sensitize the members of those institutions to the potential for ethical problems in their work. Institutional policies can establish explicit standards that institutional officers then have the power to enforce with sanctions and penalties.

Institutional policies are limited, however, in their ability to specify the details of every problematic situation, and they can weaken or displace individual professional judgment in such situations. Currently, academic institutions have very few formal policies and programs in specific areas such as authorship, communication and publication, and training and supervision.

Government Regulations and Policies

Government agencies have developed specific rules and procedures that directly affect research practices in areas such as laboratory safety, the treatment of human and animal research subjects, and the use of toxic or potentially hazardous substances in research.

But policies and procedures adopted by some government research agencies to address misconduct in science (see Chapter 5 ) represent a significant new regulatory development in the relationships between research institutions and government sponsors. The standards and criteria used to monitor institutional compliance with an increasing number of government regulations and policies affecting research practices have been a source of significant disagreement and tension within the research community.

In recent years, some government research agencies have also adopted policies and procedures for the treatment of research data and materials in their extramural research programs. For example, the National Science Foundation (NSF) has implemented a data-sharing policy through program management actions, including proposal review and award negotiations and conditions. The NSF policy acknowledges that grantee institutions will “keep principal rights to intellectual property conceived under NSF sponsorship” to encourage appropriate commercialization of the results of research (NSF, 1989b, p. 1). However, the NSF policy emphasizes “that retention of such rights does not reduce the responsibility of researchers and institutions to make results and supporting materials openly accessible” (p. 1).

In seeking to foster data sharing under federal grant awards, the government relies extensively on the scientific traditions of openness and sharing. Research agency officials have observed candidly that if the vast majority of scientists were not so committed to openness and dissemination, government policy might require more aggressive action. But the principles that have traditionally characterized scientific inquiry can be difficult to maintain. For example, NSF staff have commented, “Unless we can arrange real returns or incentives for the original investigator, either in financial support or in professional recognition, another researcher's request for sharing is likely to present itself as ‘hassle'—an unwelcome nuisance and diversion. Therefore, we should hardly be surprised if researchers display some reluctance to share in practice, however much they may declare and genuinely feel devotion to the ideal of open scientific communication” (NSF, 1989a, p. 4).

Social Attitudes and Expectations

Research scientists are part of a larger human society that has recently experienced profound changes in attitudes about ethics, morality, and accountability in business, the professions, and government. These attitudes have included greater skepticism of the authority of experts and broader expectations about the need for visible mechanisms to assure proper research practices, especially in areas that affect the public welfare. Social attitudes are also having a more direct influence on research practices as science achieves a more prominent and public role in society. In particular, concern about waste, fraud, and abuse involving government funds has emerged as a factor that now directly influences the practices of the research community.

Varying historical and conceptual perspectives also can affect expectations about standards of research practice. For example, some journalists have criticized several prominent scientists, such as Mendel, Newton, and Millikan, because they “cut corners in order to make their theories prevail” (Broad and Wade, 1982, p. 35). The criticism suggests that all scientists at all times, in all phases of their work, should be bound by identical standards.

Yet historical studies of the social context in which scientific knowledge has been attained suggest that modern criticism of early scientific work often imposes contemporary standards of objectivity and empiricism that have in fact been developed in an evolutionary manner. 10 Holton has argued, for example, that in selecting data for publication, Millikan exercised creative insight in excluding unreliable data resulting from experimental error. But such practices, by today's standards, would not be acceptable without reporting the justification for omission of recorded data.

In the early stages of pioneering studies, particularly when fundamental hypotheses are subject to change, scientists must be free to use creative judgment in deciding which data are truly significant. In such moments, the standards of proof may be quite different from those that apply at stages when confirmation and consensus are sought from peers. Scientists must consistently guard against self-deception, however, particularly when theoretical prejudices tend to overwhelm the skepticism and objectivity basic to experimental practices.

In discussing “the theory-ladenness of observations,” Sapp (1990) observed the fundamental paradox that can exist in determining the “appropriateness” of data selection in certain experiments done in the past: scientists often craft their experiments so that the scientific problems and research subjects conform closely with the theory that they expect to verify or refute. Thus, in some cases, their observations may come closer to theoretical expectations than what might be statistically proper.

This source of bias may be acceptable when it is influenced by scientific insight and judgment. But political, financial, or other sources of bias can corrupt the process of data selection. In situations where both kinds of influence exist, it is particularly important for scientists to be forthcoming about possible sources of bias in the interpretation of research results. The coupling of science to other social purposes in fostering economic growth and commercial technology requires renewed vigilance to maintain acceptable standards for disclosure and control of financial or competitive conflicts of interest and bias in the research environment. The failure to distinguish between appropriate and inappropriate sources of bias in research practices can lead to erosion of public trust in the autonomy of the research enterprise.

RESEARCH PRACTICES

In reviewing modern research practices for a range of disciplines, and analyzing factors that could affect the integrity of the research process, the panel focused on the following four areas:

  • Data handling—acquisition, management, and storage;
  • Communication and publication;
  • Correction of errors; and
  • Research training and mentorship.

Commonly understood practices operate in each area to promote responsible research conduct; nevertheless, some questionable research practices also occur. Some research institutions, scientific societies, and journals have established policies to discourage questionable practices, but there is not yet a consensus on how to treat violations of these policies. 11 Furthermore, there is concern that some questionable practices may be encouraged or stimulated by other institutional factors. For example, promotion or appointment policies that stress quantity rather than the quality of publications as a measure of productivity could contribute to questionable practices.

Data Handling

Acquisition and management.

Scientific experiments and measurements are transformed into research data. The term “research data” applies to many different forms of scientific information, including raw numbers and field notes, machine tapes and notebooks, edited and categorized observations, interpretations and analyses, derived reagents and vectors, and tables, charts, slides, and photographs.

Research data are the basis for reporting discoveries and experimental results. Scientists traditionally describe the methods used for an experiment, along with appropriate calibrations, instrument types, the number of repeated measurements, and particular conditions that may have led to the omission of some data in the reported version. Standard procedures, innovations for particular purposes, and judgments concerning the data are also reported. The general standard of practice is to provide information that is sufficiently complete so that another scientist can repeat or extend the experiment.

When a scientist communicates a set of results and a related piece of theory or interpretation in any form (at a meeting, in a journal article, or in a book), it is assumed that the research has been conducted as reported. It is a violation of the most fundamental aspect of the scientific research process to set forth measurements that have not, in fact, been performed (fabrication) or to ignore or change relevant data that contradict the reported findings (falsification).

On occasion what is actually proper research practice may be confused with misconduct in science. Thus, for example, applying scientific judgment to refine data and to remove spurious results places special responsibility on the researcher to avoid misrepresentation of findings. Responsible practice requires that scientists disclose the basis for omitting or modifying data in their analyses of research results, especially when such omissions or modifications could alter the interpretation or significance of their work.
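A minimal sketch of how this disclosure practice might look in an analysis script follows. The median-absolute-deviation screening rule, the data values, and the log format are illustrative assumptions, not a prescribed standard; the point is that excluded measurements are recorded with a stated reason rather than silently dropped.

```python
# Minimal sketch: when spurious measurements are removed before analysis, keep a
# log of which points were excluded and why, so the omission can be reported
# alongside the results. The 3-MAD rule and example values are illustrative only.
import numpy as np

def screen_measurements(values, label, n_mads=3.0):
    """Split measurements into retained and excluded sets, logging each exclusion."""
    values = np.asarray(values, dtype=float)
    median = np.median(values)
    mad = np.median(np.abs(values - median)) or 1e-12  # avoid division-style degeneracy
    keep = np.abs(values - median) <= n_mads * mad
    exclusion_log = [
        {"dataset": label, "index": int(i), "value": float(v),
         "reason": f"more than {n_mads} MADs from median ({median:.3g})"}
        for i, v in enumerate(values) if not keep[i]
    ]
    return values[keep], exclusion_log

retained, log = screen_measurements([9.8, 9.9, 10.1, 10.0, 42.0], "trial-A")
print("retained:", retained)
for entry in log:          # this log accompanies the reported analysis
    print("excluded:", entry)
```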

In the last decade, the methods by which research scientists handle, store, and provide access to research data have received increased scrutiny, owing to conflicts over ownership, such as those described by Nelkin (1984); advances in the methods and technologies that are used to collect, retain, and share data; and the costs of data storage. More specific concerns have involved the profitability associated with the patenting of science-based results in some fields and the need to verify independently the accuracy of research results used in public or private decision making. In resolving competing claims, the interests of individual scientists and research institutions may not always coincide: researchers may be willing to exchange scientific data of possible economic significance without regard for financial or institutional implications, whereas their institutions may wish to establish intellectual property rights and obligations prior to any disclosure.

The general norms of science emphasize the principle of openness. Scientists are generally expected to exchange research data as well as unique research materials that are essential to the replication or extension of reported findings. The 1985 report Sharing Research Data concluded that the general principle of data sharing is widely accepted, especially in the behavioral and social sciences (NRC, 1985). The report catalogued the benefits of data sharing, including maintaining the integrity of the research process by providing independent opportunities for verification, refutation, or refinement of original results and data; promoting new research and the development and testing of new theories; and encouraging appropriate use of empirical data in policy formulation and evaluation. The same report examined obstacles to data sharing, which include the criticism or competition that might be stimulated by data sharing; technical barriers that may impede the exchange of computer-readable data; lack of documentation of data sets; and the considerable costs of documentation, duplication, and transfer of data.

The exchange of research data and reagents is ideally governed by principles of collegiality and reciprocity: scientists often distribute reagents with the hope that the recipient will reciprocate in the future, and some give materials out freely with no stipulations attached. 12 Scientists who repeatedly or flagrantly deviate from the tradition of sharing become known to their peers and may suffer subtle forms of professional isolation. Such cases may be well known to senior research investigators, but they are not well documented.

Some scientists may share materials as part of a collaborative agreement in exchange for co-authorship on resulting publications. Some donors stipulate that the shared materials are not to be used for applications already being pursued by the donor's laboratory. Other stipulations include that the material not be passed on to third parties without prior authorization, that the material not be used for proprietary research, or that the donor receive prepublication copies of research publications derived from the material. In some instances, so-called materials transfer agreements are executed to specify the responsibilities of donor and recipient. As more academic research is being supported under proprietary agreements, researchers and institutions are experiencing the effects of these arrangements on research practices.

Governmental support for research studies may raise fundamental questions of ownership and rights of control, particularly when data are subsequently used in proprietary efforts, public policy decisions, or litigation. Some federal research agencies have adopted policies for data sharing to mitigate conflicts over issues of ownership and access (NIH, 1987; NSF, 1989b).

Many research investigators store primary data in the laboratories in which the data were initially derived, generally as electronic records or data sheets in laboratory notebooks. For most academic laboratories, local customary practice governs the storage (or discarding) of research data. Formal rules or guidelines concerning their disposition are rare.

Many laboratories customarily store primary data for a set period (often 3 to 5 years) after they are initially collected. Data that support publications are usually retained for a longer period than are those tangential to reported results. Some research laboratories serve as the proprietor of data and data books that are under the stewardship of the principal investigator. Others maintain that it is the responsibility of the individuals who collected the data to retain proprietorship, even if they leave the laboratory.
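Purely as an illustration of how such a customary retention window might be operationalized (the report itself prescribes no tooling), the sketch below flags data files older than a lab's chosen retention period. The directory path and the five-year figure are assumptions made for the example.

```python
# Illustrative sketch only: flag data files older than a lab's chosen
# retention window (the 3 to 5 years mentioned above is custom, not a rule).
# The directory path and retention length are hypothetical.
import time
from pathlib import Path

RETENTION_YEARS = 5  # assumed local policy
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def files_past_retention(data_root, retention_years=RETENTION_YEARS):
    """Yield (path, age in years) for files older than the retention window."""
    now = time.time()
    cutoff = now - retention_years * SECONDS_PER_YEAR
    for path in Path(data_root).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path, round((now - path.stat().st_mtime) / SECONDS_PER_YEAR, 1)

if __name__ == "__main__":
    for path, age in files_past_retention("./lab_data"):
        print(f"{path}: {age} years old; review before archiving or discarding")
```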

Concerns about misconduct in science have raised questions about the roles of research investigators and of institutions in maintaining and providing access to primary data. In some cases of alleged misconduct, the inability or unwillingness of an investigator to provide primary data or witnesses to support published reports sometimes has constituted a presumption that the experiments were not conducted as reported. 13 Furthermore, there is disagreement about the responsibilities of investigators to provide access to raw data, particularly when the reported results have been challenged by others. Many scientists believe that access should be restricted to peers and colleagues, usually following publication of research results, to reduce external demands on the time of the investigator. Others have suggested that raw data supporting research reports should be accessible to any critic or competitor, at any time, especially if the research is conducted with public funds. This topic, in particular, could benefit from further research and systematic discussion to clarify the rights and responsibilities of research investigators, institutions, and sponsors.

Institutional policies have been developed to guide data storage practices in some fields, often stimulated by desires to support the patenting of scientific results and to provide documentation for resolving disputes over patent claims. Laboratories concerned with patents usually have very strict rules concerning data storage and note keeping, often requiring that notes be recorded in an indelible form and be countersigned by an authorized person each day. A few universities have also considered the creation of central storage repositories for all primary data collected by their research investigators. Some government research institutions and industrial research centers maintain such repositories to safeguard the record of research developments for scientific, historical, proprietary, and national security interests.

In the academic environment, however, centralized research records raise complex problems of ownership, control, and access. Centralized data storage is costly in terms of money and space, and it presents logistical problems of cataloguing and retrieving data. There have been suggestions that some types of scientific data should be incorporated into centralized computerized data banks, a portion of which could be subject to periodic auditing or certification. 14 But much investigator-initiated research is not suitable for random data audits because of the exploratory nature of basic or discovery research. 15

Some scientific journals now require that full data for research papers be deposited in a centralized data bank before final publication. Policies and practices differ, but in some fields support is growing for compulsory deposit to enhance researchers' access to supporting data.

Issues Related to Advances in Information Technology

Advances in electronic and other information technologies have raised new questions about the customs and practices that influence the storage, ownership, and exchange of electronic data and software. A number of special issues, not addressed by the panel, are associated with computer modeling, simulation, and other approaches that are becoming more prevalent in the research environment. Computer technology can enhance research collaboration; it can also create new impediments to data sharing resulting from increased costs, the need for specialized equipment, or liabilities or uncertainties about responsibilities for faulty data, software, or computer-generated models.

Advances in computer technology may assist in maintaining and preserving accurate records of research data. Such records could help resolve questions about the timing or accuracy of specific research findings, especially when a principal investigator is not available or is uncooperative in responding to such questions. In principle, properly managed information technologies, utilizing advances in nonerasable optical disk systems, might reinforce openness in scientific research and make primary data more transparent to collaborators and research managers. For example, the so-called WORM (write once, read many) systems provide a high-density digital storage medium that supplies an ineradicable audit trail and historical record for all entered information (Haas, 1991).
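The WORM systems just described are hardware; as a software analogy only (not something the report proposes), the append-only, tamper-evident character of such an audit trail can be illustrated by a log whose entries are chained together with cryptographic hashes. The file name and record fields in this sketch are assumptions made for the example.

```python
# Software analogy to a tamper-evident audit trail: an append-only log whose
# entries are chained by hashes, so editing an earlier record breaks the chain.
# The file name and record fields are hypothetical.
import hashlib
import json
import time

LOG_FILE = "lab_audit.log"  # hypothetical file name

def append_entry(description: str, log_file: str = LOG_FILE) -> str:
    """Append a record whose hash chains to the previous record."""
    prev_hash = "0" * 64
    try:
        with open(log_file) as f:
            lines = f.read().splitlines()
            if lines:
                prev_hash = json.loads(lines[-1])["hash"]
    except FileNotFoundError:
        pass
    record = {"time": time.time(), "description": description, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["hash"]

def verify_chain(log_file: str = LOG_FILE) -> bool:
    """Recompute every hash; any alteration of an earlier entry is detected."""
    prev_hash = "0" * 64
    with open(log_file) as f:
        for line in f:
            record = json.loads(line)
            stored = record.pop("hash")
            recomputed = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            if record["prev"] != prev_hash or recomputed != stored:
                return False
            prev_hash = stored
    return True
```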

Advances in information technologies could thus provide an important benefit to research institutions that wish to emphasize greater access to and storage of primary research data. But the development of centralized information systems in the academic research environment raises difficult issues of ownership, control, and principle that reflect the decentralized character of university governance. Such systems are also a source of additional research expense, often borne by individual investigators. Moreover, if centralized systems are perceived by scientists as an inappropriate or ineffective form of management or oversight of individual research groups, they simply may not work in an academic environment.

Communication and Publication

Scientists communicate research results by a variety of formal and informal means. In earlier times, new findings and interpretations were communicated by letter, personal meeting, and publication. Today, computer networks and facsimile machines have supplemented letters and telephones in facilitating rapid exchange of results. Scientific meetings routinely include poster sessions and press conferences as well as formal presentations. Although research publications continue to document research findings, the appearance of electronic publications and other information technologies heralds change. In addition, incidents of plagiarism, the increasing number of authors per article in selected fields, and the methods by which publications are assessed in determining appointments and promotions have all increased concerns about the traditions and practices that have guided communication and publication.

Journal publication, traditionally an important means of sharing information and perspectives among scientists, is also a principal means of establishing a record of achievement in science. Evaluation of the accomplishments of individual scientists often involves not only the numbers of articles that have resulted from a selected research effort, but also the particular journals in which the articles have appeared. Journal submission dates are often important in establishing priority and intellectual property claims.

Authorship of original research reports is an important indicator of accomplishment, priority, and prestige within the scientific community. Questions of authorship in science are intimately connected with issues of credit and responsibility. Authorship practices are guided by disciplinary traditions, customary practices within research groups, and professional and journal standards and policies. 16 There is general acceptance of the principle that each named author has made a significant intellectual contribution to the paper, even though there remains substantial disagreement over the types of contributions that are judged to be significant.

A general rule is that an author must have participated sufficiently in the work to take responsibility for its content and vouch for its validity. Some journals have adopted more specific guidelines, suggesting that credit for authorship be contingent on substantial participation in one or more of the following categories: (1) conception and design of the experiment, (2) execution of the experiment and collection and storage of the supporting data, (3) analysis and interpretation of the primary data, and (4) preparation and revision of the manuscript. The extent of participation in these four activities required for authorship varies across journals, disciplines, and research groups. 17
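As an illustration only, the general rule and the four categories above can be read as a simple checklist; the sketch below encodes that reading. The one-category threshold and the field names are assumptions, since, as noted, journals and disciplines set different requirements.

```python
# Illustrative checklist for the authorship categories listed above.
# The one-category threshold and field names are assumptions; journals
# and research groups set their own requirements.
from dataclasses import dataclass

@dataclass
class Contribution:
    name: str
    conception_and_design: bool = False
    execution_and_data_collection: bool = False
    analysis_and_interpretation: bool = False
    manuscript_preparation: bool = False
    can_vouch_for_content: bool = False  # the general rule stated above

def qualifies_for_authorship(c: Contribution, min_categories: int = 1) -> bool:
    categories = [
        c.conception_and_design,
        c.execution_and_data_collection,
        c.analysis_and_interpretation,
        c.manuscript_preparation,
    ]
    return c.can_vouch_for_content and sum(categories) >= min_categories

# A contributor who only supplied a reagent (none of the four categories,
# per note 17 below) would not qualify under this sketch.
reagent_donor = Contribution(name="R. Donor", can_vouch_for_content=True)
print(qualifies_for_authorship(reagent_donor))  # False
```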

“Honorary,” “gift,” or other forms of noncontributing authorship are problems with several dimensions. 18 Honorary authors reap an inflated list of publications incommensurate with their scientific contributions (Zen, 1988). Some scientists have requested or been given authorship as a form of recognition of their status or influence rather than their intellectual contribution. Some research leaders have a custom of including their own names in any paper issuing from their laboratory, although this practice is increasingly discouraged. Some students or junior staff encourage such “gift authorship” because they feel that the inclusion of prestigious names on their papers increases the chance of publication in well-known journals. In some cases, noncontributing authors have been listed without their consent, or even without their being told. In response to these practices, some journals now require all named authors to sign the letter that accompanies submission of the original article, to ensure that no author is named without consent.

“Specialized” authorship is another issue that has received increasing attention. In these cases, a co-author may claim responsibility for a specialized portion of the paper and may not even see or be able to defend the paper as a whole. 19 “Specialized” authorship may also result from demands that co-authorship be given as a condition of sharing a unique research reagent or selected data that do not constitute a major contribution—demands that many scientists believe are inappropriate. “Specialized” authorship may be appropriate in cross-disciplinary collaborations, in which each participant has made an important contribution that deserves recognition. However, the risks associated with the inabilities of co-authors to vouch for the integrity of an entire paper are great; scientists may unwittingly become associated with a discredited publication.

Another problem of lesser importance, except to the scientists involved, is the order of authors listed on a paper. The meaning of author order varies among and within disciplines. For example, in physics the ordering of authors is frequently alphabetical, whereas in the social sciences and other fields, the ordering reflects a descending order of contribution to the described research. Another practice, common in biology, is to list the senior author last.

Appropriate recognition for the contributions of junior investigators, postdoctoral fellows, and graduate students is sometimes a source of discontent and unease in the contemporary research environment. Junior researchers have raised concerns about treatment of their contributions when research papers are prepared and submitted, particularly if they are attempting to secure promotions or independent research funding or if they have left the original project. In some cases, well-meaning senior scientists may grant junior colleagues undeserved authorship or placement as a means of enhancing the junior colleague's reputation. In others, significant contributions may not receive appropriate recognition.

Authorship practices are further complicated by large-scale projects, especially those that involve specialized contributions. Mission teams for space probes, oceanographic expeditions, and projects in high-energy physics, for example, all involve large numbers of senior scientists who depend on the long-term functioning of complex equipment. Some questions about communication and publication that arise from large science projects such as the Superconducting Super Collider include: Who decides when an experiment is ready to be published? How is the spokesperson for the experiment determined? Who determines who can give talks on the experiment? How should credit for technical or hardware contributions be acknowledged?

Apart from plagiarism, problems of authorship and credit allocation usually do not involve misconduct in science. Although some forms of “gift authorship,” in which a designated author made no identifiable contribution to a paper, may be viewed as instances of falsification, authorship disputes more commonly involve unresolved differences of judgment and style. Many research groups have found that the best method of resolving authorship questions is to agree on a designation of authors at the outset of the project. The negotiation and decision process provides initial recognition of each member's effort, and it may prevent misunderstandings that can arise during the course of the project when individuals may be in transition to new efforts or may become preoccupied with other matters.

Plagiarism. Plagiarism is using the ideas or words of another person without giving appropriate credit. Plagiarism includes the unacknowledged use of text and ideas from published work, as well as the misuse of privileged information obtained through confidential review of research proposals and manuscripts.

As described in Honor in Science, plagiarism can take many forms: at one extreme is the exact replication of another's writing without appropriate attribution (Sigma Xi, 1986). At the other is the more subtle “borrowing” of ideas, terms, or paraphrases, as described by Martin et al., “so that the result is a mosaic of other people's ideas and words, the writer's sole contribution being the cement to hold the pieces together.” 20 The importance of recognition for one's intellectual abilities in science demands high standards of accuracy and diligence in ensuring appropriate recognition for the work of others.
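As a rough illustration of the "exact replication" end of this spectrum (and nothing more, since plagiarism review ultimately requires judgment about ideas as well as words), overlapping word n-grams between two texts can be counted. The choice of six-word n-grams in the sketch below is an arbitrary assumption.

```python
# Crude illustration of detecting verbatim reuse via shared word n-grams.
# This is not a plagiarism detector; the n-gram length is an arbitrary choice.
import re

def word_ngrams(text: str, n: int = 6) -> set:
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_ngram_fraction(candidate: str, source: str, n: int = 6) -> float:
    """Fraction of the candidate's n-grams that also appear in the source."""
    cand = word_ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & word_ngrams(source, n)) / len(cand)

# A high fraction flags long verbatim passages that deserve a closer look.
```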

The misuse of privileged information may be less clear-cut because it does not involve published work. But the general principles of the importance of giving credit to the accomplishments of others are the same. The use of ideas or information obtained from peer review is not acceptable because the reviewer is in a privileged position. Some organizations, such as the American Chemical Society, have adopted policies to address these concerns (ACS, 1986).

Additional Concerns. Other problems related to authorship include overspecialization, overemphasis on short-term projects, and the organization of research communication around the “least publishable unit.” In a research system that rewards quantity at the expense of quality and favors speed over attention to detail (the effects of “publish or perish”), scientists who wait until their research data are complete before releasing them for publication may be at a disadvantage. Some institutions, such as Harvard Medical School, have responded to these problems by limiting the number of publications reviewed for promotion. Others have placed greater emphasis on major contributions as the basis for evaluating research productivity.

As gatekeepers of scientific journals, editors are expected to use good judgment and fairness in selecting papers for publication. Although editors cannot be held responsible for the errors or inaccuracies of papers that may appear in their journals, editors have obligations to consider criticism and evidence that might contradict the claims of an author and to facilitate publication of critical letters, errata, or retractions. 21 Some institutions, including the National Library of Medicine and professional societies that represent editors of scientific journals, are exploring the development of standards relevant to these obligations (Bailar et al., 1990).

Should questions be raised about the integrity of a published work, the editor may request an author's institution to address the matter. Editors often request written assurances that research reported conforms to all appropriate guidelines involving human or animal subjects, materials of human origin, or recombinant DNA.

In theory, editors set standards of authorship for their journals. In practice, scientists in the specialty do. Editors may specify the terms of acknowledgment of contributors who fall short of authorship status, and make decisions regarding appropriate forms of disclosure of sources of bias or other potential conflicts of interest related to published articles. For example, the New England Journal of Medicine has established a category of prohibited contributions from authors engaged in for-profit ventures: the journal will not allow such persons to prepare review articles or editorial commentaries for publication. Editors can clarify and insist on the confidentiality of review and take appropriate actions against reviewers who violate it. Journals also may require or encourage their authors to deposit reagents and sequence and crystallographic data into appropriate databases or storage facilities. 22

Peer Review

Peer review is the process by which editors and journals seek to be advised by knowledgeable colleagues about the quality and suitability of a manuscript for publication in a journal. Peer review is also used by funding agencies to seek advice concerning the quality and promise of proposals for research support. The proliferation of research journals and the rewards associated with publication and with obtaining research grants have put substantial stress on the peer review system. Reviewers for journals or research agencies receive privileged information and must exert great care to avoid sharing such information with colleagues or allowing it to enter their own work prematurely.

Although the system of peer review is generally effective, it has been suggested that the quality of refereeing has declined, that self-interest has crept into the review process, and that some journal editors and reviewers exert inappropriate influence on the type of work they deem publishable. 23

Correction of Errors

At some level, all scientific reports, even those that mark profound advances, contain errors of fact or interpretation. In part, such errors reflect uncertainties intrinsic to the research process itself—a hypothesis is formulated, an experimental test is devised, and based on the interpretation of the results, the hypothesis is refined, revised, or discarded. Each step in this cycle is subject to error. For any given report, “correctness” is limited by the following:

The precision and accuracy of the measurements. These in turn depend on available technology, the use of proper statistical and analytical methods, and the skills of the investigator.

Generality of the experimental system and approach. Studies must often be carried out using “model systems.” In biology, for example, a given phenomenon is examined in only one or a few among millions of organismal species.

Experimental design—a product of the background and expertise of the investigator.

Interpretation and speculation regarding the significance of the findings—judgments that depend on expert knowledge, experience, and the insightfulness and boldness of the investigator.
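As a minimal numerical illustration of the first factor listed above, the precision of repeated measurements is commonly summarized by their mean, standard deviation, and standard error; the readings in the sketch below are invented solely for the example.

```python
# Minimal illustration of measurement precision: repeated readings, their
# mean, sample standard deviation, and standard error of the mean.
# The numbers are invented for the example.
import math
import statistics

readings = [9.81, 9.79, 9.83, 9.80, 9.82]  # hypothetical repeated measurements

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)          # sample standard deviation
stderr = stdev / math.sqrt(len(readings))   # standard error of the mean

print(f"mean = {mean:.3f}, s = {stdev:.3f}, standard error = {stderr:.3f}")
# Reporting mean +/- standard error makes the precision of the result explicit.
```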

Viewed in this context, errors are an integral aspect of progress in attaining scientific knowledge. They are consequences of the fact that scientists seek fundamental truths about natural processes of vast complexity. In the best experimental systems, it is common that relatively few variables have been identified and that even fewer can be controlled experimentally. Even when important variables are accounted for, the interpretation of the experimental results may be incorrect and may lead to an erroneous conclusion. Such conclusions are sometimes overturned by the original investigator or by others when new insights from another study prompt a reexamination of older reported data. In addition, however, erroneous information can also reach the scientific literature as a consequence of misconduct in science.

What becomes of these errors or incorrect interpretations? Much has been made of the concept that science is “self-correcting”—that errors, whether honest or products of misconduct, will be exposed in future experiments because scientific truth is founded on the principle that results must be verifiable and reproducible. This implies that errors will generally not long confound the direction of thinking or experimentation in actively pursued areas of research. Clearly, published experiments are not routinely replicated precisely by independent investigators. However, each experiment is based on conclusions from prior studies; repeated failure of the experiment eventually calls into question those conclusions and leads to reevaluation of the measurements, generality, design, and interpretation of the earlier work.

Thus publication of a scientific report provides an opportunity for the community at large to critique and build on the substance of the report, and serves as one stage at which errors and misinterpretations can be detected and corrected. Each new finding is considered by the community in light of what is already known about the system investigated, and disagreements with established measurements and interpretations must be justified. For example, a particular interpretation of an electrical measurement of a material may implicitly predict the results of an optical experiment. If the reported optical results are in disagreement with the electrical interpretation, then the latter is unlikely to be correct—even though the measurements themselves were carefully and correctly performed. It is also possible, however, that the contradictory results are themselves incorrect, and this possibility will also be evaluated by the scientists working in the field. It is by this process of examination and reexamination that science advances.

The research endeavor can therefore be viewed as a two-tiered process: first, hypotheses are formulated, tested, and modified; second, results and conclusions are reevaluated in the course of additional study. In fact, the two tiers are interrelated, and the goals and traditions of science mandate major responsibilities in both areas for individual investigators. Importantly, the principle of self-correction does not diminish the responsibilities of the investigator in either area. The investigator has a fundamental responsibility to ensure that the reported results can be replicated in his or her laboratory. The scientific community in general adheres strongly to this principle, but practical constraints exist as a result of the availability of specialized instrumentation, research materials, and expert personnel. Other forces, such as competition, commercial interest, funding trends and availability, or pressure to publish may also erode the role of replication as a mechanism for fostering integrity in the research process. The panel is unaware of any quantitative studies of this issue.

The process of reevaluating prior findings is closely related to the formulation and testing of hypotheses. 24 Indeed, within an individual laboratory, the formulation/testing phase and the reevaluation phase are ideally ongoing interactive processes. In that setting, the precise replication of a prior result commonly serves as a crucial control in attempts to extend the original findings. It is not unusual that experimental flaws or errors of interpretation are revealed as the scope of an investigation deepens and broadens.

If new findings or significant questions emerge in the course of a reevaluation that affect the claims of a published report, the investigator is obliged to make public a correction of the erroneous result or to indicate the nature of the questions. Occasionally, this takes the form of a formal published retraction, especially in situations in which a central claim is found to be fundamentally incorrect or irreproducible. More commonly, a somewhat different version of the original experiment, or a revised interpretation of the original result, is published as part of a subsequent report that extends in other ways the initial work. Some concerns have been raised that such “revisions” can sometimes be so subtle and obscure as to be unrecognizable. Such behavior is, at best, a questionable research practice. Clearly, each scientist has a responsibility to foster an environment that encourages and demands rigorous evaluation and reevaluation of every key finding.

Much greater complexity is encountered when an investigator in one research group is unable to confirm the published findings of another. In such situations, precise replication of the original result is commonly not attempted because of the lack of identical reagents, differences in experimental protocols, diverse experimental goals, or differences in personnel. Under these circumstances, attempts to obtain the published result may simply be dropped if the central claim of the original study is not the major focus of the new study. Alternatively, the inability to obtain the original finding may be documented in a paper by the second investigator as part of a challenge to the original claim. In any case, such questions about a published finding usually provoke the initial investigator to attempt to reconfirm the original result, or to pursue additional studies that support and extend the original findings.

In accordance with established principles of science, scientists have the responsibility to replicate and reconfirm their results as a normal part of the research process. The cycles of theoretical and methodological formulation, testing, and reevaluation, both within and between laboratories, produce an ongoing process of revision and refinement that corrects errors and strengthens the fabric of research.

Research Training and Mentorship

The panel defined a mentor as that person directly responsible for the professional development of a research trainee. 25 Professional development includes both technical training, such as instruction in the methods of scientific research (e.g., research design, instrument use, and selection of research questions and data), and socialization in basic research practices (e.g., authorship practices and sharing of research data).

Positive Aspects of Mentorship

The relationship of the mentor and research trainee is usually characterized by extraordinary mutual commitment and personal involvement. A mentor, as a research advisor, is generally expected to supervise the work of the trainee and ensure that the trainee's research is completed in a sound, honest, and timely manner. The ideal mentor challenges the trainee, spurs the trainee to higher scientific achievement, and helps socialize the trainee into the community of scientists by demonstrating and discussing methods and practices that are not well understood.

Research mentors thus have complex and diverse roles. Many individuals excel in providing guidance and instruction as well as personal support, and some mentors are resourceful in providing funds and securing professional opportunities for their trainees. The mentoring relationship may also combine elements of other relationships, such as parenting, coaching, and guildmastering. One mentor has written that his “research group is like an extended family or small tribe, dependent on one another, but led by the mentor, who acts as their consultant, critic, judge, advisor, and scientific father” (Cram, 1989, p. 1). Another mentor described as “orphaned graduate students” trainees who had lost their mentors to death, job changes, or in other ways (Sindermann, 1987). Many students come to respect and admire their mentors, who act as role models for their younger colleagues.

Difficulties Associated with Mentorship

However, the mentoring relationship does not always function properly or even satisfactorily. Almost no literature exists that evaluates which problems are idiosyncratic and which are systemic. However, it is clear that traditional practices in the area of mentorship and training are under stress. In some research fields, for example, concerns are being raised about how the increasing size and diverse composition of research groups affect the quality of the relationship between trainee and mentor. As the size of research laboratories expands, the quality of the training environment is at risk (CGS, 1990a).

Large laboratories may provide valuable instrumentation and access to unique research skills and resources as well as an opportunity to work in pioneering fields of science. But as only one contribution to the efforts of a large research team, a graduate student's work may become highly specialized, leading to a narrowing of experience and greater dependency on senior personnel; in a period when the availability of funding may limit research opportunities, laboratory heads may find it necessary to balance research decisions for the good of the team against the individual educational interests of each trainee. Moreover, the demands of obtaining sufficient resources to maintain a laboratory in the contemporary research environment often separate faculty from their trainees. When laboratory heads fail to participate in the everyday workings of the laboratory—even for the most beneficent of reasons, such as finding funds to support young investigators—their inattention may harm their trainees' education.

Although the size of a research group can influence the quality of mentorship, the more important issues are the level of supervision received by trainees, the degree of independence that is appropriate for the trainees' experience and interests, and the allocation of credit for achievements that are accomplished by groups composed of individuals with different status. Certain studies involving large groups of 40 to 100 or more are commonly carried out by collaborative or hierarchical arrangements under a single investigator. These factors may affect the ability of research mentors to transmit the methods and ethical principles according to which research should be conducted.

Problems also arise when faculty members are not directly rewarded for their graduate teaching or training skills. Although faculty may receive indirect rewards from the contributions of well-trained graduate students to their own research as well as the satisfaction of seeing their students excelling elsewhere, these rewards may not be sufficiently significant in tenure or promotion decisions. When institutional policies fail to recognize and reward the value of good teaching and mentorship, the pressures to maintain stable funding for research teams in a competitive environment can overwhelm the time allocated to teaching and mentorship by a single investigator.

The increasing duration of the training period in many research fields is another source of concern, particularly when it prolongs the dependent status of the junior investigator. The formal period of graduate and postdoctoral training varies considerably among fields of study. In 1988, the median time to the doctorate from the baccalaureate degree was 6.5 years (NRC, 1989). The disciplinary median varied: 5.5 years in chemistry; 5.9 years in engineering; 7.1 years in health sciences and in earth, atmospheric, and marine sciences; and 9.0 years in anthropology and sociology. 26

Students, research associates, and faculty are currently raising various questions about the rights and obligations of trainees. Sexist behavior by some research directors and other senior scientists is a particular source of concern. Another significant concern is that research trainees may be subject to exploitation because of their subordinate status in the research laboratory, particularly when their income, access to research resources, and future recommendations are dependent on the goodwill of the mentor. Foreign students and postdoctoral fellows may be especially vulnerable, since their immigration status often depends on continuation of a research relationship with the selected mentor.

Inequalities between mentor and trainee can exacerbate ordinary conflicts such as the distribution of credit or blame for research error (NAS, 1989). When conflicts arise, the expectations and assumptions that govern authorship practices, ownership of intellectual property, and the giving of references and recommendations are exposed for professional—and even legal—scrutiny (Nelkin, 1984; Weil and Snapper, 1989).

Making Mentorship Better

Ideally, mentors and trainees should select each other with an eye toward scientific merit, intellectual and personal compatibility, and other relevant factors. But this situation operates only under conditions of freely available information and unconstrained choice—conditions that usually do not exist in academic research groups. The trainee may choose to work with a faculty member based solely on criteria of patronage, perceived influence, or ability to provide financial support.

Good mentors may be well known and highly regarded within their research communities and institutions. Unfortunately, individuals who exploit the mentorship relationship may be less visible. Poor mentorship practices may be self-correcting over time, if students can detect and avoid research groups characterized by disturbing practices. However, individual trainees who experience abusive relationships with a mentor may discover only too late that the practices that constitute the abuse were well known but were not disclosed to new initiates.

It is common practice for a graduate student to be supervised not only by an individual mentor but also by a committee that represents the graduate department or research field of the student. However, departmental oversight is rare for the postdoctoral research fellow. In order to foster good mentorship practices for all research trainees, many groups and institutions have taken steps to clarify the nature of individual and institutional responsibilities in the mentor–trainee relationship. 27

FINDINGS AND CONCLUSIONS

The self-regulatory system that characterizes the research process has evolved from a diverse set of principles, traditions, standards, and customs transmitted from senior scientists, research directors, and department chairs to younger scientists by example, discussion, and informal education. The principles of honesty, collegiality, respect for others, and commitment to dissemination, critical evaluation, and rigorous training are characteristic of all the sciences. Methods and techniques of experimentation, styles of communicating findings, the relationship between theory and experimentation, and laboratory groupings for research and for training vary with the particular scientific disciplines. Within those disciplines, practices combine the general with the specific. Ideally, research practices reflect the values of the wider research community and also embody the practical skills needed to conduct scientific research.

Practicing scientists are guided by the principles of science and the standard practices of their particular scientific discipline as well as their personal moral principles. But conflicts are inherent among these principles. For example, loyalty to one's group of colleagues can be in conflict with the need to correct or report an abuse of scientific practice on the part of a member of that group.

Because scientists and the achievements of science have earned the respect of society at large, the behavior of scientists must accord not only with the expectations of scientific colleagues, but also with those of a larger community. As science becomes more closely linked to economic and political objectives, the processes by which scientists formulate and adhere to responsible research practices will be subject to increasing public scrutiny. This is one reason for scientists and research institutions to clarify and strengthen the methods by which they foster responsible research practices.

Accordingly, the panel emphasizes the following conclusions:

  • The panel believes that the existing self-regulatory system in science is sound. But modifications are necessary to foster integrity in a changing research environment, to handle cases of misconduct in science, and to discourage questionable research practices.
  • Individual scientists have a fundamental responsibility to ensure that their results are reproducible, that their research is reported thoroughly enough so that results are reproducible, and that significant errors are corrected when they are recognized. Editors of scientific journals share these last two responsibilities.
  • Research mentors, laboratory directors, department heads, and senior faculty are responsible for defining, explaining, exemplifying, and requiring adherence to the value systems of their institutions. The neglect of sound training in a mentor's laboratory will over time compromise the integrity of the research process.
  • Administrative officials within the research institution also bear responsibility for ensuring that good scientific practices are observed in units of appropriate jurisdiction and that balanced reward systems appropriately recognize research quality, integrity, teaching, and mentorship. Adherence to scientific principles and disciplinary standards is at the root of a vital and productive research environment.
  • At present, scientific principles are passed on to trainees primarily by example and discussion, including training in customary practices. Most research institutions do not have explicit programs of instruction and discussion to foster responsible research practices, but the communication of values and traditions is critical to fostering responsible research practices and deterring misconduct in science.
  • Efforts to foster responsible research practices in areas such as data handling, communication and publication, and research training and mentorship deserve encouragement by the entire research community. Problems have also developed in these areas that require explicit attention and correction by scientists and their institutions. If not properly resolved, these problems may weaken the integrity of the research process.

1. See, for example, Kuyper (1991).

2. See, for example, the proposal by Pigman and Carmichael (1950).

3. See, for example, Holton (1988) and Ravetz (1971).

4. Several excellent books on experimental design and statistical methods are available. See, for example, Wilson (1952) and Beveridge (1957).

5. For a somewhat dated review of codes of ethics adopted by the scientific and engineering societies, see Chalk et al. (1981).

6. The discussion in this section is derived from Mark Frankel's background paper, “Professional Societies and Responsible Research Conduct,” included in Volume II of this report.

7. For a broader discussion on this point, see Zuckerman (1977).

8. For a full discussion of the roles of scientific societies in fostering responsible research practices, see the background paper prepared by Mark Frankel, “Professional Societies and Responsible Research Conduct,” in Volume II of this report.

9. Selected examples of academic research conduct policies and guidelines are included in Volume II of this report.

10. See, for example, Holton's response to the criticisms of Millikan in Chapter 12 of Thematic Origins of Scientific Thought (Holton, 1988). See also Holton (1978).

11. See, for example, responses to the Proceedings of the National Academy of Sciences action against Friedman: Hamilton (1990) and Abelson et al. (1990). See also the discussion in Bailar et al. (1990).

12. Much of the discussion in this section is derived from a background paper, “Reflections on the Current State of Data and Reagent Exchange Among Biomedical Researchers,” prepared by Robert Weinberg and included in Volume II of this report.

13. See, for example, Culliton (1990) and Bradshaw et al. (1990). For the impact of the inability to provide corroborating data or witnesses, also see Ross et al. (1989).

14. See, for example, Rennie (1989) and Cassidy and Shamoo (1989).

15. See, for example, the discussion on random data audits in Institute of Medicine (1989a), pp. 26-27.

16. For a full discussion of the practices and policies that govern authorship in the biological sciences, see Bailar et al. (1990).

17. Note that these general guidelines exclude the provision of reagents or facilities or the supervision of research as criteria for authorship.

18. A full discussion of problematic practices in authorship is included in Bailar et al. (1990). A controversial review of the responsibilities of co-authors is presented by Stewart and Feder (1987).

19. In the past, scientific papers often included a special note by a named researcher, not a co-author of the paper, who described, for example, a particular substance or procedure in a footnote or appendix. This practice seems to have been abandoned for reasons that are not well understood.

20. Martin et al. (1969), as cited in Sigma Xi (1986), p. 41.

21. Huth (1988) suggests a “notice of fraud or notice of suspected fraud” issued by the journal editor to call attention to the controversy (p. 38). Angell (1983) advocates closer coordination between institutions and editors when institutions have ascertained misconduct.

22. Such facilities include Cambridge Crystallographic Data Base, GenBank at Los Alamos National Laboratory, the American Type Culture Collection, and the Protein Data Bank at Brookhaven National Laboratory. Deposition is important for data that cannot be directly printed because of large volume.

23. For more complete discussions of peer review in the wider context, see, for example, Cole et al. (1977) and Chubin and Hackett (1990).

24. The strength of theories as sources of the formulation of scientific laws and predictive power varies among different fields of science. For example, theories derived from observations in the field of evolutionary biology lack a great deal of predictive power. The role of chance in mutation and natural selection is great, and the future directions that evolution may take are essentially impossible to predict. Theory has enormous power for clarifying understanding of how evolution has occurred and for making sense of detailed data, but its predictive power in this field is very limited. See, for example, Mayr (1982, 1988).

25. Much of the discussion on mentorship is derived from a background paper prepared for the panel by David Guston. A copy of the full paper, “Mentorship and the Research Training Experience,” is included in Volume II of this report.

26. Although the time to the doctorate is increasing, there is some evidence that the magnitude of the increase may be affected by the organization of the cohort chosen for study. In the humanities, the increased time to the doctorate is not as large if one chooses as an organizational base the year in which the baccalaureate was received by Ph.D. recipients, rather than the year in which the Ph.D. was completed; see Bowen et al. (1991).

27. Some universities have written guidelines for the supervision or mentorship of trainees as part of their institutional research policy guidelines (see, for example, the guidelines adopted by Harvard University and the University of Michigan that are included in Volume II of this report). Other groups or institutions have written “guidelines” (IOM, 1989a; NIH, 1990), “checklists” (CGS, 1990a), and statements of “areas of concern” and suggested “devices” (CGS, 1990c).

The guidelines often affirm the need for regular, personal interaction between the mentor and the trainee. They indicate that mentors may need to limit the size of their laboratories so that they are able to interact directly and frequently with all of their trainees. Although there are many ways to ensure responsible mentorship, methods that provide continuous feedback, whether through formal or informal mechanisms, are apt to be the most successful (CGS, 1990a). Departmental mentorship awards (comparable to teaching or research prizes) can recognize, encourage, and enhance the mentoring relationship. For other discussions on mentorship, see the paper by David Guston in Volume II of this report.

One group convened by the Institute of Medicine has suggested “that the university has a responsibility to ensure that the size of a research unit does not outstrip the mentor's ability to maintain adequate supervision” (IOM, 1989a, p. 85). Others have noted that although it may be desirable to limit the number of trainees assigned to a senior investigator, there is insufficient information at this time to suggest that numbers alone significantly affect the quality of research supervision (IOM, 1989a, p. 33).

Source: National Academy of Sciences (US), National Academy of Engineering (US), and Institute of Medicine (US) Panel on Scientific Responsibility and the Conduct of Research. Responsible Science: Ensuring the Integrity of the Research Process, Volume I. Washington (DC): National Academies Press (US); 1992. Chapter 2, Scientific Principles and Research Practices.


Solar Eclipse 2024: What to Know as the Eclipse Passes Over

The first total solar eclipse to pass over the U.S. since 2017 took place Monday, as the moon scooted between Earth and the sun, casting a shadow that plunged parts of North America into darkness. In a swath of the continent known as the path of totality—where the sun was totally covered—temperatures were expected to drop, and some animals were expected to go gaga as day turned to night.

“The first time I saw one was life-changing and mind-blowing,” said C. Alex Young, a National Aeronautics and Space Administration solar astrophysicist. “I feel like there’s this window that opens up that allows me to see our star in a way that normally we can’t experience with our own eyes.”

What time is the eclipse?

The eclipse began in the U.S. just before 12:30 p.m. local time in Texas, ending in northeastern Maine around 4:40 p.m. local time.

In Austin, Texas, the total eclipse started at 1:36 p.m. local time. Indianapolis was in total darkness beginning at 3:06 p.m. local time, while Buffalo, N.Y., completely lost sight of the sun at 3:18 p.m. local time.

Astronomers had suggested checking Timeanddate.com to see when the total eclipse began and ended in a specific area.

Where is the path of totality?

It changes from eclipse to eclipse, but this time, the path was a roughly 115-mile-wide band stretching from central Mexico to Newfoundland, Canada—passing through more than a dozen U.S. states in between.

Austin, Dallas, Indianapolis, Cleveland, Buffalo and Rochester, N.Y., were among the cities in the path of totality. Most of the rest of the continent saw a partial eclipse.

What happens during a total solar eclipse?

In the path of totality, the celestial spectacle starts as a partial eclipse, as the moon slowly obscures more of the sun over a period of time.

Just before totality, points of light appear around the edges of the moon. Scientists call these Baily’s beads, after one of the first eclipse chasers. They are the result of sunlight moving through valleys on the lunar surface and quickly disappear, leaving one final bright spot resembling a diamond on a ring. Once that disappears, the sun takes on the appearance of a black disc.

“It’s not like a sunset where it’s bright in one direction and dark in another direction,” said Jay Anderson, a Canadian meteorologist and eclipse chaser. “For the eclipse, you’re in the middle of a big shadow.”

The moon on Monday completely blocked the sun for as long as 4½ minutes. Then the process reversed, with the sun slowly re-emerging.

How can you watch a solar eclipse safely?

Skywatchers needed to use eclipse glasses, which consist of solar filters that block out light from the sun, during the event. The only time it’s safe to look directly at the sun is during totality, when the moon completely blocks the bright orb. Even 1% of the sun’s surface is 10,000 times brighter than a full moon and dangerous to view without the right equipment.

Gazing at the sun without protection can cause what is known as eclipse blindness, or retinal burns, when nerve tissue at the back of the eye is damaged. The retina has no pain receptors, so viewers are unaware when this damage occurs.

What causes a total solar eclipse?

The sun is 400 times the moon’s diameter, yet about 400 times farther away, resulting in the two appearing nearly the same size in the sky. So when the moon passes in just the right spot between Earth and the sun, the star is blocked from view.
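This size coincidence can be checked with rough numbers. The sketch below uses approximate mean diameters and distances (values not given in the article) and the small-angle rule that angular size is roughly diameter divided by distance.

```python
# Back-of-the-envelope check of the size coincidence described above.
# Diameters and mean distances are approximate reference values, not from the article.
import math

sun_diameter_km, sun_distance_km = 1_391_000, 149_600_000
moon_diameter_km, moon_distance_km = 3_474, 384_400

def angular_size_deg(diameter_km: float, distance_km: float) -> float:
    return math.degrees(diameter_km / distance_km)  # small-angle approximation

print(f"Sun:  {angular_size_deg(sun_diameter_km, sun_distance_km):.2f} degrees")
print(f"Moon: {angular_size_deg(moon_diameter_km, moon_distance_km):.2f} degrees")
# Both come out near half a degree, which is why the moon can just cover the sun.
```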

How often do total solar eclipses occur?

These events happen somewhere on the planet every year or two. The geometry of Earth's orbit, the moon's orbit, and their positions relative to the sun means that, on average, the same spot on Earth experiences a total solar eclipse only once every 375 years.

Some parts of the U.S. are lucky. Small areas in Missouri, Illinois and Kentucky experienced totality in 2017 and got to experience it again in 2024.

How long do total eclipses last?

There is no standard length for a total eclipse, though there is always one place where totality lasts the longest. For Monday’s eclipse, the maximum duration of totality was four minutes, 28 seconds near Torreón, Mexico. The longest total eclipses on record have exceeded seven minutes.

“The average length is about 3½ minutes,” Anderson said. “The shortest I’ve seen is 18 seconds.”

The length depends on how close the moon is to Earth, and how far the sun is from Earth on eclipse day. The farther away the sun, the smaller it appears in the sky, and the more easily it can be covered by the moon. The closer the moon, the bigger it appears, and the longer its disc will cover the sun.

The nearness of the moon also affects the shadow it projects onto the planet. The path of totality for this eclipse was wider than the one from 2017, which was roughly 70 miles wide.

How can cloud cover affect your eclipse experience?

Clouds can conceal a total eclipse but don’t completely thwart the experience. Even if the sight of the moon fully blocking the sun is obscured, the temperature drop, changing winds, impacts on animals and darkening that accompany totality are noticeable.

“There was still a visceral experience about it, there was still a sensory experience that’s different than anything you would experience in your lifetime,” according to Young, who said he has experienced a couple of total eclipses that were clouded out.

Historical data about typical cloud coverage this time of year in North America suggested Mexico and the southern U.S. were the most likely spots along the path of totality to be cloud-free. But the latest forecasts had shown Texas and many other parts of the path blanketed in clouds.

Cloud cover can change over the course of the day and during the eclipse itself, according to Patricia Reiff, a professor of physics and astronomy at Rice University in Houston.

“Some clouds begin to thin, and even those thin clouds become more transparent as totality approaches, because there’s not as much sunlight scattered inside them,” Reiff said, adding that she has experienced eclipses in which the sky opened up just for totality before the clouds closed in again.

What science can be done during a total solar eclipse?

Researchers can use these events to improve their understanding of difficult-to-study parts of the sun’s upper atmosphere, known as the corona. The superhot corona is so faint compared with the sun’s surface that it can’t be seen from Earth unless the light from the sun is totally blocked. While scientists can build instruments that mimic solar eclipses, they don’t measure up to the real thing, experts say.

The National Center for Atmospheric Research planned to use a jet to follow the path of totality and observe infrared light from the corona. The corona is a source of solar wind—a stream of charged particles spewed into the solar system that can affect our navigation and communication systems, satellites and power grids by causing space weather above our planet. By better understanding the corona, scientists hope to improve our ability to predict dangerous space weather, said Paul Bryans, a project scientist at the center.

NASA also planned to launch small rockets to study how an electrically charged part of Earth’s atmosphere known as the ionosphere changes in the sun’s absence during an eclipse. Radio and GPS signals from satellites and ground-based systems travel through the ionosphere, so changes there could affect technology.

What happens to animals during an eclipse?

Anecdotal evidence suggests that, as the moon slowly blocks out the sun, most animals tend to switch to their nighttime routines, but the study of animal behavior during these events has been limited, according to Adam Hartstone-Rose, a professor of biological sciences at North Carolina State University.

During the 2017 eclipse, Hartstone-Rose said he and his colleagues watched more than a dozen species at the Riverbanks Zoo in Columbia, S.C., during totality. The group noticed lorikeets flocked together and flew to their nighttime roost, giraffes started galloping in their enclosure, a sedentary Komodo dragon started running around its cage, and Galapagos tortoises began mating.

“Obviously, we can’t know exactly what animals are thinking,” he said, “but many species had a reaction that we think that we relate to anxiety.”

When is the next total solar eclipse?

Viewers in Spain and Iceland will experience the next totality on Aug. 12, 2026. Less than a year after that, skywatchers in parts of North Africa, Saudi Arabia and Spain will also glimpse a total solar eclipse.

The continental U.S.’s next totality experience won’t occur until 2044, when a total eclipse passes over Montana and North Dakota. A 2045 total eclipse will herald a longer path of totality, cutting from Florida to California.

According to scientists, eventually Earth will stop experiencing total eclipses. More than 50 years ago, Apollo astronauts left laser reflectors on the moon’s surface to help determine how far away it is from Earth. Observations show the moon is moving away at a rate of about 1.5 inches a year. As the moon recedes, how large it appears in the sky shrinks, so in about 600 million years the moon will be far enough away that it will appear too small to fully cover the sun, Bryans said.
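That 600-million-year figure can be sanity-checked with simple arithmetic; the current mean Earth-moon distance used below is an approximate value not given in the article.

```python
# Rough sanity check of the lunar-recession claim above.
# The mean Earth-moon distance is an approximate reference value, not from the article.
recession_cm_per_year = 3.81      # about 1.5 inches per year
years = 600_000_000
mean_distance_km = 384_400        # approximate current mean distance

recession_km = recession_cm_per_year * years / 100 / 1000
fractional_shrink = recession_km / mean_distance_km

print(f"Extra distance: about {recession_km:,.0f} km "
      f"({fractional_shrink:.0%} of today's mean distance)")
# Roughly 23,000 km, or about 6%: comparable to the moon's present
# apogee-perigee variation, enough that it could no longer fully cover the sun.
```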

How do you photograph a solar eclipse?

Believe it or not, a smartphone mounted to a tripod is sufficient to photograph totality. Don’t forget to turn off the autoflash, and consider doing a panorama shot.

To snap a close-up of the sun’s darkened disc, opt for a powerful telephoto lens.

Eclipse chasers and scientists recommend spending most of a total eclipse looking up or observing the world around you rather than taking photos.

“What I’ve sometimes done during totality is set up my camera, or my cellphone, on a tripod, turned it on video and just hit record and left it alone,” Young said. “My recommendation has always been to just put your camera down and enjoy.”

To photograph a partial eclipse before and after totality, be sure to purchase a solar filter for your lens. (Remove it during totality.) Never use a camera—or binoculars or a telescope—without a filter because these devices concentrate a lot of light into your eyes and cause injury, even if you have eclipse glasses on, according to NASA.

Other tips for seeing a total solar eclipse?

Totality plunges parts of the world into a darkness akin to a full moonlit night. Astronomers say you are able to see planets such as Venus and Jupiter.

To get a better experience, Young said to avoid city lights as you would if you were watching a meteor shower.

“If there’s a place that normally has really good stars, then that’s going to be a good place to see the eclipse,” Reiff said.



Research of wet string grid dust removal vehicle and creation of dust control area on tunnel working face

  • Huan Deng 1 ,
  • Shiqiang Chen   ORCID: orcid.org/0000-0002-8672-9411 1 ,
  • Junxin Huang 2 ,
  • Zhirong Wu 3 ,
  • Ying Rao 1 ,
  • Xinyi Qiu 1 &
  • Jiujun Cheng 1  

Scientific Reports, volume 14, Article number: 8292 (2024)


  • Civil engineering
  • Environmental sciences

The spread of blast dust throughout the tunnel is a common problem in drill and blast tunneling, and the key to solving it is the creation of a dust control area at the working face. To address this problem, a wet string grid dust removal crawler vehicle was developed. The vehicle is powered by a diesel generator, and the airflow produced by the generator's air cooler is coupled with the suction of an on-board axial flow fan to create a dust control area at the working face after blasting. The results show that when the frequency of the axial flow fan is set to 30 Hz, the airflow speed across the wet string grid section reaches 3.34 m/s and the dust removal efficiency is highest, at 94.3%. Compared with not using the dust removal vehicle, the dust concentration is reduced by 74.37%, 92.39% and 50.53% when the air cooler outlet faces forward, horizontally forward and horizontally rearward, respectively. Finally, the optimized wet string grid dust removal crawler was deployed in the Dading Tunnel, where the measured dust reduction efficiency was about 78.49%. These results provide an important technical approach to improving the working environment in drill and blast tunnel construction.

Introduction

In tunnel construction, the drilling and blasting method generates a large amount of dust, especially respirable dust at high mass concentrations, and its suppression and treatment have long been a difficult research problem for the industry. On the one hand, dust wears down construction equipment, reducing the accuracy of instruments and shortening their lifespan. On the other hand, construction workers are susceptible to pneumoconiosis from prolonged dust inhalation 1 . According to relevant statistics, pneumoconiosis accounts for more than 80 percent of occupational disease cases 2 . Therefore, it is important to provide a favorable working environment for construction workers, both to reduce their risk of illness and to improve construction efficiency.

Mechanical ventilation is widely used on construction sites to control dust at the working face, and researchers around the world have adjusted many key parameters of ventilation systems to meet this requirement 3 , 4 , 5 , 6 , 7 . However, as tunnel excavation sections grow larger, it is difficult to achieve the desired ventilation and dust removal effect. In engineering practice, dry filtration, foam adsorption, electrostatic capture, negative ion catalysis and many other dust removal technologies have been developed to reduce dust concentrations at the working face and improve the tunnel operating environment 8 , 9 , 10 , 11 , 12 , 13 . According to relevant studies, the most economical and effective means of controlling dust is spraying, and researchers have therefore focused on activated magnetized water spray, pneumatic atomization and ultrasonic atomization 15 , 16 , 17 . However, the dust source in a tunnel is disturbed by airflow turbulence, fog droplets and dust particles rarely come into contact, and the effective wetting and coagulation distance is limited, which greatly reduces the efficiency of spray dust reduction. Wetted fine wire (string grid) air cleaning technology offers high dust removal efficiency, low resistance and stable performance: it captures dust effectively through a dynamic water film formed by the wetting and coagulation of spray droplets and dust particles, and by droplet collisions on the fine wire surface, making it a promising technology for engineering application 18 .

This paper develops a wet string grid dust removal crawler vehicle for drill and blast tunnel construction based on wetted fine wire air cleaning technology. To ensure that dust at the working face is effectively extracted and purified, a dust control zone is created at the working face using the generator's air cooler, which effectively prevents unpurified dust from spreading and accumulating at the secondary lining 19 , 20 and thus maintains the tunnel construction environment. In addition, dust control, dust removal, self-drive and control devices are integrated into the vehicle without interfering with tunnel production operations, providing a technological approach to improving the operating environment in drill and blast tunneling.

Engineering background

Engineering overview

The Dading Tunnel is a single-bore, double-lane tunnel with a design length of 2609 m, a width of 12 m and an arch height of 10 m. According to the surrounding rock grade of the tunnel body and the Code for Construction Design of Railway Tunnels 21 , the short step method is selected for excavation, with a step length of 12 m. The tunnel adopts press-in ventilation; the air duct diameter is 1.5 m and the distance from the duct outlet to the working face is 20 m, so that fresh air from outside the tunnel can reach the working face smoothly. A schematic diagram of the on-site construction is shown in Fig.  1 .

figure 1

Schematic diagram of on-site construction.

The construction of the tunnel body mainly includes drilling, blasting, slag removal, support and lining, among other processes. The maximum required air volume should be determined comprehensively by considering smoke exhaust at the working face, the maximum number of workers in the tunnel, the minimum allowable airflow speed, and the dilution and discharge of internal combustion engine exhaust. Table 1 shows the required air volume for the tunnel 22 .

As can be seen from Table 1 , the maximum air demand of the tunnel is 1824.6 m³/min, and the air supply volume of the press-in ventilator outside the tunnel is calculated taking into account the air leakage of the press-in ventilation duct. This air volume is calculated by the following formula:

Here, Q_z is the air demand of the working face (m³/s), Q_j is the air supply volume of the ventilator (m³/s), and P is the air leakage rate of the press-in duct (%), which is calculated as follows:

Here, L_p is the length of the duct (m), taken as the longest duct length, and P_100 is the average air leakage rate per 100 m, taken as 1%.
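The original Eqs. (1) and (2) did not survive extraction. A plausible reconstruction from the variable definitions above, using leakage relations commonly seen in tunnel ventilation design (the authors' exact expressions may differ), is:

```latex
% Sketch only: common forms consistent with the symbols defined in the text,
% not a verbatim reproduction of the paper's Eqs. (1)-(2).
Q_j = \frac{Q_z}{1 - P}, \qquad P = \frac{L_p}{100}\, P_{100}
```

Read this way, the ventilator must supply more air than the working face requires in order to make up for leakage that grows with duct length.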

Accordingly, the 2 × 75 kW SDF(C)-NO11.5 axial flow fan is selected to supply fresh air to the tunnel, with a rated air volume of 1171–2285 m³/min and an air pressure of 727–4629 Pa, which can meet the ventilation requirements of a single tunnel heading.

On-site measurement

To visually analyze the airflow and dust migration in the tunnel, a TSI air speed measuring instrument and an FCC-30 dust measuring instrument were used at the three measurement points shown in Fig.  2 a, with one group of readings recorded every 5 m. During the measurements, blasting and smoke exhaust operations were under way in the tunnel. The airflow speed and dust distributions at the tunnel measurement points are shown in Fig.  2 b and c: Fig.  2 b shows the airflow speed distribution and Fig.  2 c the dust distribution.

figure 2

Field airflow speed and dust concentration measurements.

It can be seen from Fig.  2 b and c that, after the airflow is ejected from the press-in air duct, the turbulent diffusion and entrainment of the jet strongly mix the blast fumes at the working face with the fresh air and push them outward along the tunnel. Within 20 m of the working face, the airflow speed at measurement point B is smaller than at A and C, and there is a vortex in the middle of the tunnel, so the dust concentration at measurement point B is high. In the range of 20–60 m from the working face, the return flow crosses the upper step and moves mainly along the side of measurement point C; the airflow speed at C is larger than at A and B, the airflow speed at A is the smallest, dust is retained on the A side, and the dust concentration at every measurement point exceeds the standard value. It is therefore urgent to take the necessary dust removal measures.

Development of wet chord grid dust removal crawler vehicle

To improve the tunnel construction environment and reduce the dust concentration in the tunnel, a wet string grid dust removal crawler vehicle, shown in Fig.  3 , was developed to reduce dust at the working face. The vehicle mainly consists of a drive system (including a diesel generator, hydraulic pumps and an air cooler), an air purification box, and an extractable axial flow fan. After blasting in the tunnel, the vertical water pump and the extraction axial flow fan are started from the electric control box; under the suction of the extraction fan, the dusty, dirty air at the working face is drawn in through the air inlet and purified, and the reused air is discharged directly into the tunnel to dilute the pollutant concentration and drive the airflow toward the tunnel exit.

figure 3

Schematic diagram of the device structure.

Wet string grid dust removal mechanism

Wet string grid water film dust removal is a water film washing method based on the combination of dust and water. The string grid is the film-forming medium and the water film acts as a barrier: when dusty air flows through the water film, the airflow is washed and the dust particles are trapped in the film, achieving dust removal and purification. In the air purification box, the spray, the string grid film formation, and the water retaining and demisting stages work together to form a string grid water film dust removal and purification system, as shown in Fig.  4 .

figure 4

Principle of wet string grid dust removal.

Specifically, the droplet group ejected from the nozzle collides with the string grid surface by inertia; because of capillary action the droplets wet the string surfaces and spread between the strings, and the gaps of the longitudinal fine string grid form a dynamic water film flowing downward. When dusty air passes through the water film, the dust particles are captured by the film and carried downward with the water. The water film ruptures under the action of the airflow and is continuously regenerated by the incoming spray droplets. The water mist formed when the film on the string grid ruptures is removed by the water baffle, which further separates the dust contained in the mist and discharges clean air 23 , 24 , 25 .

Optimization of the operating parameters of the wet chord grid dust removal crawler vehicle

The formation and breakup of the water film on and between the strings of the wet string grid are mainly affected by the water supply pressure of the nozzle and the filtration airflow speed across the string grid section. Because the water supply pressure of the vehicle-mounted pump is limited and the water source in the tunnel contains many impurities, a nozzle with a small aperture is easily blocked and atomizes poorly, while a nozzle with a large aperture produces droplets that are too large to form a water film between the strings. A low-pressure atomizing nozzle with an aperture of 2.0 mm was therefore selected, and the dust removal effect was significant at a water supply pressure of 0.6 MPa 18 . At different fan frequencies, the air volume drawn in at the dust collection port differs, which changes the filtration air speed across the wet string grid section and therefore the dust removal efficiency. The airflow speed across the string grid section at 20, 30, 40 and 50 Hz was measured with the TSI air speed instrument as 2.85, 3.34, 3.83 and 4.34 m/s, respectively. Figure  5 shows the self-built wet string grid filtration and dust removal experimental platform, which includes an AG410 dry powder aerosol diffuser, a low-pressure atomizing nozzle, the string grid, a W-shaped water baffle, an extraction axial flow fan and a dust sampler.

figure 5

Experimental platform for wet string grid filtration and dust removal.

As shown in Fig.  6 , the dust collected in Fig.  2 was analyzed with an LS 13 320 particle size analyzer; the dust particles are mainly distributed between 2 and 25 μm, with 77.85% of the dust below 10 μm and an average particle size of 8 μm. Dust with a particle size of d < 25 μm was therefore screened for the experiment. The AG410 dry powder aerosol diffuser emits dust particles into the pipeline, and two dust sampling points are arranged at the front and rear ends of the wet string grid. Once the experimental platform is operating normally, the sampler collects dust before and after the string grid, and the dust removal efficiency is calculated by weighing the filter papers after sampling and drying.
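The text describes the efficiency as being obtained gravimetrically from the weighed filter papers. One natural way to express this is given below; the symbols m_up and m_down (the dust masses collected on the upstream and downstream filters for equal sampled air volumes) are our notation, not the paper's:

```latex
% Gravimetric dust removal efficiency; m_up and m_down are assumed symbols.
\eta_{\mathrm{grid}} = \frac{m_{\mathrm{up}} - m_{\mathrm{down}}}{m_{\mathrm{up}}} \times 100\%
```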

figure 6

Dust particle size analysis.

Table 2 shows the dust mass collected at the measurement points and the corresponding dust removal efficiency at different fan frequencies.

As can be seen from Table 2 , when the fan frequency is 20, 30, 40 or 50 Hz, the dust removal efficiency of the wet string grid is 88.2, 94.3, 90.3 and 87.5%, respectively. At 30 Hz the efficiency reaches its maximum of 94.3%; when the fan frequency is increased to 40 or 50 Hz, the airflow speed across the string grid section increases, the water film between the strings is harder to maintain, and the efficiency decreases. These experimental results provide a basis for optimizing the operating parameters of the device.
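As a quick illustration of this parameter sweep, the optimum operating point can be picked out programmatically. This is a minimal sketch, not the authors' code; the numbers are simply the measured values quoted above and in Table 2:

```python
# Minimal sketch: choose the fan frequency that maximizes the measured
# dust removal efficiency of the wet string grid section.
# (Values are the measurements reported in the paper, not new data.)

measurements = [
    # (fan frequency [Hz], face airflow speed [m/s], dust removal efficiency [%])
    (20, 2.85, 88.2),
    (30, 3.34, 94.3),
    (40, 3.83, 90.3),
    (50, 4.34, 87.5),
]

# Pick the row with the highest efficiency (third element of each tuple).
best_freq, best_speed, best_eff = max(measurements, key=lambda m: m[2])
print(f"Optimal fan frequency: {best_freq} Hz "
      f"({best_speed} m/s across the grid, {best_eff}% efficiency)")
# Expected output: Optimal fan frequency: 30 Hz (3.34 m/s across the grid, 94.3% efficiency)
```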

Creation of dust control area on the working face

Because the cross-sectional area of the tunnel is large, a great deal of dust still diffuses if dust removal measures are taken only at the working face. It is therefore also necessary to create a dust control zone at the working face. The air cooler outlet of the dust removal vehicle's diesel generator is a 1 m × 1 m square, and its airflow speed is constant at 8 m/s.
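For orientation, the volumetric flow delivered by the air cooler outlet follows directly from the stated outlet size and speed (this simple check is ours, not a figure given in the paper):

```latex
% Back-of-the-envelope: outlet area times outlet speed.
Q_{\mathrm{cooler}} = A\,v = (1\,\mathrm{m}\times 1\,\mathrm{m})\times 8\,\mathrm{m/s}
                    = 8\,\mathrm{m^{3}/s} = 480\,\mathrm{m^{3}/min}
```

That is roughly a quarter of the tunnel's maximum air demand of 1824.6 m³/min, which is consistent with using the cooler outlet to shape a local dust control zone rather than to ventilate the tunnel.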

Dust control area construction mechanism

Vortex blowing and suction agglomeration dust control technology

Figure  7 shows the dust control principle when the air cooler faces forward. In Fig.  7 , the fresh air delivered by the press-in air duct, the outlet air of the device's air cooler, and the suction flow of the extractable axial flow fan follow the vortex blowing principle, forming a strong vortex between the working face and the device; the airflow finally decays and is drawn in by the extraction duct, constituting a vortex blowing and suction agglomeration dust control technique 26 .

figure 7

Vortex blowing and suction accumulation dust control.

Jet air curtain dust control technology

Figure  8 shows the dust control mechanism when the air cooler is positioned horizontally. In Fig.  8 , the wet string grid dust removal crawler is parked close to the tunnel side wall, the device is switched on and the press-in ventilator is switched off. The directed air cooler outlet forms an air curtain layer at a certain flow rate, entraining the surrounding air to form a dust control area 27 , and the dirty air in this area is finally drawn in by the extraction duct.

figure 8

Jet air curtain dust control.

Establish the model and set the parameters

Physical models

According to the actual working conditions, the physical model shown in Fig.  9 was established at a scale of 1:1. The model includes the tunnel, a primary support trolley, the wet string grid dust removal crawler (8 m long, 2 m wide and 2.2 m high), an inverted arch trestle and the press-in air duct. Model (a) is the tunnel with press-in ventilation only; in model (b) the device is installed with the air cooler facing forward; in models (c) and (d) the device is located on the opposite side of the press-in air duct, with the air cooler placed horizontally to the front and to the rear, respectively.

figure 9

Physical model.

In Fig.  9 , the outlet end of the press-in air duct is set as inlet1, the dust collection port as inlet2, the air cooler outlet as inlet3, and the reused air outlet as inlet4. To simplify the calculation, it is assumed in the simulation that the airflow purified by the wet string grid dust removal crawler vehicle is clean and dust-free.

Mathematical models

The airflow speed in the tunnel is not high and the pressure change is small, so the compressibility of the air can be ignored. The airflow in the tunnel is therefore treated as three-dimensional, incompressible, steady viscous turbulence, modeled with a high-Reynolds-number k-ε model. The mathematical model comprises the continuity equation, the momentum equations, and the k-ε model equations 28 , 29 .

Incompressible Continuity Equation:

Incompressible Momentum Equation:

Realizable k-ε turbulence model:

k equation:

ε equation:

In Eqs. ( 3 )–( 7 ), ρ is the fluid density (kg/m³); U_i and U_j are the velocity components of the fluid (m/s); p is the pressure on the fluid element (Pa); μ is the dynamic viscosity (Pa·s); μ_t is the turbulent viscosity (Pa·s); k is the turbulence kinetic energy (m²/s²); ε is the dissipation rate (m²/s³); and σ_k and σ_ε are the Prandtl numbers corresponding to the k and ε equations, respectively. According to the relevant experiments, the model constants are C_μ = 0.09, C_ε1 = 1.44, C_ε2 = 1.92, σ_k = 1.0 and σ_ε = 1.3.
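The equation images for Eqs. (3)–(7) did not survive extraction. The following is one standard high-Reynolds-number k-ε formulation consistent with the symbols and constants listed above; the paper names its model "realizable", whose ε equation differs in detail, so treat this as a sketch rather than the authors' exact equations:

```latex
% Sketch of a standard incompressible RANS k-epsilon formulation
% (not necessarily the paper's exact Eqs. (3)-(7)).

% Continuity:
\frac{\partial u_i}{\partial x_i} = 0

% Momentum:
\rho\left(\frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j}\right)
  = -\frac{\partial p}{\partial x_i}
  + \frac{\partial}{\partial x_j}\left[(\mu + \mu_t)
    \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\right]

% Turbulence kinetic energy k:
\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_j)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)
    \frac{\partial k}{\partial x_j}\right] + G_k - \rho\varepsilon

% Dissipation rate epsilon:
\frac{\partial(\rho\varepsilon)}{\partial t} + \frac{\partial(\rho\varepsilon u_j)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)
    \frac{\partial\varepsilon}{\partial x_j}\right]
  + C_{\varepsilon 1}\frac{\varepsilon}{k}G_k
  - C_{\varepsilon 2}\,\rho\,\frac{\varepsilon^{2}}{k}

% Closure: turbulent viscosity and production of k
\mu_t = \rho\, C_\mu \frac{k^{2}}{\varepsilon}, \qquad
G_k = \mu_t \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)
      \frac{\partial u_i}{\partial x_j}
```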

Parameter Settings

The frequency modulation range of the axial flow fan on the wet string grid dust removal crawler vehicle is 0–50 Hz, and the fan frequency is set to 30 Hz in the numerical simulation. To ensure the accuracy of the simulated values, the TSI anemometer was used to measure the four velocity inlets inlet1, inlet2, inlet3 and inlet4 shown in Fig.  9 ; the measurement results are given in Table 3 .

The particle size range of the dust source is taken from Fig.  6 as 2–16 μm, the mass flow rate and other parameters are taken from reference 30 , and the injection source parameters are set as shown in Table 4 .

Airflow and dust migration law

Airflow movement law

Figure  10 shows the airflow speed distribution at a height of 1.6 m above the tunnel floor (human breathing height) for the four model tunnels. In Fig.  10 a, the tunnel uses press-in ventilation only; the air duct directs the airflow along the side wall at measurement point A toward the working face, and the jet flows back and spreads along the side wall at measurement point C over the full section. Therefore, within 0–20 m of the working face the airflow speed at measurement point A is greater than at B and C, while in the range of 20–100 m the airflow speed at C is greater than at A and B, with the lowest speed at B in the middle.

figure 10

Airflow velocity distribution at human respiration height.

In Fig.  10 b, the device is installed in the middle of the upper step with the air cooler facing forward; the air duct and the air cooler each direct airflow toward the working face, and the device's extraction duct sucks air into the device, making the airflow within 0–60 m of the working face disordered. In Fig.  10 c, the device is installed on the opposite side of the press-in air duct with the air cooler placed horizontally to the front, and the press-in ventilator is switched off; the air cooler ventilates transversely, the airflow impinges on the side wall and gradually decays, the reused air purified by the device is finally discharged along the side wall carrying the air duct, and the airflow speed at measurement point A is the largest. In Fig.  10 d, the device is installed on the opposite side of the press-in air duct with the air cooler placed horizontally to the rear, all other parameters unchanged; the airflow speed at measurement point A is still the largest, and compared with Fig.  10 c the airflow speed at measurement point B increases significantly, indicating that changing the installation position of the air cooler also affects the flow field of the whole tunnel.

Dust distribution law

Figure  11 shows the dust concentration distribution over tunnel sections without the device and under the three air cooler layout schemes. By examining the dust concentration distribution at X = 6 m (the tunnel axial plane) and at Z = 10 m, Z = 30 m, Z = 50 m, Z = 70 m and Z = 90 m, we can preliminarily judge whether the dust is confined to the working face and thereby optimize the air cooler layout scheme.

figure 11

Cloud map of dust concentration distribution in part of the tunnel section.

As can be seen from Fig.  11 a, without the device the blasting dust diffuses from the working face and pollutes the entire tunnel. In Fig.  11 b, with the air cooler facing forward, the dust is mainly distributed within 0–30 m of the working face, that is, between the working face and the inverted arch trestle. In Fig.  11 c, with the air cooler placed horizontally to the front, the dust is mainly distributed within 0–10 m of the working face, that is, between the device's dust collection port and the working face. In Fig.  11 d, with the air cooler placed horizontally to the rear, the dust fills almost the entire tunnel; this may be because the air cooler outlet and the reused air outlet are too close together, so the dust in the dust control area is entrained by the air cooler outlet flow and the reused airflow and then diffuses through the whole tunnel. As shown in Fig.  12 , once the device is adopted, the dust concentration at each measurement point at human breathing height is effectively controlled under the different schemes.

figure 12

Comparison of dust concentrations at each measurement point under different schemes.

After the device is adopted, the dust reduction efficiency of different schemes is calculated by Eq. ( 8 ):

where η is the average dust reduction efficiency in the tunnel after the device is adopted (%); C_0 is the average dust concentration in the tunnel without the device (mg/m³); and C_1 is the average dust concentration in the tunnel after installation of the device (mg/m³).
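Equation (8) itself did not survive extraction, but from the definitions just given it is evidently the relative reduction in the average dust concentration:

```latex
% Reconstruction of Eq. (8) from the stated definitions of eta, C_0 and C_1.
\eta = \frac{C_0 - C_1}{C_0} \times 100\%
```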

The results show that, compared with not using the device, the dust reduction efficiency is 74.37% when the air cooler outlet faces forward, 92.39% when it is placed horizontally to the front, and 50.53% when it is placed horizontally to the rear. The horizontal-front placement of the air cooler is therefore preferred, as it creates the best dust control area at the working face.

On-site application

The wet string grid dust removal crawler was used in the Dading Tunnel of the Zhuhai-Zhaoqing High-speed Railway in Foshan, Guangdong, as shown in Fig.  13 . When the working face is blasted, the press-in fan is not turned on and the wet string grid dust removal crawler is started. The air cooler directs the airflow transversely to form a dust control area bounded by the working face and the tunnel side wall, blocking full-section diffusion of dust at the upper steps. The extractable axial fan draws the dirty air in the dust control area into the air purification box, and the purified reused air is discharged directly into the tunnel through the air outlet.

figure 13

Field application of wet string grid dust removal crawler vehicle.

As shown in Fig.  14 , after the wet string grid dust removal crawler had been installed and operated for a period of time, sampling with the FCC-30 dust sampler showed that a good dust removal effect had been obtained. Before installation, the dust concentration in the tunnel was 45–500 mg/m³, far above the national standard; after installation, the dust gathers mainly around the device's dust collection port, and the dust concentration is significantly reduced within 20–100 m of the working face. According to Eq. ( 8 ), the dust reduction efficiency after installation of the device is 78.49%, the dust reduction effect is remarkable, and the purified airflow basically meets the air quality requirements for tunnel inlet air.

figure 14

Comparison of dust concentration in the tunnel before and after the installation of the wet string grid dust removal crawler vehicle.

Conclusions

In this paper, with the main goal of reducing the dust concentration in the tunnel and optimizing the construction environment, the operating parameters governing dust removal efficiency were optimized on an experimental platform, the layout of the air cooler outlet was compared and selected through numerical simulation, and a dust control area was created at the working face. On this basis, a wet string grid dust removal crawler was designed, developed and applied in the field. Specifically, the following conclusions can be drawn:

To prevent dust at the tunnel working face from diffusing and deteriorating the construction environment, a wet string grid dust removal crawler vehicle was developed; its dust reduction efficiency in field application was 78.49%, providing an important technical means of improving the tunnel construction environment.

When the frequency of the extractable axial fan is 30 Hz, the airflow speed across the wet string grid section is 3.34 m/s and the dust removal efficiency reaches its maximum of 94.3%.

The air cooler outlet layout schemes of the device were compared and selected through the numerical model; compared with not using the device, the dust reduction efficiency was 74.37%, 92.39% and 50.53% for the forward, horizontal-front and horizontal-rear layouts, respectively.

Data availability

Data will be made available on request.

Abbreviations

Q_z : Air demand of the working face (m³/s)
Q_j : Air supply volume of the ventilator (m³/s)
P : Air leakage rate of the press-in duct (%)
L_p : Length of the duct, taken as the longest duct length (m)
P_100 : Average air leakage rate per 100 m (%)
ρ : Fluid density (kg/m³)
U_i , U_j : Velocity components of the fluid (m/s)
p : Pressure on the fluid element (Pa)
μ : Dynamic viscosity (Pa·s)
k : Turbulence kinetic energy (m²/s²)
ε : Dissipation rate (m²/s³)
σ_k , σ_ε : Prandtl numbers corresponding to the k and ε equations
η : Average dust reduction efficiency in the tunnel after the device is adopted (%)
C_0 : Average dust concentration in the tunnel when the device is not used (mg/m³)
C_1 : Average dust concentration in the tunnel after installation of the device (mg/m³)

Chen, Y. et al. Epidemiology of coal miners’ pneumoconiosis and its social determinants: An ecological study from 1949 to 2021 in China. Chin. Med. J. Pulm. Crit. Care Med. 1 (01), 46–55 (2023).


Dhar, D. & Nite, D. K. Occupational disease, its recognition and classification: The story of an Indian Coalfield, 1946–1971. Extract. Ind. Soc. 12 , 101148 (2022).

Yu, H. et al. Mechanisms of dust diffuse pollution under forced-exhaust ventilation in fully-mechanized excavation faces by CFD-DEM. Powder Technol. 317 , 31–47 (2017).


Feng, G. et al. Study on CO migration law of spiral tunnel in plateau area. J. Chongqing Jiaotong Univ. (Nat. Sci. Edition) 39 (06), 66–72+80 (2020).


Wang, Q. G. et al. Study and application on foam-water mist integrated dust control technology in fully mechanized excavation face. Process Saf. Environ. Prot. 133 , 41–50 (2020).

Dunwen, L. et al. Numerical simulation and experimental study on optimization of ventilation ducts in gas tunnel construction. J. China Highw. Eng. 28 (11), 98–103 (2015).

Xianzhou, H. et al. Research on mixed construction ventilation technology for the Lancang River extra-long tunnel. J. Undergr. Space Eng. 16 , 353–359 (2020).

Zhongqiang, S. Research on Dust Migration Law and Control Technology of Highway Tunnel Drilling and Blasting Construction (University of Science and Technology Beijing, 2015).

Pomerol, J. Scenario development and practical decision making under uncertainty. Decis. Support Syst. 31 (2), 197–204 (2001).

Yubao, Z. et al. Mechanism and application of dry dust removal in long tunnel construction. Mod. Tunnel Technol. 51 (03), 200–205 (2014).

Yong, X. Research on jet foam dust removal technology for urban highway tunnel. Traffic Energy Conserv. Environ. Prot. 17 (02), 133–136 (2021).

Shiqiang, Xu. et al. Main factors affecting the negative ion dust removal efficiency of construction tunnels. J. Southwest Jiaotong Univ. 55 (01), 210–217 (2020).

Honghai, Y. et al. Experimental study of Electrostatic precipitator in highway tunnels. Mod. Tunnel Technol. 56 (01), 164–168 (2019).

Ning, L. et al. Research on a new three-section spraying system and dust reduction performance. Tunnel Constr. (Chin. Engl.) 42 (04), 630–639 (2022).

Gaogao, W. et al. Effect of gas supply pressure on atomization characteristics and dust reduction efficiency of fluid ultrasonic nozzle. J. China Coal Soc. 46 (06), 1898–1906 (2021).

Botao, Q. et al. Technology system and application of activated magnetized water spray dust reduction in fully mechanized coal mining face. J. China Coal Soc. 46 (12), 3891–3901 (2021).

Tian, Z. et al. Supersonic water siphon pneumatic atomization dust reduction technology. J. China Coal Soc. 46 (12), 3912–3921 (2021).

Wu, Z. R. et al. Experimental investigation and application of mine airflow purification and reuse technology. Environ. Technol. Innov. 24 , 102035 (2021).

Shulei, Z. et al. Study on the transport law of pollutants under the extreme ventilation distance of a single end of the Mirashan Tunnel. Tunnel Constr. (Chin. Engl.) 41 (S2), 367–374 (2021).

Gefu, J. & Liwei, Q. Monitoring and research on characteristics of lining trolley’s resistance to free SiO 2 dust diffusion. Environ. Eng. 36 (04), 170–175 (2018).

Technical Specifications for Highway Tunnel Construction. JTG F60-2009. 2009-08-25.

Wang, H. et al. Parameter analysis of jet tunnel ventilation for long distance construction tunnels at high altitude. J. Wind Eng. Ind. Aerodyn. 228 , 105128 (2022).

Zhao, J. Research and application of air well exhaust rheumatic resonance grid dust removal technology. Hunan University of Science and Technology (2015).

Zhirong, W. Research and application of mine wet resonant chord grid filter dust collector. Hunan University of Science and Technology (2019).

Wang Haiqiao, W. et al. Experimental study and efficiency optimization of water film dust removal of wet string grid. J. China Coal Soc. 48 (01), 279–289 (2023).

Deji, J. et al. Numerical simulation and experiment of vortex blowing and suction dust removal technology during coal falling process. Chin. J. Saf. Sci. 31 (06), 121–127 (2021).

Haiqiao, W., Ronghua, L. & Shiqiang, C. Experimental simulation study on the flow field characteristics of confined attached jets in a single head tunnel. China Eng. Sci. 8 , 45–49 (2004).

Han, Z. Fluent Fluid Engineering Simulation Calculation Examples and Analysis (Beijing Institute of Technology Press, 2009).

Fujun, W. Computational Fluid Dynamics (Tsinghua University Press, 2004).

Zhengmao, C. et al. Study on dust transport characteristics during high altitude highway tunnel construction. J. Undergr. Space Eng. 15 (03), 927–935 (2019).


Acknowledgements

This study was supported by the National Natural Science Foundation of China (No.42202321), by the National Natural Science Foundation of Hunan (No. 2023jj50106), by the Key Science and Technology Projects in Transportation Industry Funded by Ministry of Transport, China (2021-MS5-126), by the list of Key Science and Technology Projects in Guangxi’s Transportation Industry (No.18).

Author information

Authors and affiliations

School of Civil Engineering, Hunan University of Science and Technology, Xiangtan, 411201, Hunan, China

Huan Deng, Shiqiang Chen, Ying Rao, Xinyi Qiu & Jiujun Cheng

School of Chemical and Environmental Engineering, Hunan Institute of Technology, Hengyang, 421000, China

Junxin Huang

School of Energy and Built Environment, Guilin Institute of Aerospace Technology, Guilin, 541000, Guangxi, China


Contributions

All authors contributed to the writing of the paper and provided critical input that helped shape the research, analysis, and paper. H.D.: conceptualization, formal analysis, writing of the original draft, study conception and design, analysis and interpretation of results, draft manuscript preparation. S.C.: designed the research, supervised the work, and carried out the research. J.H.: carried out the research. Z.W.: carried out the research. Y.R.: carried out the research. X.Q.: carried out the research. J.C.: carried out the research.

Corresponding author

Correspondence to Shiqiang Chen .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Deng, H., Chen, S., Huang, J. et al. Research of wet string grid dust removal vehicle and creation of dust control area on tunnel working face. Sci Rep 14 , 8292 (2024). https://doi.org/10.1038/s41598-024-57748-x


Received : 21 January 2024

Accepted : 21 March 2024

Published : 09 April 2024


Keywords:
  • Drill and blast method tunnel
  • Wet grid dust removal crawler vehicle
  • Air cooler outlet
  • Dust control area
  • Dust removal




April 8, 2024


Fluorous lipopeptides act as highly effective antibiotics for multidrug-resistant pathogens

Fluorous lipopeptides act as highly effective antibiotics

Multidrug-resistant bacterial infections that cannot be treated by any known antibiotics pose a serious global threat. Publishing in the journal Angewandte Chemie International Edition , a Chinese research team has now introduced a method for the development of novel antibiotics to fight resistant pathogens. The drugs are based on protein building blocks with fluorous lipid chains.

Antibiotics are often prescribed far too readily. In many countries they are distributed without prescriptions and administered in factory farming: prophylactically to prevent infections and enhance performance. As a result, resistance is on the rise—increasingly against reserve antibiotics as well. The development of innovative alternatives is essential.

It is possible to learn some lessons from the microbes themselves. Lipoproteins, small protein molecules with fatty acid chains , are widely used by bacteria in their battles against microbial competitors. A number of lipoproteins have already been approved for use as drugs.

The common factors among the active lipopeptides include a positive charge and an amphiphilic structure, meaning they combine water-attracting (hydrophilic) and fat-attracting (lipophilic) segments. This allows them to bind to bacterial membranes and pierce through them to the interior.

The team led by Yiyun Cheng at East China Normal University in Shanghai aims to amplify this effect by replacing hydrogen atoms in the lipid chain with fluorine atoms. These make the lipid chain simultaneously water-repellent (hydrophobic) and fat-repellent (lipophobic). Their particularly low surface energy strengthens binding to cell membranes, while their lipophobicity disrupts the cohesion of the membrane.

The team synthesized a spectrum (substance library) of fluorous lipopeptides from fluorinated hydrocarbons and peptide chains. To link the two pieces, they used the amino acid cysteine, which binds them together via a disulfide bridge.

The researchers screened the molecules by testing their activity against methicillin-resistant Staphylococcus aureus (MRSA), a widespread, highly dangerous strain of bacteria that is resistant to nearly all antibiotics. The most effective compound they found was "R6F," a fluorous lipopeptide made of six arginine units and a lipid chain made of eight carbon and 13 fluorine atoms. To increase biocompatibility, the R6F was enclosed within phospholipid nanoparticles.

In mouse models, R6F nanoparticles were shown to be very effective against sepsis and chronic wound infections by MRSA. No toxic side effects were observed.

The nanoparticles seem to attack the bacteria in several ways: they inhibit the synthesis of important cell-wall components, promoting collapse of the walls; they pierce and destabilize the cell membrane; they disrupt the respiratory chain and metabolism; and they increase oxidative stress while simultaneously disrupting the bacteria's antioxidant defense system.

In combination, these effects kill the bacteria—other bacteria as well as MRSA. No resistance appears to develop.

These insights provide starting points for the development of highly efficient fluorous peptide drugs to treat multi-drug resistant bacteria.

Journal information: Angewandte Chemie International Edition

Provided by Wiley


