

What is Research? – Purpose of Research


  • By DiscoverPhDs
  • September 10, 2020


The purpose of research is to enhance society by advancing knowledge through the development of scientific theories, concepts and ideas. A research purpose is met through forming hypotheses, collecting data, analysing results, forming conclusions, implementing findings into real-life applications and forming new research questions.

What is Research?

Simply put, research is the process of discovering new knowledge. This knowledge can be either the development of new concepts or the advancement of existing knowledge and theories, leading to a new understanding that was not previously known.

As a more formal definition of research, the following has been extracted from the Code of Federal Regulations: “Research means a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge” (45 CFR 46.102).

While research can be carried out by anyone and in any field, it is most often done to broaden knowledge in the physical, biological, and social worlds. This can range from learning why certain materials behave the way they do, to asking why certain people are more resilient than others when faced with the same challenges.

The use of ‘systematic investigation’ in the formal definition represents how research is normally conducted – a hypothesis is formed, appropriate research methods are designed, data is collected and analysed, and research results are summarised into one or more ‘research conclusions’. These research conclusions are then shared with the rest of the scientific community to add to the existing knowledge and serve as evidence to form additional questions that can be investigated. It is this cyclical process that enables scientific research to make continuous progress over the years; the true purpose of research.

What is the Purpose of Research?

From weather forecasts to the discovery of antibiotics, researchers are constantly trying to find new ways to understand the world and how things work – with the ultimate goal of improving our lives.

The purpose of research is therefore to find out what is known, what is not and what we can develop further. In this way, scientists can develop new theories, ideas and products that shape our society and our everyday lives.

Although research can take many forms, there are three main purposes of research:

  • Exploratory: Exploratory research is the first research to be conducted around a problem that has not yet been clearly defined. Exploratory research therefore aims to gain a better understanding of the exact nature of the problem and not to provide a conclusive answer to the problem itself. This enables us to conduct more in-depth research later on.
  • Descriptive: Descriptive research expands knowledge of a research problem or phenomenon by describing it according to its characteristics and population. Descriptive research focuses on the ‘how’ and ‘what’, but not on the ‘why’.
  • Explanatory: Explanatory research, also referred to as causal research, is conducted to determine how variables interact, i.e. to identify cause-and-effect relationships. Explanatory research deals with the ‘why’ of research questions and is therefore often based on experiments.

Characteristics of Research

There are eight core characteristics that all research projects should have. These are:

  • Empirical  – based on proven scientific methods derived from real-life observations and experiments.
  • Logical  – follows sequential procedures based on valid principles.
  • Cyclic  – research begins with a question and ends with a question, i.e. research should lead to a new line of questioning.
  • Controlled  – rigorous measures put into place to keep all variables constant, except those under investigation.
  • Hypothesis-based  – the research design generates data that sufficiently meets the research objectives and can prove or disprove the hypothesis. It makes the research study repeatable and gives credibility to the results.
  • Analytical  – data is generated, recorded and analysed using proven techniques to ensure high accuracy and repeatability while minimising potential errors and anomalies.
  • Objective  – sound judgement is used by the researcher to ensure that the research findings are valid.
  • Statistical treatment  – statistical treatment is used to transform the available data into something more meaningful from which knowledge can be gained (a brief sketch follows this list).
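To make the last characteristic concrete, here is a minimal sketch in Python; it is not from the original article, and the measurement values are invented purely for illustration. It shows one common form of statistical treatment: condensing raw observations into a mean and a confidence interval.

```python
# A hypothetical example of basic statistical treatment: summarising raw
# measurements into a mean and a 95% confidence interval.
import numpy as np
from scipy import stats

measurements = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2])  # invented raw data

mean = measurements.mean()
sem = stats.sem(measurements)  # standard error of the mean

# 95% confidence interval for the mean, using the t-distribution
ci_low, ci_high = stats.t.interval(0.95, df=len(measurements) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```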


Types of Research

Research can be divided into two main types: basic research (also known as pure research) and applied research.

Basic Research

Basic research, also known as pure research, is an original investigation into the reasons behind a process, phenomenon or particular event. It focuses on generating knowledge around existing basic principles.

Basic research is generally considered ‘non-commercial research’ because it does not focus on solving practical problems, and has no immediate benefit or ways it can be applied.

While basic research may not have direct applications, it usually provides new insights that can later be used in applied research.

Applied Research

Applied research investigates well-known theories and principles in order to enhance knowledge around a practical aim. Because of this, applied research focuses on solving real-life problems by deriving knowledge which has an immediate application.

Methods of Research

Research methods for data collection fall into one of two categories: inductive methods or deductive methods.

Inductive research methods focus on the analysis of an observation and are usually associated with qualitative research. Deductive research methods focus on the verification of an observation and are typically associated with quantitative research.


Qualitative Research

Qualitative research is a method that enables non-numerical data collection through open-ended methods such as interviews, case studies and focus groups.

It enables researchers to collect data on personal experiences, feelings or behaviours, as well as the reasons behind them. Because of this, qualitative research is often used in fields such as social science, psychology and philosophy, as well as other areas where it is useful to know the connection between what has occurred and why it has occurred.

Quantitative Research

Quantitative research is a method that collects and analyses numerical data through statistical analysis.

It allows us to quantify variables, uncover relationships, and make generalisations across a larger population. As a result, quantitative research is often used in the natural and physical sciences such as engineering, biology, chemistry, physics, computer science, finance, and medical research.
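As a concrete illustration (a sketch of our own rather than an example from the article, with invented variables and numbers), quantitative analysis often amounts to quantifying two variables and measuring the strength of the relationship between them:

```python
# A hypothetical quantitative analysis: quantify two variables and test
# whether they are related, using a Pearson correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
study_hours = rng.uniform(0, 10, size=100)                        # invented predictor
exam_score = 55 + 3.5 * study_hours + rng.normal(0, 5, size=100)  # invented outcome

r, p = stats.pearsonr(study_hours, exam_score)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # strength and significance of the relationship
```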

What does Research Involve?

Research often follows a systematic approach known as the scientific method, commonly visualised as an hourglass model: the study begins broad with a research question, narrows to focused data collection and analysis, and then broadens out again as the findings are interpreted and generalised.

A research project first starts with a problem statement, or rather, the research purpose for engaging in the study. This can take the form of the ‘scope of the study’ or ‘aims and objectives’ of your research topic.

Subsequently, a literature review is carried out and a hypothesis is formed. The researcher then creates a research methodology and collects the data.

The data is then analysed using various statistical methods and the null hypothesis is either rejected or not rejected.
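For example, the decision step can be made concrete with a two-sample t-test. The sketch below is illustrative only; the groups and numbers are invented, and the choice of test is an assumption, since real studies pick the method that fits their design:

```python
# A hypothetical hypothesis test: compare two groups and decide whether to
# reject the null hypothesis that their means are equal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
control = rng.normal(loc=50, scale=10, size=30)    # invented control-group scores
treatment = rng.normal(loc=56, scale=10, size=30)  # invented treatment-group scores

t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05  # conventional significance level
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```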

In both cases, the study and its conclusion are officially written up as a report or research paper, and the researcher may also recommend lines of further questioning. The report or research paper is then shared with the wider research community, and the cycle begins all over again.

Although these steps outline the overall research process, keep in mind that research projects are highly dynamic and are therefore considered an iterative process with continued refinements and not a series of fixed stages.



What is Academic Research?

After completing this module you will be able to:

  • recognize why information exists, who creates it, and how information of all kinds can be valuable, even when it’s biased.
  • understand what scholarly research is, how to find it, how the process of peer-review works, and how it gets published.
  • identify types of databases and understand why databases are critical for academic research.


What Is Research, and Why Do People Do It?

  • Open Access
  • First Online: 03 December 2022


  • James Hiebert,
  • Jinfa Cai,
  • Stephen Hwang,
  • Anne K. Morris &
  • Charles Hohensee

Part of the book series: Research in Mathematics Education (RME)


Abstract

Every day people do research as they gather information to learn about something of interest. In the scientific world, however, research means something different than simply gathering information. Scientific research is characterized by its careful planning and observing, by its relentless efforts to understand and explain, and by its commitment to learn from everyone else seriously engaged in research. We call this kind of research scientific inquiry and define it as “formulating, testing, and revising hypotheses.” By “hypotheses” we do not mean the hypotheses you encounter in statistics courses. We mean predictions about what you expect to find and rationales for why you made these predictions. Throughout this and the remaining chapters we make clear that the process of scientific inquiry applies to all kinds of research studies and data, both qualitative and quantitative.


Part I. What Is Research?

Have you ever studied something carefully because you wanted to know more about it? Maybe you wanted to know more about your grandmother’s life when she was younger so you asked her to tell you stories from her childhood, or maybe you wanted to know more about a fertilizer you were about to use in your garden so you read the ingredients on the package and looked them up online. According to the dictionary definition, you were doing research.

Recall your high school assignments asking you to “research” a topic. The assignment likely included consulting a variety of sources that discussed the topic, perhaps including some “original” sources. Often, the teacher referred to your product as a “research paper.”

Were you conducting research when you interviewed your grandmother or wrote high school papers reviewing a particular topic? Our view is that you were engaged in part of the research process, but only a small part. In this book, we reserve the word “research” for what it means in the scientific world, that is, for scientific research or, more pointedly, for scientific inquiry.

Exercise 1.1

Before you read any further, write a definition of what you think scientific inquiry is. Keep it short—two to three sentences. You will periodically update this definition as you read this chapter and the remainder of the book.

This book is about scientific inquiry—what it is and how to do it. For starters, scientific inquiry is a process, a particular way of finding out about something that involves a number of phases. Each phase of the process constitutes one aspect of scientific inquiry. You are doing scientific inquiry as you engage in each phase, but you have not done scientific inquiry until you complete the full process. Each phase is necessary but not sufficient.

In this chapter, we set the stage by defining scientific inquiry—describing what it is and what it is not—and by discussing what it is good for and why people do it. The remaining chapters build directly on the ideas presented in this chapter.

A first thing to know is that scientific inquiry is not all or nothing. “Scientificness” is a continuum. Inquiries can be more scientific or less scientific. What makes an inquiry more scientific? You might be surprised there is no universally agreed upon answer to this question. None of the descriptors we know of are sufficient by themselves to define scientific inquiry. But all of them give you a way of thinking about some aspects of the process of scientific inquiry. Each one gives you different insights.


Exercise 1.2

As you read about each descriptor below, think about what would make an inquiry more or less scientific. If you think a descriptor is important, use it to revise your definition of scientific inquiry.

Creating an Image of Scientific Inquiry

We will present three descriptors of scientific inquiry. Each provides a different perspective and emphasizes a different aspect of scientific inquiry. We will draw on all three descriptors to compose our definition of scientific inquiry.

Descriptor 1. Experience Carefully Planned in Advance

Sir Ronald Fisher, often called the father of modern statistical design, once referred to research as “experience carefully planned in advance” (1935, p. 8). He said that humans are always learning from experience, from interacting with the world around them. Usually, this learning is haphazard rather than the result of a deliberate process carried out over an extended period of time. Research, Fisher said, was learning from experience, but experience carefully planned in advance.

This phrase can be fully appreciated by looking at each word. The fact that scientific inquiry is based on experience means that it is based on interacting with the world. These interactions could be thought of as the stuff of scientific inquiry. In addition, it is not just any experience that counts. The experience must be carefully planned. The interactions with the world must be conducted with an explicit, describable purpose, and steps must be taken to make the intended learning as likely as possible. This planning is an integral part of scientific inquiry; it is not just a preparation phase. It is one of the things that distinguishes scientific inquiry from many everyday learning experiences. Finally, these steps must be taken beforehand and the purpose of the inquiry must be articulated in advance of the experience. Clearly, scientific inquiry does not happen by accident, by just stumbling into something. Stumbling into something unexpected and interesting can happen while engaged in scientific inquiry, but learning does not depend on it and serendipity does not make the inquiry scientific.

Descriptor 2. Observing Something and Trying to Explain Why It Is the Way It Is

When we were writing this chapter and googled “scientific inquiry,” the first entry was: “Scientific inquiry refers to the diverse ways in which scientists study the natural world and propose explanations based on the evidence derived from their work.” The emphasis is on studying, or observing, and then explaining. This descriptor takes the image of scientific inquiry beyond carefully planned experience and includes explaining what was experienced.

According to the Merriam-Webster dictionary, “explain” means “(a) to make known, (b) to make plain or understandable, (c) to give the reason or cause of, and (d) to show the logical development or relations of” (Merriam-Webster, n.d.). We will use all these definitions. Taken together, they suggest that to explain an observation means to understand it by finding reasons (or causes) for why it is as it is. In this sense of scientific inquiry, the following are synonyms: explaining why, understanding why, and reasoning about causes and effects. Our image of scientific inquiry now includes planning, observing, and explaining why.


We need to add a final note about this descriptor. We have phrased it in a way that suggests “observing something” means you are observing something in real time—observing the way things are or the way things are changing. This is often true. But, observing could mean observing data that already have been collected, maybe by someone else making the original observations (e.g., secondary analysis of NAEP data or analysis of existing video recordings of classroom instruction). We will address secondary analyses more fully in Chap. 4. For now, what is important is that the process requires explaining why the data look like they do.

We must note that for us, the term “data” is not limited to numerical or quantitative data such as test scores. Data can also take many nonquantitative forms, including written survey responses, interview transcripts, journal entries, video recordings of students, teachers, and classrooms, text messages, and so forth.


Exercise 1.3

What are the implications of the statement that just “observing” is not enough to count as scientific inquiry? Does this mean that a detailed description of a phenomenon is not scientific inquiry?

Find sources that define research in education that differ with our position, that say description alone, without explanation, counts as scientific research. Identify the precise points where the opinions differ. What are the best arguments for each of the positions? Which do you prefer? Why?

Descriptor 3. Updating Everyone’s Thinking in Response to More and Better Information

This descriptor focuses on a third aspect of scientific inquiry: updating and advancing the field’s understanding of phenomena that are investigated. This descriptor foregrounds a powerful characteristic of scientific inquiry: the reliability (or trustworthiness) of what is learned and the ultimate inevitability of this learning to advance human understanding of phenomena. Humans might choose not to learn from scientific inquiry, but history suggests that scientific inquiry always has the potential to advance understanding and that, eventually, humans take advantage of these new understandings.

Before exploring these bold claims a bit further, note that this descriptor uses “information” in the same way the previous two descriptors used “experience” and “observations.” These are the stuff of scientific inquiry and we will use them often, sometimes interchangeably. Frequently, we will use the term “data” to stand for all these terms.

An overriding goal of scientific inquiry is for everyone to learn from what one scientist does. Much of this book is about the methods you need to use so others have faith in what you report and can learn the same things you learned. This aspect of scientific inquiry has many implications.

One implication is that scientific inquiry is not a private practice. It is a public practice available for others to see and learn from. Notice how different this is from everyday learning. When you happen to learn something from your everyday experience, often only you gain from the experience. The fact that research is a public practice means it is also a social one. It is best conducted by interacting with others along the way: soliciting feedback at each phase, taking opportunities to present work-in-progress, and benefitting from the advice of others.

A second implication is that you, as the researcher, must be committed to sharing what you are doing and what you are learning in an open and transparent way. This allows all phases of your work to be scrutinized and critiqued. This is what gives your work credibility. The reliability or trustworthiness of your findings depends on your colleagues recognizing that you have used all appropriate methods to maximize the chances that your claims are justified by the data.

A third implication of viewing scientific inquiry as a collective enterprise is the reverse of the second—you must be committed to receiving comments from others. You must treat your colleagues as fair and honest critics even though it might sometimes feel otherwise. You must appreciate their job, which is to remain skeptical while scrutinizing what you have done in considerable detail. To provide the best help to you, they must remain skeptical about your conclusions (when, for example, the data are difficult for them to interpret) until you offer a convincing logical argument based on the information you share. A rather harsh but good-to-remember statement of the role of your friendly critics was voiced by Karl Popper, a well-known twentieth century philosopher of science: “. . . if you are interested in the problem which I tried to solve by my tentative assertion, you may help me by criticizing it as severely as you can” (Popper, 1968, p. 27).

A final implication of this third descriptor is that, as someone engaged in scientific inquiry, you have no choice but to update your thinking when the data support a different conclusion. This applies to your own data as well as to those of others. When data clearly point to a specific claim, even one that is quite different than you expected, you must reconsider your position. If the outcome is replicated multiple times, you need to adjust your thinking accordingly. Scientific inquiry does not let you pick and choose which data to believe; it mandates that everyone update their thinking when the data warrant an update.

Doing Scientific Inquiry

We define scientific inquiry in an operational sense—what does it mean to do scientific inquiry? What kind of process would satisfy all three descriptors: carefully planning an experience in advance; observing and trying to explain what you see; and, contributing to updating everyone’s thinking about an important phenomenon?

We define scientific inquiry as formulating, testing, and revising hypotheses about phenomena of interest.

Of course, we are not the only ones who define it in this way. The definition for the scientific method posted by the editors of Britannica is: “a researcher develops a hypothesis, tests it through various means, and then modifies the hypothesis on the basis of the outcome of the tests and experiments” (Britannica, n.d.).


Notice how defining scientific inquiry this way satisfies each of the descriptors. “Carefully planning an experience in advance” is exactly what happens when formulating a hypothesis about a phenomenon of interest and thinking about how to test it. “Observing a phenomenon” occurs when testing a hypothesis, and “explaining” what is found is required when revising a hypothesis based on the data. Finally, “updating everyone’s thinking” comes from comparing publicly the original with the revised hypothesis.

Doing scientific inquiry, as we have defined it, underscores the value of accumulating knowledge rather than generating random bits of knowledge. Formulating, testing, and revising hypotheses is an ongoing process, with each revised hypothesis begging for another test, whether by the same researcher or by new researchers. The editors of Britannica signaled this cyclic process by adding the following phrase to their definition of the scientific method: “The modified hypothesis is then retested, further modified, and tested again.” Scientific inquiry creates a process that encourages each study to build on the studies that have gone before. Through collective engagement in this process of building study on top of study, the scientific community works together to update its thinking.

Before exploring more fully the meaning of “formulating, testing, and revising hypotheses,” we need to acknowledge that this is not the only way researchers define research. Some researchers prefer a less formal definition, one that includes more serendipity, less planning, less explanation. You might have come across more open definitions such as “research is finding out about something.” We prefer the tighter hypothesis formulation, testing, and revision definition because we believe it provides a single, coherent map for conducting research that addresses many of the thorny problems educational researchers encounter. We believe it is the most useful orientation toward research and the most helpful to learn as a beginning researcher.

A final clarification of our definition is that it applies equally to qualitative and quantitative research. This is a familiar distinction in education that has generated much discussion. You might think our definition favors quantitative methods over qualitative methods because the language of hypothesis formulation and testing is often associated with quantitative methods. In fact, we do not favor one method over another. In Chap. 4, we will illustrate how our definition fits research using a range of quantitative and qualitative methods.

Exercise 1.4

Look for ways to extend what the field knows in an area that has already received attention by other researchers. Specifically, you can search for a program of research carried out by more experienced researchers that has some revised hypotheses that remain untested. Identify a revised hypothesis that you might like to test.

Unpacking the Terms Formulating, Testing, and Revising Hypotheses

To get a full sense of the definition of scientific inquiry we will use throughout this book, it is helpful to spend a little time with each of the key terms.

We first want to make clear that we use the term “hypothesis” as it is defined in most dictionaries and as it is used in many scientific fields rather than as it is usually defined in educational statistics courses. By “hypothesis,” we do not mean a null hypothesis that is accepted or rejected by statistical analysis. Rather, we use “hypothesis” in the sense conveyed by the following definitions: “An idea or explanation for something that is based on known facts but has not yet been proved” (Cambridge University Press, n.d.), and “An unproved theory, proposition, or supposition, tentatively accepted to explain certain facts and to provide a basis for further investigation or argument” (Agnes & Guralnik, 2008).

We distinguish two parts to “hypotheses.” Hypotheses consist of predictions and rationales. Predictions are statements about what you expect to find when you inquire about something. Rationales are explanations for why you made the predictions you did, why you believe your predictions are correct. So, for us “formulating hypotheses” means making explicit predictions and developing rationales for the predictions.

“Testing hypotheses” means making observations that allow you to assess in what ways your predictions were correct and in what ways they were incorrect. In education research, it is rarely useful to think of your predictions as either right or wrong. Because of the complexity of most issues you will investigate, most predictions will be right in some ways and wrong in others.

By studying the observations you make (data you collect) to test your hypotheses, you can revise your hypotheses to better align with the observations. This means revising your predictions plus revising your rationales to justify your adjusted predictions. Even though you might not run another test, formulating revised hypotheses is an essential part of conducting a research study. Comparing your original and revised hypotheses informs everyone of what you learned by conducting your study. In addition, a revised hypothesis sets the stage for you or someone else to extend your study and accumulate more knowledge of the phenomenon.

We should note that not everyone makes a clear distinction between predictions and rationales as two aspects of hypotheses. In fact, common, non-scientific uses of the word “hypothesis” may limit it to only a prediction or only an explanation (or rationale). We choose to explicitly include both prediction and rationale in our definition of hypothesis, not because we assert this should be the universal definition, but because we want to foreground the importance of both parts acting in concert. Using “hypothesis” to represent both prediction and rationale could hide the two aspects, but we make them explicit because they provide different kinds of information. It is usually easier to make predictions than develop rationales because predictions can be guesses, hunches, or gut feelings about which you have little confidence. Developing a compelling rationale requires careful thought plus reading what other researchers have found plus talking with your colleagues. Often, while you are developing your rationale you will find good reasons to change your predictions. Developing good rationales is the engine that drives scientific inquiry. Rationales are essentially descriptions of how much you know about the phenomenon you are studying. Throughout this guide, we will elaborate on how developing good rationales drives scientific inquiry. For now, we simply note that it can sharpen your predictions and help you to interpret your data as you test your hypotheses.


Hypotheses in education research take a variety of forms or types. This is because there are a variety of phenomena that can be investigated. Investigating educational phenomena is sometimes best done using qualitative methods, sometimes using quantitative methods, and most often using mixed methods (e.g., Hay, 2016; Weis et al., 2019a; Weisner, 2005). This means that, given our definition, hypotheses are equally applicable to qualitative and quantitative investigations.

Hypotheses take different forms when they are used to investigate different kinds of phenomena. Two very different activities in education could be labeled conducting experiments and conducting descriptive studies. In an experiment, a hypothesis makes a prediction about anticipated changes, say the changes that occur when a treatment or intervention is applied. You might investigate how students’ thinking changes during a particular kind of instruction.

A second type of hypothesis, relevant for descriptive research, makes a prediction about what you will find when you investigate and describe the nature of a situation. The goal is to understand a situation as it exists rather than to understand a change from one situation to another. In this case, your prediction is what you expect to observe. Your rationale is the set of reasons for making this prediction; it is your current explanation for why the situation will look like it does.

You will probably read, if you have not already, that some researchers say you do not need a prediction to conduct a descriptive study. We will discuss this point of view in Chap. 2. For now, we simply claim that scientific inquiry, as we have defined it, applies to all kinds of research studies. Descriptive studies, like others, not only benefit from formulating, testing, and revising hypotheses, but also need hypothesis formulating, testing, and revising.

One reason we define research as formulating, testing, and revising hypotheses is that if you think of research in this way you are less likely to go wrong. It is a useful guide for the entire process, as we will describe in detail in the chapters ahead. For example, as you build the rationale for your predictions, you are constructing the theoretical framework for your study (Chap. 3). As you work out the methods you will use to test your hypothesis, every decision you make will be based on asking, “Will this help me formulate or test or revise my hypothesis?” (Chap. 4). As you interpret the results of testing your predictions, you will compare them to what you predicted and examine the differences, focusing on how you must revise your hypotheses (Chap. 5). By anchoring the process to formulating, testing, and revising hypotheses, you will make smart decisions that yield a coherent and well-designed study.

Exercise 1.5

Compare the concept of formulating, testing, and revising hypotheses with the descriptions of scientific inquiry contained in Scientific Research in Education (NRC, 2002). How are they similar or different?

Exercise 1.6

Provide an example to illustrate and emphasize the differences between everyday learning/thinking and scientific inquiry.

Learning from Doing Scientific Inquiry

We noted earlier that a measure of what you have learned by conducting a research study is found in the differences between your original hypothesis and your revised hypothesis based on the data you collected to test your hypothesis. We will elaborate this statement in later chapters, but we preview our argument here.

Even before collecting data, scientific inquiry requires cycles of making a prediction, developing a rationale, refining your predictions, reading and studying more to strengthen your rationale, refining your predictions again, and so forth. And, even if you have run through several such cycles, you still will likely find that when you test your prediction you will be partly right and partly wrong. The results will support some parts of your predictions but not others, or the results will “kind of” support your predictions. A critical part of scientific inquiry is making sense of your results by interpreting them against your predictions. Carefully describing what aspects of your data supported your predictions, what aspects did not, and what data fell outside of any predictions is not an easy task, but you cannot learn from your study without doing this analysis.


Analyzing the matches and mismatches between your predictions and your data allows you to formulate different rationales that would have accounted for more of the data. The best revised rationale is the one that accounts for the most data. Once you have revised your rationales, you can think about the predictions they best justify or explain. It is by comparing your original rationales to your new rationales that you can sort out what you learned from your study.

Suppose your study was an experiment. Maybe you were investigating the effects of a new instructional intervention on students’ learning. Your original rationale was your explanation for why the intervention would change the learning outcomes in a particular way. Your revised rationale explained why the changes that you observed occurred like they did and why your revised predictions are better. Maybe your original rationale focused on the potential of the activities if they were implemented in ideal ways and your revised rationale included the factors that are likely to affect how teachers implement them. By comparing the before and after rationales, you are describing what you learned—what you can explain now that you could not before. Another way of saying this is that you are describing how much more you understand now than before you conducted your study.

Revised predictions based on carefully planned and collected data usually exhibit some of the following features compared with the originals: more precision, more completeness, and broader scope. Revised rationales have more explanatory power and become more complete, more aligned with the new predictions, sharper, and overall more convincing.

Part II. Why Do Educators Do Research?

Doing scientific inquiry is a lot of work. Each phase of the process takes time, and you will often cycle back to improve earlier phases as you engage in later phases. Because of the significant effort required, you should make sure your study is worth it. So, from the beginning, you should think about the purpose of your study. Why do you want to do it? And, because research is a social practice, you should also think about whether the results of your study are likely to be important and significant to the education community.

If you are doing research in the way we have described—as scientific inquiry—then one purpose of your study is to understand, not just to describe or evaluate or report. As we noted earlier, when you formulate hypotheses, you are developing rationales that explain why things might be like they are. In our view, trying to understand and explain is what separates research from other kinds of activities, like evaluating or describing.

One reason understanding is so important is that it allows researchers to see how or why something works like it does. When you see how something works, you are better able to predict how it might work in other contexts, under other conditions. And, because conditions, or contextual factors, matter a lot in education, gaining insights into applying your findings to other contexts increases the contributions of your work and its importance to the broader education community.

Consequently, the purposes of research studies in education often include the more specific aim of identifying and understanding the conditions under which the phenomena being studied work like the observations suggest. A classic example of this kind of study in mathematics education was reported by William Brownell and Harold Moser in 1949. They were trying to establish which method of subtracting whole numbers could be taught most effectively—the regrouping method or the equal additions method. However, they realized that effectiveness might depend on the conditions under which the methods were taught—“meaningfully” versus “mechanically.” So, they designed a study that crossed the two instructional approaches with the two different methods (regrouping and equal additions). Among other results, they found that these conditions did matter. The regrouping method was more effective under the meaningful condition than the mechanical condition, but the same was not true for the equal additions algorithm.

What do education researchers want to understand? In our view, the ultimate goal of education is to offer all students the best possible learning opportunities. So, we believe the ultimate purpose of scientific inquiry in education is to develop understanding that supports the improvement of learning opportunities for all students. We say “ultimate” because there are lots of issues that must be understood to improve learning opportunities for all students. Hypotheses about many aspects of education are connected, ultimately, to students’ learning. For example, formulating and testing a hypothesis that preservice teachers need to engage in particular kinds of activities in their coursework in order to teach particular topics well is, ultimately, connected to improving students’ learning opportunities. So is hypothesizing that school districts often devote relatively few resources to instructional leadership training or hypothesizing that positioning mathematics as a tool students can use to combat social injustice can help students see the relevance of mathematics to their lives.

We do not exclude the importance of research on educational issues more removed from improving students’ learning opportunities, but we do think the argument for their importance will be more difficult to make. If there is no way to imagine a connection between your hypothesis and improving learning opportunities for students, even a distant connection, we recommend you reconsider whether it is an important hypothesis within the education community.

Notice that we said the ultimate goal of education is to offer all students the best possible learning opportunities. For too long, educators have been satisfied with a goal of offering rich learning opportunities for lots of students, sometimes even for just the majority of students, but not necessarily for all students. Evaluations of success often are based on outcomes that show high averages. In other words, if many students have learned something, or even a smaller number have learned a lot, educators may have been satisfied. The problem is that there is usually a pattern in the groups of students who receive lower quality opportunities—students of color and students who live in poor areas, urban and rural. This is not acceptable. Consequently, we emphasize the premise that the purpose of education research is to offer rich learning opportunities to all students.

One way to make sure you will be able to convince others of the importance of your study is to consider investigating some aspect of teachers’ shared instructional problems. Historically, researchers in education have set their own research agendas, regardless of the problems teachers are facing in schools. It is increasingly recognized that teachers have had trouble applying to their own classrooms what researchers find. To address this problem, a researcher could partner with a teacher—better yet, a small group of teachers—and talk with them about instructional problems they all share. These discussions can create a rich pool of problems researchers can consider. If researchers pursued one of these problems (preferably alongside teachers), the connection to improving learning opportunities for all students could be direct and immediate. “Grounding a research question in instructional problems that are experienced across multiple teachers’ classrooms helps to ensure that the answer to the question will be of sufficient scope to be relevant and significant beyond the local context” (Cai et al., 2019b, p. 115).

As a beginning researcher, determining the relevance and importance of a research problem is especially challenging. We recommend talking with advisors, other experienced researchers, and peers to test the educational importance of possible research problems and topics of study. You will also learn much more about the issue of research importance when you read Chap. 5.

Exercise 1.7

Identify a problem in education that is closely connected to improving learning opportunities and a problem that has a less close connection. For each problem, write a brief argument (like a logical sequence of if-then statements) that connects the problem to all students’ learning opportunities.

Part III. Conducting Research as a Practice of Failing Productively

Scientific inquiry involves formulating hypotheses about phenomena that are not fully understood—by you or anyone else. Even if you are able to inform your hypotheses with lots of knowledge that has already been accumulated, you are likely to find that your prediction is not entirely accurate. This is normal. Remember, scientific inquiry is a process of constantly updating your thinking. More and better information means revising your thinking, again, and again, and again. Because you never fully understand a complicated phenomenon and your hypotheses never produce completely accurate predictions, it is easy to believe you are somehow failing.

The trick is to fail upward, to fail to predict accurately in ways that inform your next hypothesis so you can make a better prediction. Some of the best-known researchers in education have been open and honest about the many times their predictions were wrong and, based on the results of their studies and those of others, they continuously updated their thinking and changed their hypotheses.

A striking example of publicly revising (actually reversing) hypotheses due to incorrect predictions is found in the work of Lee J. Cronbach, one of the most distinguished educational psychologists of the twentieth century. In 1955, Cronbach delivered his presidential address to the American Psychological Association. Titling it “The Two Disciplines of Scientific Psychology,” Cronbach proposed a rapprochement between two research approaches—correlational studies that focused on individual differences and experimental studies that focused on instructional treatments controlling for individual differences. (We will examine different research approaches in Chap. 4). If these approaches could be brought together, reasoned Cronbach (1957), researchers could find interactions between individual characteristics and treatments (aptitude-treatment interactions or ATIs), fitting the best treatments to different individuals.

In 1975, after years of research by many researchers looking for ATIs, Cronbach acknowledged the evidence for simple, useful ATIs had not been found. Even when trying to find interactions between a few variables that could provide instructional guidance, the analysis, said Cronbach, creates “a hall of mirrors that extends to infinity, tormenting even the boldest investigators and defeating even ambitious designs” (Cronbach, 1975, p. 119).

As he was reflecting back on his work, Cronbach (1986) recommended moving away from documenting instructional effects through statistical inference (an approach he had championed for much of his career) and toward approaches that probe the reasons for these effects, approaches that provide a “full account of events in a time, place, and context” (Cronbach, 1986, p. 104). This is a remarkable change in hypotheses, a change based on data and made fully transparent. Cronbach understood the value of failing productively.

Closer to home, in a less dramatic example, one of us began a line of scientific inquiry into how to prepare elementary preservice teachers to teach early algebra. Teaching early algebra meant engaging elementary students in early forms of algebraic reasoning. Such reasoning should help them transition from arithmetic to algebra. To begin this line of inquiry, a set of activities for preservice teachers were developed. Even though the activities were based on well-supported hypotheses, they largely failed to engage preservice teachers as predicted because of unanticipated challenges the preservice teachers faced. To capitalize on this failure, follow-up studies were conducted, first to better understand elementary preservice teachers’ challenges with preparing to teach early algebra, and then to better support preservice teachers in navigating these challenges. In this example, the initial failure was a necessary step in the researchers’ scientific inquiry and furthered the researchers’ understanding of this issue.

We present another example of failing productively in Chap. 2. That example emerges from recounting the history of a well-known research program in mathematics education.

Making mistakes is an inherent part of doing scientific research. Conducting a study is rarely a smooth path from beginning to end. We recommend that you keep the following things in mind as you begin a career of conducting research in education.

First, do not get discouraged when you make mistakes; do not fall into the trap of feeling like you are not capable of doing research because you make too many errors.

Second, learn from your mistakes. Do not ignore your mistakes or treat them as errors that you simply need to forget and move past. Mistakes are rich sites for learning—in research just as in other fields of study.

Third, by reflecting on your mistakes, you can learn to make better mistakes, mistakes that inform you about a productive next step. You will not be able to eliminate your mistakes, but you can set a goal of making better and better mistakes.

Exercise 1.8

How does scientific inquiry differ from everyday learning in giving you the tools to fail upward? You may find helpful perspectives on this question in other resources on science and scientific inquiry (e.g., Failure: Why Science Is So Successful by Firestein, 2015).

Exercise 1.9

Use what you have learned in this chapter to write a new definition of scientific inquiry. Compare this definition with the one you wrote before reading this chapter. If you are reading this book as part of a course, compare your definition with your colleagues’ definitions. Develop a consensus definition with everyone in the course.

Part IV. Preview of Chap. 2

Now that you have a good idea of what research is, at least of what we believe research is, the next step is to think about how to actually begin doing research. This means how to begin formulating, testing, and revising hypotheses. As for all phases of scientific inquiry, there are lots of things to think about. Because it is critical to start well, we devote Chap. 2 to getting started with formulating hypotheses.

References

Agnes, M., & Guralnik, D. B. (Eds.). (2008). Hypothesis. In Webster’s new world college dictionary (4th ed.). Wiley.

Britannica. (n.d.). Scientific method. In Encyclopaedia Britannica. Retrieved July 15, 2022, from https://www.britannica.com/science/scientific-method

Brownell, W. A., & Moser, H. E. (1949). Meaningful vs. mechanical learning: A study in grade III subtraction. Duke University Press.

Cai, J., Morris, A., Hohensee, C., Hwang, S., Robison, V., Cirillo, M., Kramer, S. L., & Hiebert, J. (2019b). Posing significant research questions. Journal for Research in Mathematics Education, 50(2), 114–120. https://doi.org/10.5951/jresematheduc.50.2.0114

Cambridge University Press. (n.d.). Hypothesis. In Cambridge dictionary. Retrieved July 15, 2022, from https://dictionary.cambridge.org/us/dictionary/english/hypothesis

Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671–684.

Cronbach, L. J. (1975). Beyond the two disciplines of scientific psychology. American Psychologist, 30, 116–127.

Cronbach, L. J. (1986). Social inquiry by and for earthlings. In D. W. Fiske & R. A. Shweder (Eds.), Metatheory in social science: Pluralisms and subjectivities (pp. 83–107). University of Chicago Press.

Hay, C. M. (Ed.). (2016). Methods that matter: Integrating mixed methods for more effective social science research. University of Chicago Press.

Merriam-Webster. (n.d.). Explain. In Merriam-Webster.com dictionary. Retrieved July 15, 2022, from https://www.merriam-webster.com/dictionary/explain

National Research Council. (2002). Scientific research in education. National Academy Press.

Weis, L., Eisenhart, M., Duncan, G. J., Albro, E., Bueschel, A. C., Cobb, P., Eccles, J., Mendenhall, R., Moss, P., Penuel, W., Ream, R. K., Rumbaut, R. G., Sloane, F., Weisner, T. S., & Wilson, J. (2019a). Mixed methods for studies that address broad and enduring issues in education research. Teachers College Record, 121, 100307.

Weisner, T. S. (Ed.). (2005). Discovering successful pathways in children’s development: Mixed methods in the study of childhood and family life. University of Chicago Press.


Author information

Authors and Affiliations

School of Education, University of Delaware, Newark, DE, USA

James Hiebert, Anne K Morris & Charles Hohensee

Department of Mathematical Sciences, University of Delaware, Newark, DE, USA

Jinfa Cai & Stephen Hwang



About this chapter

Hiebert, J., Cai, J., Hwang, S., Morris, A.K., Hohensee, C. (2023). What Is Research, and Why Do People Do It?. In: Doing Research: A New Researcher’s Guide. Research in Mathematics Education. Springer, Cham. https://doi.org/10.1007/978-3-031-19078-0_1

Building a program of research

Affiliation

  • School of Nursing, University of California, San Francisco, 2 Koret Way, San Francisco, CA 94143-0608, USA. [email protected]
  • PMID: 19566633
  • DOI: 10.1111/j.1742-7924.2009.00115.x

This article provides highlights of a talk titled, "Building a Program of Research," given at the Japan Academy of Nursing Science's 28th annual meeting, Fukuoka, Japan, on 13 December 2008. A program of research is defined as a coherent expression of a researcher's area of interest that has public health significance, builds from the published research literature in the field, has relevance for clinical nursing practice, and captures the passion and commitment of the researcher. The Outcomes Model for Health Care Research is proposed as a framework for how to develop and articulate a program of research. Eight steps are proposed to help a new researcher to think about how to build a program of research.

MeSH terms

  • Education, Nursing, Graduate / organization & administration
  • Models, Nursing
  • Nursing Research / education
  • Nursing Research / organization & administration*
  • Outcome and Process Assessment, Health Care / organization & administration*
  • Planning Techniques
  • Public Health
  • Research Design*
  • Review Literature as Topic
  • Time Management


Early Career Geoscience Faculty: Teaching, Research, and Managing Your Career


Developing a Thriving Research Program

Related links

  • Undergraduate Research
  • Career Prep: Moving Your Research Forward

As you begin an academic career in the geosciences, you may face many new challenges, including (probably) getting your own research program up and running. This involves many components, addressed on the pages below.

Jump down to Planning Your Research Program * Funding Your Research * Collaborating With Students * Setting Up Your Lab and Obtaining Equipment * Carving Out Time * Publishing Your Work: Strategies for Moving Forward


General resources

  • Making the Right Moves: A Practical Guide to Scientific Management for Postdocs and New Faculty is a book about managing your research program. Click on the title to see the table of contents; you can then download it, if you're interested, in whole or in part.
  • For a down-to-earth guide to setting up and managing your new lab, read At The Helm: Leading Your Laboratory , by Kathy Barker.
  • Career Trends: Running Your Lab is a free booklet, available from Science Careers when you set up a free account (the link above will direct you to do so). It includes advice on managing people, time, projects, and budgets.
  • The EarthCube Early Career group is an online community "where graduate students, post docs, young professors and other young researchers can exchange experiences and ideas."

Flowchart designed by Richard Yuretich to facilitate planning a research program

Planning Your Research Program

Funding Your Research


Collaborating with Students


Setting Up Your Lab and Obtaining Equipment


Carving Out Time


Publishing Your Work: Strategies for Moving Forward



Research Process – Steps, Examples and Tips

Research Process

Definition:

Research Process is a systematic and structured approach that involves the collection, analysis, and interpretation of data or information to answer a specific research question or solve a particular problem.

Research Process Steps

Research Process Steps are as follows:

Identify the Research Question or Problem

This is the first step in the research process. It involves identifying a problem or question that needs to be addressed. The research question should be specific, relevant, and focused on a particular area of interest.

Conduct a Literature Review

Once the research question has been identified, the next step is to conduct a literature review. This involves reviewing existing research and literature on the topic to identify any gaps in knowledge or areas where further research is needed. A literature review helps to provide a theoretical framework for the research and also ensures that the research is not duplicating previous work.

Formulate a Hypothesis or Research Objectives

Based on the research question and literature review, the researcher can formulate a hypothesis or research objectives. A hypothesis is a statement that can be tested to determine its validity, while research objectives are specific goals that the researcher aims to achieve through the research.

Design a Research Plan and Methodology

This step involves designing a research plan and methodology that will enable the researcher to collect and analyze data to test the hypothesis or achieve the research objectives. The research plan should include details on the sample size, data collection methods, and data analysis techniques that will be used.

Collect and Analyze Data

This step involves collecting and analyzing data according to the research plan and methodology. Data can be collected through various methods, including surveys, interviews, observations, or experiments. The data analysis process involves cleaning and organizing the data, applying statistical and analytical techniques to the data, and interpreting the results.
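To make the "cleaning and organizing" part of this step concrete, here is a minimal sketch in Python using pandas. The file name survey.csv and the column names performance_score and diet_group are invented for illustration, and the cleaning rules shown are examples rather than a prescribed procedure.

```python
import pandas as pd

# Load raw survey responses (hypothetical file and column names).
df = pd.read_csv("survey.csv")

# Cleaning: coerce the outcome to numeric, drop rows where it is
# missing or unparseable, and normalise a categorical field.
df["performance_score"] = pd.to_numeric(df["performance_score"], errors="coerce")
df = df.dropna(subset=["performance_score"])
df["diet_group"] = df["diet_group"].str.strip().str.lower()

# Organizing: a per-group summary to sanity-check the cleaned data
# before any formal statistical analysis.
print(df.groupby("diet_group")["performance_score"].describe())
```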

Interpret the Findings and Draw Conclusions

After analyzing the data, the researcher must interpret the findings and draw conclusions. This involves assessing the validity and reliability of the results and determining whether the hypothesis was supported or not. The researcher must also consider any limitations of the research and discuss the implications of the findings.

Communicate the Results

Finally, the researcher must communicate the results of the research through a research report, presentation, or publication. The research report should provide a detailed account of the research process, including the research question, literature review, research methodology, data analysis, findings, and conclusions. The report should also include recommendations for further research in the area.

Review and Revise

The research process is an iterative one, and it is important to review and revise the research plan and methodology as necessary. Researchers should assess the quality of their data and methods, reflect on their findings, and consider areas for improvement.

Ethical Considerations

Throughout the research process, ethical considerations must be taken into account. This includes ensuring that the research design protects the welfare of research participants, obtaining informed consent, maintaining confidentiality and privacy, and avoiding any potential harm to participants or their communities.

Dissemination and Application

The final step in the research process is to disseminate the findings and apply the research to real-world settings. Researchers can share their findings through academic publications, presentations at conferences, or media coverage. The research can be used to inform policy decisions, develop interventions, or improve practice in the relevant field.

Research Process Example

Following is a Research Process Example:

Research Question: What are the effects of a plant-based diet on athletic performance in high school athletes?

Step 1: Background Research. Conduct a literature review to gain a better understanding of the existing research on the topic. Read academic articles and research studies related to plant-based diets, athletic performance, and high school athletes.

Step 2: Develop a Hypothesis. Based on the literature review, develop a hypothesis that a plant-based diet positively affects athletic performance in high school athletes.

Step 3: Design the Study. Design a study to test the hypothesis. Decide on the study population, sample size, and research methods. For this study, you could use a survey to collect data on dietary habits and athletic performance from a sample of high school athletes who follow a plant-based diet and a sample of high school athletes who do not.

Step 4: Collect Data. Distribute the survey to the selected sample and collect data on dietary habits and athletic performance.

Step 5: Analyze Data. Use statistical analysis to compare the data from the two samples and determine whether there is a significant difference in athletic performance between those who follow a plant-based diet and those who do not (a code sketch of this comparison follows Step 8).

Step 6: Interpret Results. Interpret the results of the analysis in the context of the research question and hypothesis. Discuss any limitations or potential biases in the study design.

Step 7: Draw Conclusions. Based on the results, draw conclusions about whether a plant-based diet has a significant effect on athletic performance in high school athletes. If the hypothesis is supported by the data, discuss potential implications and future research directions.

Step 8: Communicate Findings. Communicate the findings of the study in a clear and concise manner. Use appropriate language, visuals, and formats to ensure that the findings are understood and valued.
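As an illustrative sketch of the analysis in Step 5, the comparison could be run as a two-sample t-test in Python. The file and column names are hypothetical, and the choice of Welch's t-test is an assumption made for the example, not part of the study design described above.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey data from Step 4: one row per athlete, with a
# diet_group label and a numeric performance_score.
df = pd.read_csv("athlete_survey.csv")
plant_based = df.loc[df["diet_group"] == "plant-based", "performance_score"]
other = df.loc[df["diet_group"] == "omnivore", "performance_score"]

# Welch's t-test: compares group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(plant_based, other, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Step 6 then interprets this against a pre-chosen significance level.
alpha = 0.05
if p_value < alpha:
    print("Significant difference in performance between the two groups.")
else:
    print("No significant difference detected at the 0.05 level.")
```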

Applications of Research Process

The research process has numerous applications across a wide range of fields and industries. Some examples of applications of the research process include:

  • Scientific research: The research process is widely used in scientific research to investigate phenomena in the natural world and develop new theories or technologies. This includes fields such as biology, chemistry, physics, and environmental science.
  • Social sciences: The research process is commonly used in the social sciences to study human behavior, social structures, and institutions. This includes fields such as sociology, psychology, anthropology, and economics.
  • Education: The research process is used in education to study learning processes, curriculum design, and teaching methodologies. This includes research on student achievement, teacher effectiveness, and educational policy.
  • Healthcare: The research process is used in healthcare to investigate medical conditions, develop new treatments, and evaluate healthcare interventions. This includes fields such as medicine, nursing, and public health.
  • Business and industry: The research process is used in business and industry to study consumer behavior and market trends and to develop new products or services. This includes market research, product development, and customer satisfaction research.
  • Government and policy: The research process is used in government and policy to evaluate the effectiveness of policies and programs and to inform policy decisions. This includes research on social welfare, crime prevention, and environmental policy.

Purpose of Research Process

The purpose of the research process is to systematically and scientifically investigate a problem or question in order to generate new knowledge or solve a problem. The research process enables researchers to:

  • Identify gaps in existing knowledge: By conducting a thorough literature review, researchers can identify gaps in existing knowledge and develop research questions that address these gaps.
  • Collect and analyze data: The research process provides a structured approach to collecting and analyzing data. Researchers can use a variety of research methods, including surveys, experiments, and interviews, to collect data that is valid and reliable.
  • Test hypotheses: The research process allows researchers to test hypotheses and make evidence-based conclusions. Through the systematic analysis of data, researchers can draw conclusions about the relationships between variables and develop new theories or models.
  • Solve problems: The research process can be used to solve practical problems and improve real-world outcomes. For example, researchers can develop interventions to address health or social problems, evaluate the effectiveness of policies or programs, and improve organizational processes.
  • Generate new knowledge: The research process is a key way to generate new knowledge and advance understanding in a given field. By conducting rigorous and well-designed research, researchers can make significant contributions to their field and help to shape future research.

Tips for Research Process

Here are some tips for the research process:

  • Start with a clear research question: A well-defined research question is the foundation of a successful research project. It should be specific, relevant, and achievable within the given time frame and resources.
  • Conduct a thorough literature review: A comprehensive literature review will help you to identify gaps in existing knowledge, build on previous research, and avoid duplication. It will also provide a theoretical framework for your research.
  • Choose appropriate research methods: Select research methods that are appropriate for your research question, objectives, and sample size. Ensure that your methods are valid, reliable, and ethical.
  • Be organized and systematic: Keep detailed notes throughout the research process, including your research plan, methodology, data collection, and analysis. This will help you to stay organized and ensure that you don’t miss any important details.
  • Analyze data rigorously: Use appropriate statistical and analytical techniques to analyze your data. Ensure that your analysis is valid, reliable, and transparent.
  • Interpret results carefully: Interpret your results in the context of your research question and objectives. Consider any limitations or potential biases in your research design, and be cautious in drawing conclusions.
  • Communicate effectively: Communicate your research findings clearly and effectively to your target audience. Use appropriate language, visuals, and formats to ensure that your findings are understood and valued.
  • Collaborate and seek feedback: Collaborate with other researchers, experts, or stakeholders in your field. Seek feedback on your research design, methods, and findings to ensure that they are relevant, meaningful, and impactful.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Implementation research: what it is and how to do it

  • David H Peters, professor1
  • Taghreed Adam, scientist2
  • Olakunle Alonge, assistant scientist1
  • Irene Akua Agyepong, specialist public health3
  • Nhan Tran, manager4
  • 1 Johns Hopkins University Bloomberg School of Public Health, Department of International Health, 615 N Wolfe St, Baltimore, MD 21205, USA
  • 2 Alliance for Health Policy and Systems Research, World Health Organization, CH-1211 Geneva 27, Switzerland
  • 3 University of Ghana School of Public Health/Ghana Health Service, Accra, Ghana
  • 4 Alliance for Health Policy and Systems Research, Implementation Research Platform, World Health Organization, CH-1211 Geneva 27, Switzerland
  • Correspondence to: D H Peters dpeters{at}jhsph.edu
  • Accepted 8 October 2013

Implementation research is a growing but not well understood field of health research that can contribute to more effective public health and clinical policies and programmes. This article provides a broad definition of implementation research and outlines key principles for how to do it

The field of implementation research is growing, but it is not well understood despite the need for better research to inform decisions about health policies, programmes, and practices. This article focuses on the context and factors affecting implementation, the key audiences for the research, implementation outcome variables that describe various aspects of how implementation occurs, and the study of implementation strategies that support the delivery of health services, programmes, and policies. We provide a framework for using the research question as the basis for selecting among the wide range of qualitative, quantitative, and mixed methods that can be applied in implementation research, along with brief descriptions of methods specifically suitable for implementation research. Expanding the use of well designed implementation research should contribute to more effective public health and clinical policies and programmes.

Defining implementation research

Implementation research attempts to solve a wide range of implementation problems; it has its origins in several disciplines and research traditions (supplementary table A). Although progress has been made in conceptualising implementation research over the past decade, 1 considerable confusion persists about its terminology and scope. 2 3 4 The word “implement” comes from the Latin “implere,” meaning to fulfil or to carry into effect. 5 This provides a basis for a broad definition of implementation research that can be used across research traditions and has meaning for practitioners, policy makers, and the interested public: “Implementation research is the scientific inquiry into questions concerning implementation—the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).”

Implementation research can consider any aspect of implementation, including the factors affecting implementation, the processes of implementation, and the results of implementation, including how to introduce potential solutions into a health system or how to promote their large scale use and sustainability. The intent is to understand what, why, and how interventions work in “real world” settings and to test approaches to improve them.

Principles of implementation research

Implementation research seeks to understand and work within real world conditions, rather than trying to control for these conditions or to remove their influence as causal effects. This implies working with populations that will be affected by an intervention, rather than selecting beneficiaries who may not represent the target population of an intervention (such as studying healthy volunteers or excluding patients who have comorbidities).

Context plays a central role in implementation research. Context can include the social, cultural, economic, political, legal, and physical environment, as well as the institutional setting, comprising various stakeholders and their interactions, and the demographic and epidemiological conditions. The structure of the health systems (for example, the roles played by governments, non-governmental organisations, other private providers, and citizens) is particularly important for implementation research on health.

Implementation research is especially concerned with the users of the research and not purely the production of knowledge. These users may include managers and teams using quality improvement strategies, executive decision makers seeking advice for specific decisions, policy makers who need to be informed about particular programmes, practitioners who need to be convinced to use interventions that are based on evidence, people who are influenced to change their behaviour to have a healthier life, or communities who are conducting the research and taking action through the research to improve their conditions (supplementary table A). One important implication is that often these actors should be intimately involved in the identification, design, and conduct phases of research and not just be targets for dissemination of study results.

Implementation outcome variables

Implementation outcome variables describe the intentional actions to deliver services. 6 These implementation outcome variables—acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, coverage, and sustainability—can all serve as indicators of the success of implementation (table 1). Implementation research uses these variables to assess how well implementation has occurred or to provide insights about how implementation contributes to health status or other important health outcomes.

Table 1: Implementation outcome variables
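As a minimal sketch of how these eight outcome variables might be recorded for a single programme, the structure below scores each on a common scale. The 0-to-1 scale, the threshold, and the example values are all invented for illustration and are not part of the framework itself.

```python
from dataclasses import dataclass, asdict

@dataclass
class ImplementationOutcomes:
    """The eight implementation outcome variables, scored 0-1 (illustrative)."""
    acceptability: float
    adoption: float
    appropriateness: float
    feasibility: float
    fidelity: float
    implementation_cost: float  # here 1.0 means fully within budget
    coverage: float             # proportion of the target population reached
    sustainability: float

# Invented scores for a hypothetical immunisation outreach programme.
outcomes = ImplementationOutcomes(
    acceptability=0.80, adoption=0.70, appropriateness=0.90, feasibility=0.60,
    fidelity=0.75, implementation_cost=0.90, coverage=0.55, sustainability=0.50,
)

# Flag the weakest aspects of implementation for follow-up.
for name, score in asdict(outcomes).items():
    status = "needs attention" if score < 0.6 else "on track"
    print(f"{name:20s} {score:.2f}  {status}")
```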

Implementation strategies

Curran and colleagues defined an “implementation intervention” as a method to “enhance the adoption of a ‘clinical’ intervention,” such as the use of job aids, provider education, or audit procedures. 7 The concept can be broadened to any type of strategy that is designed to support a clinical or population and public health intervention (for example, outreach clinics and supervision checklists are implementation strategies used to improve the coverage and quality of immunisation).

A review of ways to improve health service delivery in low and middle income countries identified a wide range of successful implementation strategies (supplementary table B). 8 Even in the most resource constrained environments, measuring change, informing stakeholders, and using information to guide decision making were found to be critical to successful implementation.

Implementation influencing variables

Other factors that influence implementation may need to be considered in implementation research. Sabatier summarised a set of such factors that influence policy implementation (clarity of objectives, causal theory, implementing personnel, support of interest groups, and managerial authority and resources). 9

The large array of contextual factors that influence implementation, interact with each other, and change over time highlights the fact that implementation often occurs as part of complex adaptive systems. 10 Some implementation strategies are particularly suitable for working in complex systems. These include strategies to provide feedback to key stakeholders and to encourage learning and adaptation by implementing agencies and beneficiary groups. Such strategies have implications for research, as the study methods need to be sufficiently flexible to account for changes or adaptations in what is actually being implemented. 8 11 Research designs that depend on having a single and fixed intervention, such as a typical randomised controlled trial, would not be an appropriate design to study phenomena that change, especially when they change in unpredictable and variable ways.

Another implication of studying complex systems is that the research may need to use multiple methods and different sources of information to understand an implementation problem. Because implementation activities and effects are not usually static or linear processes, research designs often need to be able to observe and analyse these sometimes iterative and changing elements at several points in time and to consider unintended consequences.

Implementation research questions

As in other types of health systems research, the research question is king in implementation research. Implementation research takes a pragmatic approach, placing the research question (or implementation problem) as the starting point of inquiry; this then dictates the research methods and assumptions to be used. Implementation research questions can cover a wide variety of topics and are frequently organised around theories of change or the type of research objective (examples are in supplementary table C). 12 13

Implementation research can overlap with other types of research used in medicine and public health, and the distinctions are not always clear cut. A range of implementation research exists, based on the centrality of implementation in the research question, the degree to which the research takes place in a real world setting with routine populations, and the role of implementation strategies and implementation variables in the research (figure).

Figure: Spectrum of implementation research 33


A more detailed description of the research question can help researchers and practitioners to determine the type of research methods that should be used. In table 2, we break down the research question first by its objective: to explore, describe, influence, explain, or predict. This is followed by a typical implementation research question based on each objective. Finally, we describe a set of research methods for each type of research question.

Table 2: Type of implementation research objective, implementation question, and research methods

Much of evidence based medicine is concerned with the objective of influence, or whether an intervention produces an expected outcome, which can be broken down further by the level of certainty in the conclusions drawn from the study. The nature of the inquiry (for example, the amount of risk and considerations of ethics, costs, and timeliness), and the interests of different audiences, should determine the level of uncertainty. 8 14 Research questions concerning programmatic decisions about the process of an implementation strategy may justify a lower level of certainty for the manager and policy maker, using research methods that would support an adequacy or plausibility inference. 14 Where a high risk of harm exists and sufficient time and resources are available, a probability study design might be more appropriate, in which the result in an area where the intervention is implemented is compared with areas without implementation with a low probability of error (for example, P<0.05). These differences in the level of confidence affect the study design in terms of sample size and the need for concurrent or randomised comparison groups. 8 14
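As a sketch of the "probability" design mentioned above, the snippet below compares coverage of an outcome between intervention and comparison areas with a chi-square test. The counts are invented, and a real cluster-based design would also have to account for clustering in the analysis.

```python
from scipy.stats import chi2_contingency

# Invented counts of people reached / not reached by the intended
# outcome, in intervention areas vs comparison areas.
table = [
    [420, 180],  # intervention areas: covered, not covered
    [350, 250],  # comparison areas:   covered, not covered
]

chi2, p_value, dof, _expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

# A probability inference would require a low probability of error
# (for example, P<0.05); adequacy and plausibility designs accept
# weaker evidence when risk is lower or time and resources are scarce.
```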

Implementation-specific research methods

A wide range of qualitative and quantitative research methods can be used in implementation research (table 2). The box gives a set of basic questions to guide the design or reporting of implementation research that can be used across methods. More in-depth criteria have also been proposed to assess the external validity or generalisability of findings. 15 Some research methods have been developed specifically to deal with implementation research questions or are particularly suitable to implementation research, as identified below.

Key questions to assess research designs or reports on implementation research 33

Does the research clearly aim to answer a question concerning implementation?

Does the research clearly identify the primary audiences for the research and how they would use the research?

Is there a clear description of what is being implemented (for example, details of the practice, programme, or policy)?

Does the research involve an implementation strategy? If so, is it described and examined in its fullness?

Is the research conducted in a “real world” setting? If so, is the context and sample population described in sufficient detail?

Does the research appropriately consider implementation outcome variables?

Does the research appropriately consider context and other factors that influence implementation?

Does the research appropriately consider changes over time and the level of complexity of the system, including unintended consequences?

Pragmatic trials

Pragmatic trials, or practical trials, are randomised controlled trials in which the main research question focuses on effectiveness of an intervention in a normal practice setting with the full range of study participants. 16 This may include pragmatic trials on new healthcare delivery strategies, such as integrated chronic care clinics or nurse run community clinics. This contrasts with typical randomised controlled trials that look at the efficacy of an intervention in an “ideal” or controlled setting and with highly selected patients and standardised clinical outcomes, usually of a short term nature.

Effectiveness-implementation hybrid trials

Effectiveness-implementation hybrid designs are intended to assess the effectiveness of both an intervention and an implementation strategy. 7 These studies include components of an effectiveness design (for example, randomised allocation to intervention and comparison arms) but add the testing of an implementation strategy, which may also be randomised. This might include testing the effectiveness of a package of delivery and postnatal care in under-served areas, as well as testing several strategies for providing the care. Whereas pragmatic trials try to fix the intervention under study, effectiveness-implementation hybrids also intervene in and/or observe the implementation process as it actually occurs. This can be done by assessing implementation outcome variables.

Quality improvement studies

Quality improvement studies typically involve a set of structured and cyclical processes, often called the plan-do-study-act cycle, and apply scientific methods on a continuous basis to formulate a plan, implement the plan, and analyse and interpret the results, followed by an iteration of what to do next. 17 18 The focus might be on a clinical process, such as how to reduce hospital acquired infections in the intensive care unit, or management processes such as how to reduce waiting times in the emergency room. Guidelines exist on how to design and report such research—the Standards for Quality Improvement Reporting Excellence (SQUIRE). 17

Speroff and O’Connor describe a range of plan-do-study-act research designs, noting that they have in common the assessment of responses measured repeatedly and regularly over time, either in a single case or with comparison groups. 18 Balanced scorecards integrate performance measures across a range of domains and feed into regular decision making. 19 20 Standardised guidance for using good quality health information systems and health facility surveys has been developed and often provides the sources of information for these quasi-experimental designs. 21 22 23
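Here is a minimal sketch of the repeated-measures logic behind plan-do-study-act cycles, using invented waiting-time data; a real quality improvement study would plot these on a run chart or use statistical process control rather than this toy comparison.

```python
from statistics import mean

# Invented mean emergency-room waiting times (minutes), measured
# repeatedly within each PDSA cycle.
cycles = {
    "baseline":                  [62, 58, 65, 60],
    "cycle 1 (triage change)":   [55, 57, 52, 54],
    "cycle 2 (fast-track lane)": [48, 50, 46, 49],
    "cycle 3 (staffing shift)":  [47, 45, 46, 44],
}

previous = None
for label, waits in cycles.items():
    avg = mean(waits)
    note = "" if previous is None else f" (change: {avg - previous:+.1f} min)"
    print(f"{label:27s} mean wait {avg:5.1f} min{note}")
    previous = avg  # the 'study' step informs what to 'act' on next cycle
```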

Participatory action research

Participatory action research refers to a range of research methods that emphasise participation and action (that is, implementation), using methods that involve iterative processes of reflection and action, “carried out with and by local people rather than on them.” 24 In participatory action research, a distinguishing feature is that the power and control over the process rests with the participants themselves. Although most participatory action methods involve qualitative methods, quantitative and mixed methods techniques are increasingly being used, such as for participatory rural appraisal or participatory statistics. 25 26

Mixed methods

Mixed methods research uses both qualitative and quantitative methods of data collection and analysis in the same study. Although not designed specifically for implementation research, mixed methods are particularly suitable because they provide a practical way to understand multiple perspectives, different types of causal pathways, and multiple types of outcomes—all common features of implementation research problems.

Many different schemes exist for describing different types of mixed methods research, on the basis of the emphasis of the study, the sampling schemes for the different components, the timing and sequencing of the qualitative and quantitative methods, and the level of mixing between the qualitative and quantitative methods. 27 28 Broad guidance on the design and conduct of mixed methods designs is available. 29 30 31 A scheme for good reporting of mixed methods studies involves describing the justification for using a mixed methods approach to the research question; describing the design in terms of the purpose, priority, and sequence of methods; describing each method in terms of sampling, data collection, and analysis; describing where the integration has occurred, how it has occurred, and who has participated in it; describing any limitation of one method associated with the presence of the other method; and describing any insights gained from mixing or integrating methods. 32

Implementation research aims to cover a wide set of research questions, implementation outcome variables, factors affecting implementation, and implementation strategies. This paper has identified a range of qualitative, quantitative, and mixed methods that can be used according to the specific research question, as well as several research designs that are particularly suited to implementation research. Further details of these concepts can be found in a new guide developed by the Alliance for Health Policy and Systems Research. 33

Summary points

Implementation research has its origins in many disciplines and is usefully defined as scientific inquiry into questions concerning implementation—the act of fulfilling or carrying out an intention

In health research, these intentions can be policies, programmes, or individual practices (collectively called interventions)

Implementation research seeks to understand and work in “real world” or usual practice settings, paying particular attention to the audience that will use the research, the context in which implementation occurs, and the factors that influence implementation

A wide variety of qualitative, quantitative, and mixed methods techniques can be used in implementation research, which are best selected on the basis of the research objective and specific questions related to what, why, and how interventions work

Implementation research may examine strategies that are specifically designed to improve the carrying out of health interventions or assess variables that are defined as implementation outcomes

Implementation outcomes include acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, coverage, and sustainability

Cite this as: BMJ 2013;347:f6753

Contributors: All authors contributed to the conception and design, analysis and interpretation, drafting the article, or revising it critically for important intellectual content, and all gave final approval of the version to be published. NT had the original idea for the article, which was discussed by the authors (except OA) as well as George Pariyo, Jim Sherry, and Dena Javadi at a meeting at the World Health Organization (WHO). DHP and OA did the literature reviews, and DHP wrote the original outline and the draft manuscript, tables, and boxes. OA prepared the original figure. All authors reviewed the draft article and made substantial revisions to the manuscript. DHP is the guarantor.

Funding: Funding was provided by the governments of Norway and Sweden and the UK Department for International Development (DFID) in support of the WHO Implementation Research Platform, which financed a meeting of authors and salary support for NT. DHP is supported by the Future Health Systems research programme consortium, funded by DFID for the benefit of developing countries (grant number H050474). The funders played no role in the design, conduct, or reporting of the research.

Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: support for the submitted work as described above; NT and TA are employees of the Alliance for Health Policy and Systems Research at WHO, which is supporting their salaries to work on implementation research; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

Provenance and peer review: Invited by journal; commissioned by WHO; externally peer reviewed.

  • Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and implementation research in health: translating science to practice. Oxford University Press, 2012.
  • Ciliska D, Robinson P, Armour T, Ellis P, Brouwers M, Gauld M, et al. Diffusion and dissemination of evidence-based dietary strategies for the prevention of cancer. Nutr J 2005;4(1):13.
  • Remme JHF, Adam T, Becerra-Posada F, D’Arcangues C, Devlin M, Gardner C, et al. Defining research to improve health systems. PLoS Med 2010;7:e1001000.
  • McKibbon KA, Lokker C, Mathew D. Implementation research. 2012. http://whatiskt.wikispaces.com/Implementation+Research.
  • The compact edition of the Oxford English dictionary. Oxford University Press, 1971.
  • Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 2010;38:65-76.
  • Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care 2012;50:217-26.
  • Peters DH, El-Saharty S, Siadat B, Janovsky K, Vujicic M, eds. Improving health services in developing countries: from evidence to action. World Bank, 2009.
  • Sabatier PA. Top-down and bottom-up approaches to implementation research. J Public Policy 1986;6(1):21-48.
  • Paina L, Peters DH. Understanding pathways for scaling up health services through the lens of complex adaptive systems. Health Policy Plan 2012;27:365-73.
  • Gilson L, ed. Health policy and systems research: a methodology reader. World Health Organization, 2012.
  • Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med 2012;43:337-50.
  • Improved Clinical Effectiveness through Behavioural Research Group (ICEBeRG). Designing theoretically-informed implementation interventions. Implement Sci 2006;1:4.
  • Habicht JP, Victora CG, Vaughn JP. Evaluation designs for adequacy, plausibility, and probability of public health programme performance and impact. Int J Epidemiol 1999;28:10-8.
  • Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research. Eval Health Prof 2006;29:126-53.
  • Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, et al, for the CONSORT and Pragmatic Trials in Healthcare (Practihc) Groups. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390.
  • Davidoff F, Batalden P, Stevens D, Ogrinc G, Mooney SE, for the SQUIRE Development Group. Publication guidelines for quality improvement in health care: evolution of the SQUIRE project. Qual Saf Health Care 2008;17(suppl I):i3-9.
  • Speroff T, O’Connor GT. Study designs for PDSA quality improvement research. Q Manage Health Care 2004;13(1):17-32.
  • Peters DH, Noor AA, Singh LP, Kakar FK, Hansen PM, Burnham G. A balanced scorecard for health services in Afghanistan. Bull World Health Organ 2007;85:146-51.
  • Edward A, Kumar B, Kakar F, Salehi AS, Burnham G, Peters DH. Configuring balanced scorecards for measuring health systems performance: evidence from five years’ evaluation in Afghanistan. PLoS Med 2011;7:e1001066.
  • Health Facility Assessment Technical Working Group. Profiles of health facility assessment methods. MEASURE Evaluation, USAID, 2008.
  • Hotchkiss D, Diana M, Foreit K. How can routine health information systems improve health systems functioning in low-resource settings? Assessing the evidence base. MEASURE Evaluation, USAID, 2012.
  • Lindelow M, Wagstaff A. Assessment of health facility performance: an introduction to data and measurement issues. In: Amin S, Das J, Goldstein M, eds. Are you being served? New tools for measuring service delivery. World Bank, 2008:19-66.
  • Cornwall A, Jewkes R. What is participatory research? Soc Sci Med 1995;41:1667-76.
  • Mergler D. Worker participation in occupational health research: theory and practice. Int J Health Serv 1987;17:151.
  • Chambers R. Revolutions in development inquiry. Earthscan, 2008.
  • Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. Sage Publications, 2011.
  • Tashakkori A, Teddlie C. Mixed methodology: combining qualitative and quantitative approaches. Sage Publications, 2003.
  • Leech NL, Onwuegbuzie AJ. Guidelines for conducting and reporting mixed research in the field of counseling and beyond. Journal of Counseling and Development 2010;88:61-9.
  • Creswell JW. Mixed methods procedures. In: Research design: qualitative, quantitative and mixed methods approaches. 3rd ed. Sage Publications, 2009.
  • Creswell JW, Klassen AC, Plano Clark VL, Clegg Smith K. Best practices for mixed methods research in the health sciences. National Institutes of Health, Office of Behavioral and Social Sciences Research, 2011.
  • O’Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in health services research. J Health Serv Res Policy 2008;13:92-8.
  • Peters DH, Tran N, Adam T, Ghaffar A. Implementation research in health: a practical guide. Alliance for Health Policy and Systems Research, World Health Organization, 2013.
  • Rogers EM. Diffusion of innovations. 5th ed. Free Press, 2003.
  • Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implement Sci 2007;2:40.
  • Victora CG, Schellenberg JA, Huicho L, Amaral J, El Arifeen S, Pariyo G, et al. Context matters: interpreting impact findings in child survival evaluations. Health Policy Plan 2005;20(suppl 1):i18-31.


Example sentences: research programme

That was the beginning of a major research programme examining the impact of chemicals on wildlife.
A poor performance can mean funding cuts, even an end to a faculty's research programme .
Any sharks he boats will be tagged and released as part of a research programme into the fearsome predators.
However, many trusts have freezes on staff recruitment, and existing staff cannot be spared for the research programme .
The applied research programme provides grants of up to 500,000, which you would have to match yourself.



https://www.nist.gov/artificial-intelligence


Artificial intelligence

NIST aims to cultivate trust in the design, development, use and governance of Artificial Intelligence (AI) technologies and systems in ways that enhance safety and security and improve quality of life. NIST focuses on improving measurement science, technology, standards and related tools — including evaluation and data.

With AI and Machine Learning (ML) changing how society addresses challenges and opportunities, the trustworthiness of AI technologies is critical. Trustworthy AI systems are those demonstrated to be valid and reliable; safe, secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed. The agency’s AI goals and activities are driven by its statutory mandates, Presidential Executive Orders and policies, and the needs expressed by U.S. industry, the global research community, other federal agencies, and civil society.

NIST’s AI goals include:

  • Conduct fundamental research to advance trustworthy AI technologies.
  • Apply AI research and innovation across the NIST Laboratory Programs.
  • Establish benchmarks, data and metrics to evaluate AI technologies.
  • Lead and participate in development of technical AI standards.
  • Contribute technical expertise to discussions and development of AI policies.

NIST’s AI efforts fall in several categories:

Fundamental AI Research

NIST’s AI portfolio includes fundamental research to advance the development of AI technologies — including software, hardware, architectures and the ways humans interact with AI technology and AI-generated information.

Applied AI Research

AI approaches are increasingly an essential component in new research. NIST scientists and engineers use various machine learning and AI tools to gain a deeper understanding of and insight into their research. At the same time, NIST laboratory experiences with AI are leading to a better understanding of AI’s capabilities and limitations.

Test, Evaluation, Validation, and Verification (TEVV)

With a long history of working with the community to advance tools, standards and test beds, NIST increasingly is focusing on the sociotechnical evaluation of AI.  

Voluntary Consensus-Based Standards

NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance and governance is a priority for the use and creation of trustworthy and responsible AI.

A fact sheet describes NIST's AI programs.

Featured Content

Artificial intelligence topics

  • AI Test, Evaluation, Validation and Verification (TEVV)
  • Fundamental AI
  • Hardware for AI
  • Machine learning
  • Trustworthy and Responsible AI


The Research

Projects & Programs

  • Deep Learning for MRI Reconstruction and Analysis
  • Emerging Hardware for Artificial Intelligence
  • Embodied AI and Data Generation for Manufacturing Robotics
  • Deep Generative Modeling for Communication Systems Testing and Data Sharing
  • JARVIS-ML

Additional Resources

  • NIST Launches Trustworthy and Responsible AI Resource Center (AIRC): a one-stop shop offering industry, government and academic stakeholders knowledge of AI standards, measurement methods and metrics, data sets, and other resources.
  • Minimizing Harms and Maximizing the Potential of Generative AI
  • NIST Reports First Results From Age Estimation Software Evaluation
  • NIST Launches ARIA, a New Program to Advance Sociotechnical Testing and Evaluation for AI
  • U.S. Secretary of Commerce Gina Raimondo Releases Strategic Vision on AI Safety, Announces Plan for Global Cooperation Among AI Safety Institutes
  • Bias in AI
  • 2024 Artificial Intelligence for Materials Science (AIMS) Workshop

What is decision making?


Decisions, decisions. When was the last time you struggled with a choice? Maybe it was this morning, when you decided to hit the snooze button—again. Perhaps it was at a restaurant, with a miles-long menu and the server standing over you. Or maybe it was when you left your closet in a shambles after trying on seven different outfits before a big presentation. Often, making a decision—even a seemingly simple one—can be difficult. And people will go to great lengths—and pay serious sums of money—to avoid having to make a choice. The expensive tasting menu at the restaurant, for example. Or limiting your closet choices to black turtlenecks, à la Steve Jobs.


If you’ve ever wrestled with a decision at work, you’re definitely not alone. According to McKinsey research, executives spend a significant portion of their time—nearly 40 percent, on average—making decisions. Worse, they believe most of that time is poorly used. People struggle with decisions so much that we actually get exhausted from having to decide too much, a phenomenon called decision fatigue.

But decision fatigue isn’t the only cost of ineffective decision making. According to a McKinsey survey of more than 1,200 global business leaders, inefficient decision making costs a typical Fortune 500 company 530,000 days of managers’ time each year, equivalent to about $250 million in annual wages. That’s a lot of turtlenecks.

How can business leaders ease the burden of decision making and put this time and money to better use? Read on to learn the ins and outs of smart decision making—and how to put it to work.


How can organizations untangle ineffective decision-making processes?

McKinsey research has shown that agile is the ultimate solution for many organizations looking to streamline their decision making. Agile organizations are more likely to put decision making in the right hands, are faster at reacting to (or anticipating) shifts in the business environment, and often attract top talent who prefer working at companies with greater empowerment and fewer layers of management.

For organizations looking to become more agile, it’s possible to quickly boost decision-making efficiency by categorizing the type of decision to be made and adjusting the approach accordingly. In the next section, we review three types of decision making and how to optimize the process for each.

What are three keys to faster, better decisions?

Business leaders today have access to more sophisticated data than ever before. But it hasn’t necessarily made decision making any easier. For one thing, organizational dynamics—such as unclear roles, overreliance on consensus, and death by committee—can get in the way of straightforward decision making. And more data often means more decisions to be taken, which can become too much for one person, team, or department. This can make it more difficult for leaders to cleanly delegate, which in turn can lead to a decline in productivity.

Leaders are growing increasingly frustrated with broken decision-making processes, slow deliberations, and uneven decision-making outcomes. Fewer than half of the 1,200 respondents to a McKinsey survey report that decisions are timely, and 61 percent say that at least half the time they spend making decisions is ineffective.

What’s the solution? According to McKinsey research, effective solutions center around categorizing decision types and organizing different processes to support each type. Further, each decision category should be assigned its own practice—stimulating debate, for example, or empowering employees—to yield improvements in effectiveness.

Here are the three decision categories that matter most to senior leaders, and the standout practice that makes the biggest difference for each type of decision; a small illustrative sketch follows the list.

  • Big-bet decisions are infrequent but high risk, such as acquisitions. These decisions carry the potential to shape the future of the company, and as a result are generally made by top leaders and the board. Spurring productive debate by assigning someone to argue the case for and against a potential decision can improve big-bet decision making.
  • Cross-cutting decisions, such as pricing, can be frequent and high risk. These are usually made by business unit heads, in cross-functional forums as part of a collaborative process. These types of decisions can be improved by doubling down on process refinement. The ideal process should be one that helps clarify objectives, measures, and targets.
  • Delegated decisions are frequent but low risk and are handled by an individual or working team with some input from others. Delegated decision making can be improved by ensuring that the responsibility for the decision is firmly in the hands of those closest to the work. This approach also enhances engagement and accountability.
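To make the categorization concrete, here is a small illustrative sketch in Python. The two-attribute mapping and the advice strings are an assumed reading of the three categories above, not a published McKinsey rule.

```python
def categorize_decision(frequency: str, risk: str) -> str:
    """Map a decision's frequency and risk to one of the three categories.

    frequency is "infrequent" or "frequent"; risk is "low" or "high".
    The mapping is an illustrative reading of the categories above.
    """
    if frequency == "infrequent" and risk == "high":
        return "big bet: decide at the top; assign someone to argue for and against"
    if frequency == "frequent" and risk == "high":
        return "cross-cutting: refine the collaborative, cross-functional process"
    if frequency == "frequent" and risk == "low":
        return "delegated: put the decision in the hands of those closest to the work"
    return "unclassified: clarify the decision's frequency and risk first"

print(categorize_decision("infrequent", "high"))  # e.g., an acquisition
print(categorize_decision("frequent", "low"))     # e.g., a routine team call
```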

In addition, business leaders can take the following four actions to help sustain rapid decision making:

  • Focus on the game-changing decisions, ones that will help an organization create value and serve its purpose.
  • Convene only necessary meetings, and eliminate lengthy reports. Turn unnecessary meetings into emails, and watch productivity bloom. For necessary meetings, provide short, well-prepared prereads to aid in decision making.
  • Clarify the roles of decision makers and other voices. Who has a vote, and who has a voice?
  • Push decision-making authority to the front line—and tolerate mistakes.


How can business leaders effectively delegate decision making?

Business is more complex and dynamic than ever, meaning business leaders are faced with needing to make more decisions in less time. Decision making takes up an inordinate amount of management’s time—up to 70 percent for some executives—which leads to inefficiencies and opportunity costs.

As discussed above, organizations should treat different types of decisions differently. Decisions should be classified according to their frequency, risk, and importance. Delegated decisions are the most mysterious for many organizations: they are the most frequent, and yet the least understood. Only about a quarter of survey respondents report that their organizations make high-quality and speedy delegated decisions. And yet delegated decisions, because they happen so often, can have a big impact on organizational culture.

The key to better delegated decisions is to empower employees by giving them the authority and confidence to act. That means not simply telling employees which decisions they can or can’t make; it means giving employees the tools they need to make high-quality decisions and the right level of guidance as they do so.

Here’s how to support delegation and employee empowerment:

  • Ensure that your organization has a well-defined, universally understood strategy. When the strategic intent of an organization is clear, empowerment is much easier because it allows teams to pull in the same direction.
  • Clearly define roles and responsibilities. At the foundation of all empowerment efforts is a clear understanding of who is responsible for what, including who has input and who doesn’t.
  • Invest in capability building (and coaching) up front. To help managers spend meaningful coaching time, organizations should also invest in managers’ leadership skills.
  • Build an empowerment-oriented culture. Leaders should role model mindsets that promote empowerment, and managers should build the coaching skills they want to see. Managers and employees, in particular, will need to get comfortable with failure as a necessary step to success.
  • Decide when to get involved. Managers should spend effort up front to decide what is worth their focused attention. They should know when it’s appropriate to provide close guidance and when not to.

How can you guard against bias in decision making?

Cognitive bias is real. We all fall prey to it, no matter how hard we try to guard against it. And cognitive and organizational bias undermines good decision making, whether you’re choosing what to have for lunch or whether to put in a bid to acquire another company.

Here are some of the most common cognitive biases and strategies for how to avoid them:

  • Confirmation bias. Often, when we already believe something, our minds seek out information to support that belief—whether or not it is actually true. Confirmation bias involves overweighting evidence that supports our belief, underweighting evidence against our belief, or even failing to search impartially for evidence in the first place. Confirmation bias is one of the most common traps organizational decision makers fall into. One famous—and painful—example of confirmation bias is when Blockbuster passed up the opportunity to buy a fledgling Netflix for $50 million in 2000. (Actually, that’s putting it politely. Netflix executives remember being “laughed out” of Blockbuster’s offices.) Fresh off the dot-com bubble burst of 2000, Blockbuster executives likely concluded that Netflix had approached them out of desperation—not that Netflix actually had a baby unicorn on its hands.
  • Herd mentality. First observed by Charles Mackay in his 1841 study of crowd psychology, herd mentality happens when information that’s available to the group is determined to be more useful than privately held knowledge. Individuals buy into this bias because there’s safety in the herd. But ignoring competing viewpoints might ultimately be costly. To counter this, try a teardown exercise, wherein two teams use scenarios, advanced analytics, and role-playing to identify how a herd might react to a decision, and to ensure they can refute public perceptions.
  • Sunk-cost fallacy. Executives frequently hold onto underperforming business units or projects because of emotional or legacy attachment. Equally, business leaders hate shutting projects down. This, researchers say, is due to the ingrained belief that if everyone works hard enough, anything can be turned into gold. McKinsey research indicates two techniques for understanding when to hold on and when to let go. First, change the burden of proof from why an asset should be cut to why it should be retained. Next, categorize business investments according to whether they should be grown, maintained, or disposed of—and follow clearly differentiated investment rules for each group.
  • Ignoring unpleasant information. Researchers call this the “ostrich effect”—when people figuratively bury their heads in the sand, ignoring information that will make their lives more difficult. One study, for example, found that investors were more likely to check the value of their portfolios when the markets overall were rising, and less likely to do so when the markets were flat or falling. One way to help get around this is to engage in a readout process, where individuals or teams summarize discussions as they happen. This increases the likelihood that everyone leaves a meeting with the same understanding of what was said.
  • Halo effect. Important personal and professional choices are frequently affected by people’s tendency to make specific judgments based on general impressions. Humans are tempted to use simple mental frames to understand complicated ideas, which means we frequently draw conclusions faster than we should. The halo effect is particularly common in hiring decisions. Structured interviews, in which every candidate is measured against the same indicators, help mitigate this tendency because intuition is less likely to play a role (see the sketch after this list).
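
As a concrete illustration of the structured-interview idea above, the sketch below scores every candidate against the same set of indicators, averaging each indicator across interviewers before averaging overall. The indicator names, the 1-to-5 scale, and the `structured_score` helper are all invented for this example; they are not a McKinsey rubric.

```python
# Illustrative sketch only: structured-interview scoring as a halo-effect
# guard. The indicators, the 1-5 scale, and structured_score are invented
# for this example.
from statistics import mean

INDICATORS = ["problem solving", "communication", "domain knowledge", "collaboration"]


def structured_score(ratings: dict[str, list[int]]) -> float:
    """Average each indicator across interviewers, then average indicators.

    Scoring every candidate against the same indicators keeps one strong
    general impression from silently lifting every rating at once.
    """
    per_indicator = [mean(ratings[indicator]) for indicator in INDICATORS]
    return round(mean(per_indicator), 2)


candidate = {  # one list of 1-5 ratings per interviewer, per indicator
    "problem solving": [4, 5, 4],
    "communication": [3, 3, 4],
    "domain knowledge": [2, 3, 2],
    "collaboration": [4, 4, 5],
}
print(structured_score(candidate))  # -> 3.58
```

The design point is simply that aggregating indicator by indicator, rather than asking for one overall verdict, forces the general impression to be decomposed into evidence.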

For more common biases and how to beat them, check out McKinsey’s Bias Busters Collection.

Learn more about Strategy & Corporate Finance consulting at McKinsey—and check out job opportunities related to decision making if you’re interested in working at McKinsey.

Articles referenced include:

  • “Bias busters: When the crowd isn’t necessarily wise,” McKinsey Quarterly, May 23, 2022, Eileen Kelly Rinaudo, Tim Koller, and Derek Schatz
  • “Boards and decision making,” April 8, 2021, Aaron De Smet, Frithjof Lund, Suzanne Nimocks, and Leigh Weiss
  • “To unlock better decision making, plan better meetings,” November 9, 2020, Aaron De Smet, Simon London, and Leigh Weiss
  • “Reimagine decision making to improve speed and quality,” September 14, 2020, Julie Hughes, J. R. Maxwell, and Leigh Weiss
  • “For smarter decisions, empower your employees,” September 9, 2020, Aaron De Smet, Caitlin Hewes, and Leigh Weiss
  • “Bias busters: Lifting your head from the sand,” McKinsey Quarterly, August 18, 2020, Eileen Kelly Rinaudo
  • “Decision making in uncertain times,” March 24, 2020, Andrea Alexander, Aaron De Smet, and Leigh Weiss
  • “Bias busters: Avoiding snap judgments,” McKinsey Quarterly, November 6, 2019, Tim Koller, Dan Lovallo, and Phil Rosenzweig
  • “Three keys to faster, better decisions,” McKinsey Quarterly, May 1, 2019, Aaron De Smet, Gregor Jost, and Leigh Weiss
  • “Decision making in the age of urgency,” April 30, 2019, Iskandar Aminov, Aaron De Smet, Gregor Jost, and David Mendelsohn
  • “Bias busters: Pruning projects proactively,” McKinsey Quarterly, February 6, 2019, Tim Koller, Dan Lovallo, and Zane Williams
  • “Decision making in your organization: Cutting through the clutter,” McKinsey Quarterly, January 16, 2018, Aaron De Smet, Simon London, and Leigh Weiss
  • “Untangling your organization’s decision making,” McKinsey Quarterly, June 21, 2017, Aaron De Smet, Gerald Lackey, and Leigh Weiss
  • “Are you ready to decide?,” McKinsey Quarterly, April 1, 2015, Philip Meissner, Olivier Sibony, and Torsten Wulf

