Research Design – Types, Methods and Examples

Research Design

Definition:

Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.

Types of Research Design

Types of Research Design are as follows:

Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.

Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.
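As a minimal sketch of the random-assignment step described above (the participant IDs, pool size, and seed are invented for illustration), shuffling the participant list and splitting it in half produces two comparable groups:

```python
import random

# Hypothetical participant pool; IDs and group sizes are illustrative.
participants = [f"P{i:03d}" for i in range(1, 41)]  # 40 participants

random.seed(42)              # fix the seed so the assignment is reproducible
random.shuffle(participants)  # randomize the order of the pool

# Split the shuffled pool evenly into treatment and control groups.
midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group = participants[midpoint:]

print(len(treatment_group), len(control_group))  # 20 20
```

Because assignment is random rather than based on any participant characteristic, pre-existing differences are expected to balance out across the two groups.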

Quasi-experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.

Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.

Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.

Structure of Research Design

The format of a research design typically includes the following sections:

  • Introduction: This section provides an overview of the research problem, the research questions, and the importance of the study. It also includes a brief literature review that summarizes previous research on the topic and identifies gaps in the existing knowledge.
  • Research Questions or Hypotheses: This section identifies the specific research questions or hypotheses that the study will address. These questions should be clear, specific, and testable.
  • Research Methods: This section describes the methods that will be used to collect and analyze data. It includes details about the study design, the sampling strategy, the data collection instruments, and the data analysis techniques.
  • Data Collection: This section describes how the data will be collected, including the sample size, data collection procedures, and any ethical considerations.
  • Data Analysis: This section describes how the data will be analyzed, including the statistical techniques that will be used to test the research questions or hypotheses.
  • Results: This section presents the findings of the study, including descriptive statistics and statistical tests.
  • Discussion and Conclusion: This section summarizes the key findings of the study, interprets the results, and discusses the implications of the findings. It also includes recommendations for future research.
  • References: This section lists the sources cited in the research design.

Example of Research Design

An example of a research design could be:

Research question: Does the use of social media affect the academic performance of high school students?

Research design:

  • Research approach: The research approach will be quantitative as it involves collecting numerical data to test the hypothesis.
  • Research design: The research design will be quasi-experimental, using a pretest-posttest control group design.
  • Sample: The sample will be 200 high school students from two schools, with 100 students in the experimental group and 100 students in the control group.
  • Data collection: The data will be collected through surveys administered to the students at the beginning and end of the academic year. The surveys will include questions about their social media usage and academic performance.
  • Data analysis: The data collected will be analyzed using statistical software. The mean scores of the experimental and control groups will be compared to determine whether there is a significant difference in academic performance between the two groups.
  • Limitations: The limitations of the study will be acknowledged, including the fact that social media usage can vary greatly among individuals, and the study only focuses on two schools, which may not be representative of the entire population.
  • Ethical considerations: Ethical considerations will be taken into account, such as obtaining informed consent from the participants and ensuring their anonymity and confidentiality.
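The group comparison in the data analysis step above can be sketched with a Welch's t statistic. This is a hedged illustration: the scores below are invented, and a real analysis would also compute a p-value and check the test's assumptions.

```python
import statistics

# Invented end-of-year scores for the two groups (not real study data).
experimental = [72, 68, 75, 80, 66, 74, 71, 69, 77, 73]
control      = [78, 82, 75, 85, 79, 81, 77, 84, 80, 76]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    standard_error = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_a - mean_b) / standard_error

t = welch_t(experimental, control)
print(round(t, 2))  # a large negative value: the control mean is higher here
```

A t statistic far from zero suggests the difference between group means is unlikely to be due to chance alone, though the formal decision depends on the corresponding p-value.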

How to Write a Research Design

Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:

  • Define the research question or hypothesis: Before beginning your research design, you should clearly define your research question or hypothesis. This will guide your research design and help you select appropriate methods.
  • Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.
  • Develop a sampling plan: If your research involves collecting data from a sample, you will need to develop a sampling plan. This should outline how you will select participants and how many participants you will include.
  • Define variables: Clearly define the variables you will be measuring or manipulating in your study. This will help ensure that your results are meaningful and relevant to your research question.
  • Choose data collection methods: Decide on the data collection methods you will use to gather information. This may include surveys, interviews, observations, experiments, or secondary data sources.
  • Create a data analysis plan: Develop a plan for analyzing your data, including the statistical or qualitative techniques you will use.
  • Consider ethical concerns: Finally, be sure to consider any ethical concerns related to your research, such as participant confidentiality or potential harm.

When to Write a Research Design

Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.

Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.

Purpose of Research Design

The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.

Some of the key purposes of research design include:

  • Providing a clear and concise plan of action for the research study.
  • Ensuring that the research is conducted ethically and with rigor.
  • Maximizing the accuracy and reliability of the research findings.
  • Minimizing the possibility of errors, biases, or confounding variables.
  • Ensuring that the research is feasible, practical, and cost-effective.
  • Determining the appropriate research methodology to answer the research question(s).
  • Identifying the sample size, sampling method, and data collection techniques.
  • Determining the data analysis method and statistical tests to be used.
  • Facilitating the replication of the study by other researchers.
  • Enhancing the validity and generalizability of the research findings.

Applications of Research Design

There are numerous applications of research design in various fields, some of which are:

  • Social sciences: In fields such as psychology, sociology, and anthropology, research design is used to investigate human behavior and social phenomena. Researchers use various research designs, such as experimental, quasi-experimental, and correlational designs, to study different aspects of social behavior.
  • Education: Research design is essential in the field of education to investigate the effectiveness of different teaching methods and learning strategies. Researchers use various designs such as experimental, quasi-experimental, and case study designs to understand how students learn and how to improve teaching practices.
  • Health sciences: In the health sciences, research design is used to investigate the causes, prevention, and treatment of diseases. Researchers use various designs, such as randomized controlled trials, cohort studies, and case-control studies, to study different aspects of health and healthcare.
  • Business: Research design is used in the field of business to investigate consumer behavior, marketing strategies, and the impact of different business practices. Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.
  • Engineering: In the field of engineering, research design is used to investigate the development and implementation of new technologies. Researchers use various designs, such as experimental research and case studies, to study the effectiveness of new technologies and to identify areas for improvement.

Advantages of Research Design

Here are some advantages of research design:

  • Systematic and organized approach: A well-designed research plan ensures that the research is conducted in a systematic and organized manner, which makes it easier to manage and analyze the data.
  • Clear objectives: The research design helps to clarify the objectives of the study, which makes it easier to identify the variables that need to be measured, and the methods that need to be used to collect and analyze data.
  • Minimizes bias: A well-designed research plan minimizes the chances of bias, by ensuring that the data is collected and analyzed objectively, and that the results are not influenced by the researcher’s personal biases or preferences.
  • Efficient use of resources: A well-designed research plan helps to ensure that the resources (time, money, and personnel) are used efficiently and effectively, by focusing on the most important variables and methods.
  • Replicability: A well-designed research plan makes it easier for other researchers to replicate the study, which enhances the credibility and reliability of the findings.
  • Validity: A well-designed research plan helps to ensure that the findings are valid, by ensuring that the methods used to collect and analyze data are appropriate for the research question.
  • Generalizability: A well-designed research plan helps to ensure that the findings can be generalized to other populations, settings, or situations, which increases the external validity of the study.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer

Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions


Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
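As a minimal illustration of probability sampling (the population size, the use of ID numbers as a sampling frame, and the seed are arbitrary assumptions), a simple random sample can be drawn with Python's standard library:

```python
import random

# Hypothetical sampling frame: ID numbers for every member of the population.
population = list(range(1, 1201))  # 1,200 individuals (assumed figure)

random.seed(7)                              # make the draw reproducible
sample = random.sample(population, k=100)   # simple random sample of 100

print(len(sample), len(set(sample)))  # 100 100 (sampling without replacement)
```

Because every member of the frame has an equal chance of selection, results from such a sample can be generalised to the population with quantifiable confidence.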

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity have already been established.
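As a toy sketch of operationalisation (the item wording, scale mapping, and responses are hypothetical), Likert-style answers can be converted into a single numeric score:

```python
# Map each verbal Likert response to a number on a 1-5 scale.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def satisfaction_score(responses):
    """Sum the numeric values of one participant's Likert responses."""
    return sum(LIKERT[r] for r in responses)

# Invented answers from one participant across four questionnaire items.
answers = ["agree", "strongly agree", "neutral", "agree"]
print(satisfaction_score(answers))  # 16
```

The fuzzy concept ("satisfaction") becomes a measurable indicator: a score that can be compared across participants and analysed statistically.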

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
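The three kinds of summary listed above can be computed directly with Python's standard library; the test scores here are invented for the sketch:

```python
from collections import Counter
import statistics

scores = [85, 90, 78, 92, 88, 75, 95, 80, 85, 87]  # invented test scores

frequency = Counter(scores)         # distribution: how often each score occurs
mean = statistics.fmean(scores)     # central tendency: the average score
median = statistics.median(scores)  # central tendency: the middle score
spread = statistics.stdev(scores)   # variability: sample standard deviation

print(mean, median)  # 85.5 86.0
```

Together these give a compact picture of the sample before any inferential tests are run.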

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article

McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 9 April 2024, from https://www.scribbr.co.uk/research-methods/research-design/

Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023


Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “research design”. Here, we’ll guide you through the basics using practical examples, so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?

  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

What is research design?

Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods, which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology. Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.

Free Webinar: Research Methodology 101

Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.
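To make the descriptive step concrete, here’s a minimal Python sketch that summarises Likert-scale survey responses with basic descriptive statistics. The ratings and the 1–5 agreement scale are hypothetical, purely for illustration.

```python
# A minimal sketch: summarising hypothetical Likert-scale survey responses.
from collections import Counter
from statistics import mean, stdev

# 1 = strongly disagree ... 5 = strongly agree, one rating per respondent
responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]

summary = {
    "n": len(responses),
    "mean": round(mean(responses), 2),
    "sd": round(stdev(responses), 2),
    "distribution": dict(sorted(Counter(responses).items())),
}
print(summary)
```

The output simply describes the sample – how many respondents, the average level of agreement, and how the ratings are spread – without saying anything about causes or relationships.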

The key defining attribute of this type of research design is that it purely describes the situation. In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics. By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them. In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).
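As a rough illustration of that statistical step, the sketch below computes a Pearson correlation coefficient by hand; the exercise and heart-rate figures are entirely invented.

```python
# An illustrative sketch: computing a Pearson correlation coefficient to
# gauge the strength and direction of a relationship. Data are made up.
from math import sqrt

exercise_days = [0, 1, 2, 3, 4, 5, 6]        # exercise sessions per week
resting_hr = [80, 78, 74, 72, 70, 66, 64]    # resting heart rate (bpm)

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

r = pearson_r(exercise_days, resting_hr)
print(round(r, 3))  # close to -1: a strong negative relationship
```

A coefficient near +1 or -1 indicates a strong relationship, while a value near 0 indicates little or none – but, as discussed below, even a strong coefficient says nothing about causation.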

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality. In other words, correlation does not equal causation. To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) and measure its effect on another (the dependent variable), while controlling for other variables that could influence the outcome. Doing so allows you to draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
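The comparison step of that fertiliser example can be sketched as follows. The growth figures (mm over four weeks) are entirely made up, and a real study would also run a significance test (e.g. an ANOVA) rather than just comparing means.

```python
# A toy sketch of the fertiliser experiment: compare mean growth per group.
from statistics import mean

growth_mm = {
    "no_fertiliser": [41, 38, 40, 43],  # control group
    "fertiliser_a": [62, 59, 65, 61],
    "fertiliser_b": [50, 53, 49, 52],
}

means = {group: mean(values) for group, values in growth_mm.items()}
most_effective = max(means, key=means.get)
print(means, "->", most_effective)
```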

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment. This means that the researcher needs to assign participants to different groups or conditions in such a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling). Doing so helps reduce the potential for bias and confounding variables. This need for random assignment can lead to ethics-related issues. For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
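Random assignment itself can be as simple as shuffling a participant list, as in this minimal sketch. The participant IDs are hypothetical, and the fixed seed is only there to make the sketch reproducible.

```python
# A minimal sketch of random assignment: shuffle participants so each has
# an equal chance of landing in either condition.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
random.seed(42)  # fixed seed purely for reproducibility of the sketch
random.shuffle(participants)

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group = participants[midpoint:]
print(len(treatment_group), len(control_group))  # 10 10
```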

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations, but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed.

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation, especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive, given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities. All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context.

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “But how do I decide which research design to use?”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to collect – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, a correlational design would likely be the better fit, while establishing causality would call for an experimental design.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services.


Psst… there’s more (for free)

This post is part of our dissertation mini-course, which covers everything you need to get started with your dissertation, thesis or research project. 


Mule Design

A Design Research Framework


Design Research Process Model (PDF) |

Alternative Style for Color Perception/B&W Printing (PDF)

Recent discussions have been swirling around the phrase “democratization of research” concerning who should participate in what kind of research in design (product/service/technology/non-profit etc.) organizations.

There has been a lot of yelling about gatekeeping and handwringing about the potential for low-quality research. The discussion gets shouty when you don’t stop to define your terms and clarify what type of research, to what standard, and for what purpose. Without that, everything devolves into a territory battle.

Much of our work involves helping organizations create or develop their design/research practices, so I thought I’d try to help out. I created a cohesive visual representation of the general approach I recommend to clients and follow myself. You can see it above and download the PDF for better resolution.

It looks like a lot when it is broken out into discrete steps, but if your team is already working well together these can often be very short conversations—or even a chat in Slack—and a really small amount of additional documentation at the end. Skipping steps to rush ahead towards false economies or avoid hard conversations is how the wrong methods are applied and value gets lost. That’s the quality issue, there. Everything surrounding the specific methods. Why do research at all if you aren’t going to do it right? Why practice design if you aren’t going to be informed and conscientious about it?

You can just grab the diagram and enjoy, or read further for additional guidance.

(And if your team needs help with this stuff, we have a workshop ! Or we can work on your deep tissue organizational issues. Just ask.)

Design research?

When I say design research I mean asking and answering questions in a systematic way in order to make more intentional and informed decisions about planning and creating new things and ways of doing things .

Even when I’m talking about research that informs the design of interactive digital systems, I don’t say “UX research” or “user research.” Quite often, the decisions at hand have much wider implications and the information required crosses a variety of topics, including the organization and the design process itself.

Focusing on “the user” exclusively is an easy way to forget about the wider context around that user. And that’s how you get ants, and a lot of otherwise well-meaning designers participating in unsustainable and unethical businesses. Word choice matters. Design is the application of intention, and you are only as intentional as you are informed. 

The word “research” contributes a lot of confusion because there are different types of research and different professional standards. Different standards turn into dueling standards in many organizations, which themselves have been poorly designed. There’s that lack of clarity and intention again. Misapplying an academic standard to professional research activities can block learning. Using research tools and practices without sufficient training and critical thinking can be downright dangerous.

I see design research as a design activity, more than a research activity. What matters most is learning what you need to know in order to make the best possible design decisions within existing constraints. It doesn’t necessarily matter if you uncover anything new, or whether you document what you learn in a specific format. It does matter that you are intentional, conscientious, and ethical at every step.

Who does what in the design research process may vary among organizations, depending on industry, size, structure, capabilities, etc. It may make sense for different disciplines to handle different topics or categories of inquiry. Some organizations will outsource various aspects of the process, or specific types of research question. In all cases, it’s critical to define roles and communication protocols to increase collaboration and information flow and prevent silos and political territories from forming. Aim to cultivate generous expertise, instead of gatekeeping for clout.

How to Use This Diagram

The most important use of this diagram is as a touchstone and a checklist in conversation with anyone who participates in design decisions throughout your organization. Having a basic process to refer to enables more intentional decisions about practice and process, even when under pressure to deliver.

My intention is for this to be sufficiently generic to harmonize with any operating model. Your practice might look slightly different, but always ask yourself: are we doing what is most convenient, or what is most effective? The thing about critical thinking is that it doesn’t necessarily take any more time, especially considering how often fuzzy thinking creates more work down the line, but it feels arduous. Everyone’s lazy brain wants to put everything into procedural memory—autopilot. You can spend 5 minutes on each step, but don’t skip the discussion.

Are they phases? Sure. That’s a fun design word for the sequential subsections of the process. The sequence is critical.

This is the conversation you need to have before starting on any research project, no matter how small. This is the conversation you should be having on the regular, regardless. Both collaboration and effective design research require clear goals and roles. You need to understand what level of decision you’re talking about to know how much time and effort you should invest in answering your questions. The time-and-budget-based objections to research are fake. Just back your whole plan out from when you need to make a decision. You can always learn something useful if you’re clear on what you need to know and by when. If you need an entirely new product strategy by tomorrow, ask how you got in this spot. Probably by someone avoiding these conversations.

It is highly productive to spend an hour with your team separating your knowns from your unknowns, and establishing what your knowledge is based on. Maybe that is all you need to do.

Once you have identified the information gaps that either block or add risk to your work, move ahead to forming research questions. Just because you identify a question doesn’t mean you need to answer it immediately. Brainstorming questions is another really productive activity not enough teams do. It is far more useful and collaborative than brainstorming ideas in most cases. Keep a running list. This will prime everyone on your team to keep their antennae up for ambient insights. Learning can come from anywhere if you’re screening for it and thinking critically.

Select your highest priority question/s to carry forward and turn into actual research projects.

This is when you get really clear on your question, before choosing a method. How you phrase your research question (again, not an interview question) determines what is possible for you to learn. Do you need ideas, or descriptions about what happens in the real world? Are you at the point of evaluating a solution? (Don’t say “validate” or I’ll haunt you.) Are you trying to establish a cause and effect relationship? Do you already have a hypothesis to use as the basis for an experiment? (A hypothesis is based on data. If you have a wish, you do not have a hypothesis.)

Be very clear on whether you need qualitative or quantitative data, or need to do a mixed methods study to establish and validate a hypothesis, for example.

And talk about how much of what kind of data you need to be confident in your decision. This is the time to discuss the concept of qualitative saturation with your business stakeholders, not when you’ve already done the study and they’re saying “pssh, 8 users”. If you need quantitative data, make sure you’re working in the realm of the actually measurable and potentially statistically valid and also relevant to your decision.

Now you get to choose a method. Research methods and activities are simply ways to answer your question. There’s no one right method. Pick the one that will give you the kind of data you need to inform your choices. And consider the colorful lines as serving suggestions. Maybe you get some generative-style ideas from running a test on your competitor’s product or service, or on your own. The key is—say it with me—being intentional. Using the tool or the method is not learning. Learning is learning.

When you consider timing, scope, and scale, make sure to account for planning, conducting, and all of the analysis, synthesis, and follow-up.

Maybe you only have time to read articles and reports that are publicly available. There is a LOT of information out there a clear research question can unlock.

Analysis and Synthesis

The type of analysis you do depends on your question and method, so in some sense, this is the easy part. The learning-to-arguing ratio should be much higher here if you’ve followed all the preceding steps. This is also the potentially time-consuming part. Sometimes you will need to do a few rounds of analysis.

It’s too big a topic to go into detail here, so I’ll just say: you should have planned how to approach analysis and synthesis when you chose your method, including the roles of everyone involved.

Integration

So many organizations invest so much in doing research activities, and then POOF, it’s like the work never happened. Look ahead to integrating insights at the very beginning. How and when you go about documenting and communicating will depend upon how the organization as a whole works. One more process to design!

A “research repository” is not a substitute for the necessary interaction and communication among humans.

And finally, gather up all of the other questions that arose and add them to your ever-increasing collection of known unknowns. Questions are awesome.


QuestionPro

Research Design: What it is, Elements & Types


Can you imagine doing research without a plan? Probably not. When we discuss a strategy to collect, study, and evaluate data, we talk about research design. This design addresses problems and creates a consistent and logical model for data analysis. Let’s learn more about it.

What is Research Design?

Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success.

A research design specifies the type of research (experimental, survey research, correlational, semi-experimental, review) and its sub-type (e.g., experimental design, descriptive case study).

A research design typically covers three main aspects of a study:

  • Data collection
  • Measurement
  • Data analysis

The research problem an organization faces will determine the design, not vice-versa. The design phase of a study determines which tools to use and how they are used.

The Process of Research Design

The research design process is a systematic and structured approach to conducting research. The process is essential to ensure that the study is valid, reliable, and produces meaningful results.

  • Consider your aims and approaches: Determine the research questions and objectives, and identify the theoretical framework and methodology for the study.
  • Choose a type of Research Design: Select the appropriate research design, such as experimental, correlational, survey, case study, or ethnographic, based on the research questions and objectives.
  • Identify your population and sampling method: Determine the target population and sample size, and choose the sampling method, such as random sampling, stratified random sampling, or convenience sampling.
  • Choose your data collection methods: Decide on the methods, such as surveys, interviews, observations, or experiments, and select the appropriate instruments or tools for collecting data.
  • Plan your data collection procedures: Develop a plan for data collection, including the timeframe, location, and personnel involved, and ensure ethical considerations.
  • Decide on your data analysis strategies: Select the appropriate data analysis techniques, such as statistical analysis, content analysis, or discourse analysis, and plan how to interpret the results.

The process of research design is a critical step in conducting research. By following the steps of research design, researchers can ensure that their study is well-planned, ethical, and rigorous.
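To make the sampling step above more tangible, here is a hedged Python sketch contrasting two of the methods named in the list: simple random sampling versus stratified random sampling with proportional allocation. The population, strata, and sample size are invented for illustration.

```python
# A sketch contrasting simple random sampling with stratified random sampling.
import random

random.seed(7)  # fixed seed purely so the sketch is reproducible
population = [
    {"id": i, "stratum": "urban" if i % 3 else "rural"} for i in range(90)
]  # 60 urban, 30 rural (hypothetical)

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, 9)

# Stratified random sampling: sample within each stratum, proportionally.
strata = {}
for person in population:
    strata.setdefault(person["stratum"], []).append(person)

stratified_sample = []
for members in strata.values():
    k = round(len(members) / len(population) * 9)  # proportional allocation
    stratified_sample.extend(random.sample(members, k))

print(len(simple_sample), len(stratified_sample))  # 9 9
```

The stratified version guarantees that each subgroup is represented in proportion to its share of the population, whereas the simple random sample may over- or under-represent a stratum by chance.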

Research Design Elements

Impactful research usually minimizes bias in the data and increases trust in the accuracy of the data collected. A design that produces the smallest margin of error in experimental research is generally considered the desired outcome. The essential elements are:

  • Accurate purpose statement
  • Techniques to be implemented for collecting and analyzing data
  • Methods applied for analyzing the collected details
  • Type of research methodology
  • Probable objections to the research
  • Settings for the research study
  • Measurement of analysis

Characteristics of Research Design

A proper design sets your study up for success: successful research studies provide insights that are accurate and unbiased. Your study needs to meet all of the main characteristics of a good design. There are four key characteristics:

  • Neutrality: When you set up your study, you may have to make assumptions about the data you expect to collect. The projected results should be neutral and free from research bias. Gather opinions on the final evaluated scores and conclusions from multiple individuals, and weigh the full range of views rather than only those that agree with your results.
  • Reliability: If the research is conducted repeatedly, the researcher should obtain similar results every time. You’ll only reach dependable results if your design is reliable, so your plan should indicate how to form research questions that ensure a consistent standard of results.
  • Validity: There are multiple measuring tools available, but the only correct ones are those that help a researcher gauge results according to the objective of the research. A questionnaire developed from such a design will then be valid.
  • Generalization: The outcome of your design should apply to a population and not just a restricted sample. A generalized design implies that your survey can be conducted on any part of a population with similar accuracy.

These factors affect how respondents answer the research questions, so a good design should balance all of the above characteristics.

Research Design Types

A researcher must clearly understand the various types to select which model to implement for a study. Like the research itself, the design of your analysis can be broadly classified into quantitative and qualitative.

Qualitative research

Qualitative research determines relationships between collected data and observations without relying on mathematical calculations. Researchers use qualitative observation methods to conclude “why” a particular theory exists and “what” respondents have to say about it.

Quantitative research

Quantitative research is for cases where statistical conclusions and actionable insights are essential. Numbers provide a better perspective for making critical business decisions. Statistical methods can prove or disprove theories about naturally occurring phenomena, and insights drawn from complex numerical data and analysis prove highly effective when making decisions about a business’s future.

Qualitative Research vs Quantitative Research

In summary, qualitative research is more exploratory and focuses on understanding the subjective experiences of individuals, while quantitative research focuses on objective data and statistical analysis.

You can further break down the types of research design into five categories:


1. Descriptive: In a descriptive design, a researcher is solely interested in describing the situation or case under study. It is a theory-based method created by gathering, analyzing, and presenting collected data. It allows a researcher to provide insights into the why and how of the research and helps others better understand its purpose. If the problem statement is not yet clear, you can conduct exploratory research first.

2. Experimental: Experimental research establishes a relationship between the cause and effect of a situation. It is a causal design in which one observes the impact of an independent variable on a dependent variable: for example, the influence of price on customer satisfaction or brand loyalty. It is an efficient method because it contributes directly to solving a problem.

The independent variables are manipulated to monitor the changes they produce in the dependent variable. The social sciences often use this design to observe human behavior by analyzing two groups: researchers can have participants change their actions and study how the people around them react in order to better understand social psychology.

3. Correlational research: Correlational research is a non-experimental technique that helps researchers establish a relationship between two closely connected variables. The researcher makes no assumptions while evaluating the relationship; statistical analysis techniques are used to calculate it. This type of research requires measurements of two different variables.

A correlation coefficient quantifies the correlation between two variables; its value ranges between -1 and +1. A coefficient toward +1 indicates a positive relationship between the variables, while a coefficient toward -1 indicates a negative relationship.
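As a quick illustration, the Pearson correlation coefficient can be computed directly from its definition. The data below are invented for the example:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: hours of study and exam scores for eight students.
hours = [2, 4, 5, 7, 8, 10, 11, 13]
scores = [52, 58, 60, 68, 71, 80, 83, 90]

r = pearson_r(hours, scores)
# A value of r close to +1 indicates a strong positive relationship.
```

With these invented numbers the coefficient comes out very close to +1, consistent with the near-linear relationship between the two lists.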

4. Diagnostic research: In diagnostic design, the researcher is looking to evaluate the underlying cause of a specific topic or phenomenon. This method helps one learn more about the factors that create troublesome situations. 

This design has three parts of the research:

  • Inception of the issue
  • Diagnosis of the issue
  • Solution for the issue

5. Explanatory research : Explanatory design uses a researcher’s ideas and thoughts on a subject to further explore their theories. The study explains unexplored aspects of a subject and details the research questions’ what, how, and why.

Benefits of Research Design

There are several benefits to having a well-designed research plan, including:

  • Clarity of research objectives: Research design provides a clear understanding of the research objectives and the desired outcomes.
  • Increased validity and reliability: Research design helps minimize the risk of bias and control extraneous variables, which ensures valid and reliable results.
  • Improved data collection: Research design helps ensure that the proper data is collected systematically and consistently.
  • Better data analysis: Research design helps ensure that the collected data can be analyzed effectively, providing meaningful insights and conclusions.
  • Improved communication: A well-designed study helps ensure the results are clear and communicated effectively within the research team and to external stakeholders.
  • Efficient use of resources: Research design helps ensure that resources are used efficiently, reducing waste and maximizing the impact of the research.

A well-designed research plan is essential for successful research: it provides clear and meaningful insights and ensures that resources are used effectively.


What Is a Conceptual Framework? | Tips & Examples

Published on August 2, 2022 by Bas Swaen and Tegan George. Revised on March 18, 2024.

[Figure: example conceptual framework]

A conceptual framework illustrates the expected relationship between your variables. It defines the relevant objectives for your research process and maps out how they come together to draw coherent conclusions.

Keep reading for a step-by-step guide to help you construct your own conceptual framework.

Table of contents

  • Developing a conceptual framework in research
  • Step 1: Choose your research question
  • Step 2: Select your independent and dependent variables
  • Step 3: Visualize your cause-and-effect relationship
  • Step 4: Identify other influencing variables
  • Frequently asked questions about conceptual models

A conceptual framework is a representation of the relationship you expect to see between your variables, or the characteristics or properties that you want to study.

Conceptual frameworks can be written or visual and are generally developed based on a literature review of existing studies about your topic.

Your research question guides your work by determining exactly what you want to find out, giving your research process a clear focus.

However, before you start collecting your data, consider constructing a conceptual framework. This will help you map out which variables you will measure and how you expect them to relate to one another.

In order to move forward with your research question and test a cause-and-effect relationship, you must first identify at least two key variables: your independent and dependent variables. For example, suppose your research question asks whether the number of hours a student studies affects their exam score.

  • The expected cause, “hours of study,” is the independent variable (the predictor, or explanatory variable).
  • The expected effect, “exam score,” is the dependent variable (the response, or outcome variable).

Note that causal relationships often involve several independent variables that affect the dependent variable. For the purpose of this example, we’ll work with just one independent variable (“hours of study”).
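With one independent and one dependent variable, the expected cause-and-effect relationship can be estimated as a straight line. Below is a minimal ordinary-least-squares sketch using invented data for the “hours of study” example:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept) of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (
        sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs)
    )
    return slope, my - slope * mx

# Hypothetical observations: hours of study (IV) and exam score (DV).
hours = [1, 3, 5, 8, 10, 12]
scores = [45, 55, 62, 74, 83, 92]

slope, intercept = fit_line(hours, scores)
# The slope estimates how many points each extra hour of study is worth.
```

With these invented numbers the fitted slope is roughly four points per hour of study; in a real project the estimate would of course come from collected data, not a toy list.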

Now that you’ve figured out your research question and variables, the first step in designing your conceptual framework is visualizing your expected cause-and-effect relationship.

We demonstrate this using basic design components of boxes and arrows. Here, each variable appears in a box. To indicate a causal relationship, each arrow should start from the independent variable (the cause) and point to the dependent variable (the effect).

[Figure: sample conceptual framework with an independent and a dependent variable]

It’s crucial to identify other variables that can influence the relationship between your independent and dependent variables early in your research process.

Some common variables to include are moderating, mediating, and control variables.

Moderating variables

Moderating variables (or moderators) alter the effect that an independent variable has on a dependent variable. In other words, moderators change the “effect” component of the cause-and-effect relationship.

Let’s add the moderator “IQ.” Here, a student’s IQ level can change the effect that the variable “hours of study” has on the exam score. The higher the IQ, the fewer hours of study are needed to do well on the exam.

[Figure: sample conceptual framework with a moderating variable]

Let’s take a look at how this might work. The graph below shows how the number of hours spent studying affects exam score. As expected, the more hours you study, the better your results. Here, a student who studies for 20 hours will get a perfect score.

[Figure: effect of study hours on exam score without the moderator]

But the graph looks different when we add our “IQ” moderator of 120. A student with this IQ will achieve a perfect score after just 15 hours of study.

[Figure: effect of study hours on exam score with moderator IQ = 120]

Below, the value of the “IQ” moderator has been increased to 150. A student with this IQ will only need to invest five hours of study in order to get a perfect score.

[Figure: effect of study hours on exam score with moderator IQ = 150]

Here, we see that a moderating variable does indeed change the cause-and-effect relationship between two variables.
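In code, a moderator is naturally expressed as an interaction: it rescales the effect of the independent variable rather than adding to the outcome directly. The model and all its numbers below are invented purely for illustration:

```python
def exam_score(hours, iq, cap=100):
    """Toy moderated model: the payoff per hour of study scales with IQ.

    All numbers are invented: at IQ 100, each hour of study is worth
    5 points, so reaching the cap of 100 takes 20 hours.
    """
    points_per_hour = 5 * (iq / 100)  # the moderator rescales the effect size
    return min(cap, hours * points_per_hour)

# The same 10 hours of study produce different effects at different IQ levels:
baseline = exam_score(10, iq=100)  # 50 points
boosted = exam_score(10, iq=150)   # 75 points: a stronger hours-to-score effect
```

The moderator never contributes points on its own; it only changes how strongly study hours translate into score, which is exactly the moderation pattern described above.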

Mediating variables

Now we’ll expand the framework by adding a mediating variable . Mediating variables link the independent and dependent variables, allowing the relationship between them to be better explained.

Here’s how the conceptual framework might look if a mediator variable were involved:

[Figure: conceptual framework with a mediating variable]

In this case, the mediator helps explain why studying more hours leads to a higher exam score. The more hours a student studies, the more practice problems they will complete; the more practice problems completed, the higher the student’s exam score will be.

Moderator vs. mediator

It’s important not to confuse moderating and mediating variables. To remember the difference, you can think of them in relation to the independent variable:

  • A moderating variable is not affected by the independent variable, even though it affects the dependent variable. For example, no matter how many hours you study (the independent variable), your IQ will not get higher.
  • A mediating variable is affected by the independent variable. In turn, it also affects the dependent variable. Therefore, it links the two variables and helps explain the relationship between them.
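The distinction can be made concrete with a toy simulation. In the sketch below (all rates are invented for illustration), study hours affect the exam score only through the mediator, the number of practice problems completed:

```python
def practice_problems(hours):
    """Mediator: more study hours lead to more practice problems completed."""
    return 3 * hours  # hypothetical rate: 3 problems per hour

def exam_score(problems):
    """The outcome depends on the mediator, not on study hours directly."""
    return min(100, 40 + 2 * problems)  # hypothetical scoring rule

# The effect of hours on score flows entirely through the mediator:
hours = 8
score = exam_score(practice_problems(hours))  # 3*8 = 24 problems -> 88 points
```

Unlike the moderator, the mediator is itself changed by the independent variable, and the outcome function never sees `hours` directly; the mediator carries the whole causal chain.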

Control variables

Lastly, control variables must also be taken into account. These are variables that are held constant so that they don’t interfere with the results. Even though you aren’t interested in measuring them for your study, it’s crucial to be aware of as many of them as you can.

[Figure: conceptual framework with a control variable]

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

Can you include more than one independent or dependent variable in a study? Yes, but including more than one of either type requires multiple research questions.

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.


Swaen, B. & George, T. (2024, March 18). What Is a Conceptual Framework? | Tips & Examples. Scribbr. Retrieved April 11, 2024, from https://www.scribbr.com/methodology/conceptual-framework/


The Ohio State University

Basic Research Design

What is research design?

  • Definition of Research Design: A procedure for generating answers to questions, crucial in determining the reliability and relevance of research outcomes.
  • Importance of Strong Designs: Strong designs lead to answers that are accurate and close to their targets, while weak designs may result in misleading or irrelevant outcomes.
  • Criteria for Assessing Design Strength: Evaluating a design’s strength involves understanding the research question and how the design will yield reliable empirical information.

The Four Elements of Research Design (Blair et al., 2023)

[Figure: the MIDA framework relating Model, Inquiry, Data strategy, and Answer strategy (Blair et al., 2023)]

  • The MIDA Framework: Research designs consist of four interconnected elements: Model (M), Inquiry (I), Data strategy (D), and Answer strategy (A), collectively referred to as MIDA.
  • Theoretical Side (M and I): This encompasses the researcher’s beliefs about the world (Model) and the target of inference or the primary question to be answered (Inquiry).
  • Empirical Side (D and A): This includes the strategies for collecting (Data strategy) and analyzing or summarizing information (Answer strategy).
  • Interplay between Theoretical and Empirical Sides: The theoretical side sets the research challenges, while the empirical side represents the researcher’s responses to these challenges.
  • Relation among MIDA Components: The diagram above shows how the four elements of a design are interconnected and how they relate to both real-world and simulated quantities.
  • Parallelism in Design Representation: The illustration highlights two key parallelisms in research design: between actual and simulated processes, and between the theoretical (M, I) and empirical (D, A) sides.
  • Importance of Simulated Processes: The parallelism between actual and simulated processes is crucial for understanding and evaluating research designs.
  • Balancing Theoretical and Empirical Aspects: Effective research design requires a balance between theoretical considerations (models and inquiries) and empirical methodologies (data and answer strategies).
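The four MIDA elements can be sketched in code. The following is a minimal, hypothetical rendering of a two-arm experiment, loosely inspired by the declare-diagnose-redesign workflow of Blair et al. (2023); all function names, numbers, and the simple difference-in-means estimator are our own illustration, not their implementation:

```python
import random
import statistics

random.seed(1)  # reproducible simulation

def model(n=200, effect=0.5):
    """M: beliefs about the world. Units with untreated outcomes and a true effect."""
    return [{"y0": random.gauss(0, 1), "effect": effect} for _ in range(n)]

def inquiry(units):
    """I: the estimand. Here, the average treatment effect."""
    return statistics.mean(u["effect"] for u in units)

def data_strategy(units):
    """D: randomly assign treatment and measure the realized outcome."""
    data = []
    for u in units:
        treated = random.random() < 0.5
        y = u["y0"] + (u["effect"] if treated else 0.0)
        data.append({"treated": treated, "y": y})
    return data

def answer_strategy(data):
    """A: the estimator. Difference in means between the two arms."""
    t = [d["y"] for d in data if d["treated"]]
    c = [d["y"] for d in data if not d["treated"]]
    return statistics.mean(t) - statistics.mean(c)

# Diagnosis: simulate the design repeatedly and compare A's answers with I.
units = model()
estimand = inquiry(units)
estimates = [answer_strategy(data_strategy(units)) for _ in range(500)]
bias = statistics.mean(estimates) - estimand  # should be close to zero here
```

The point of the simulation is the diagnosis step: because the model is declared in code, the answer strategy can be checked for bias against the inquiry before any real data is collected.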

Research Design Principles (Blair et al., 2023)

  • Integration of Components: Designs are effective not merely due to their individual components but how these components work together.
  • Focus on Entire Design: Assessing a design requires examining how each part, such as the question, estimator, and sampling method, fits into the overall design.
  • Importance of Diagnosis: The evaluation of a design’s strength lies in diagnosing the whole design, not just its parts.
  • Strong Design Characteristics: Designs with parallel theoretical and empirical aspects tend to be stronger.
  • The M:I:D:A Analogy: Effective designs often align data strategies with models and answer strategies with inquiries.
  • Flexibility in Models: Good designs should perform well even under varying world scenarios, not just under expected conditions.
  • Broadening Model Scope: Designers should consider a wide range of models, assessing the design’s effectiveness across these.
  • Robustness of Inquiries and Strategies: Inquiries should yield answers and strategies should be applicable regardless of variations in real-world events.
  • Diagnosis Across Models: It’s important to understand for which models a design excels and for which it falters.
  • Specificity of Purpose: A design is deemed good when it aligns with a specific purpose or goal.
  • Balancing Multiple Criteria: Designs should balance scientific precision, logistical constraints, policy goals, and ethical considerations.
  • Diverse Goals and Assessments: Different designs may be optimal for different goals; the purpose dictates the design evaluation.
  • Early Planning Benefits: Designing early allows for learning and improving design properties before data collection.
  • Avoiding Post-Hoc Regrets: Early design helps avoid regrets related to data collection or question formulation.
  • Iterative Improvement: The process of declaration, diagnosis, and redesign improves designs, ideally done before data collection.
  • Adaptability to Changes: Designs should be flexible to adapt to unforeseen circumstances or new information.
  • Expanding or Contracting Feasibility: The scope of feasible designs may change due to various practical factors.
  • Continual Redesign: The principle advocates for ongoing design modification, even post research completion, for robustness and response to criticism.
  • Improvement Through Sharing: Sharing designs via a formalized declaration makes it easier for others to understand and critique.
  • Enhancing Scientific Communication: Well-documented designs facilitate better communication and justification of research decisions.
  • Building a Design Library: The idea is to contribute designs to a shared library, allowing others to learn from and build upon existing work.

The Basics of Social Science Research Designs (Panke, 2018)

Deductive and Inductive Research

[Figure: deductive and inductive research designs (Panke, 2018)]

Inductive research:

  • Starting Point: Begins with empirical observations or exploratory studies.
  • Development of Hypotheses: Hypotheses are formulated after initial empirical analysis.
  • Case Study Analysis: Involves conducting explorative case studies and analyzing dynamics at play.
  • Generalization of Findings: Insights are then generalized across multiple cases to verify their applicability.
  • Application: Suitable for novel phenomena or where existing theories are not easily applicable.
  • Example Cases: Exploring new events like Donald Trump’s 2016 nomination or Russia’s annexation of Crimea in 2014.
Deductive research:

  • Theory-Based: Starts with existing theories to develop scientific answers to research questions.
  • Hypothesis Development: Hypotheses are specified and then empirically examined.
  • Empirical Examination: Involves a thorough empirical analysis of hypotheses using sound methods.
  • Theory Refinement: Results can refine existing theories or contribute to new theoretical insights.
  • Application: Preferred when existing theories relate to the research question.
  • Example Projects: Usually explanatory projects asking ‘why’ questions to uncover relationships.

Explanatory and Interpretative Research Designs

[Figure: explanatory and interpretative research designs (Panke, 2018)]

  • Explanatory Research: Explanatory research aims to explain the relationships between variables, often addressing ‘why’ questions. It is primarily concerned with identifying cause-and-effect dynamics and is typically quantitative in nature. The goal is to test hypotheses derived from theories and to establish patterns that can predict future occurrences.
  • Interpretative Research: Interpretative research focuses on understanding the deeper meaning or underlying context of social phenomena. It often addresses ‘how is this possible’ questions, seeking to comprehend how certain outcomes or behaviors are produced within specific contexts. This type of research is usually qualitative and prioritizes individual experiences and perceptions.
  • Explanatory Research: Poses ‘why’ questions to explore causal relationships and understand what factors influence certain outcomes.
  • Interpretative Research: Asks ‘how is this possible’ questions to delve into the processes and meanings behind social phenomena.
  • Explanatory Research: Relies on established theories to form hypotheses about causal relationships between variables. These theories are then tested through empirical research.
  • Interpretative Research: Uses theories to provide a framework for understanding the social context and meanings. The focus is on constitutive relationships rather than causal ones.
  • Explanatory Research: Often involves studying multiple cases to allow for comparison and generalization. It seeks patterns across different scenarios.
  • Interpretative Research: Typically concentrates on single case studies, providing an in-depth understanding of that particular case without necessarily aiming for generalization.
  • Explanatory Research: Aims to produce findings that can be generalized to other similar cases or populations. It seeks universal or broad patterns.
  • Interpretative Research: Offers detailed insights specific to a single case or context. These findings are not necessarily intended to be generalized but to provide a deep understanding of the particular case.

Qualitative, Quantitative, and Mixed-method Projects

  • Qualitative Research: Qualitative research is exploratory and aims to understand human behavior, beliefs, feelings, and experiences. It involves collecting non-numerical data, often through interviews, focus groups, or textual analysis. This method is ideal for gaining in-depth insights into specific phenomena.
  • Example in Education: A qualitative study might involve conducting in-depth interviews with teachers to explore their experiences and challenges with remote teaching during the pandemic. This research would aim to understand the nuances of their experiences, challenges, and adaptations in a detailed and descriptive manner.
  • Quantitative Research: Quantitative research seeks to quantify data and generalize results from a sample to the population of interest. It involves measurable, numerical data and often uses statistical methods for analysis. This approach is suitable for testing hypotheses or examining relationships between variables.
  • Example in Education: A quantitative study could involve surveying a large number of students to determine the correlation between the amount of time spent on homework and their academic achievement. This would involve collecting numerical data (hours of homework, grades) and applying statistical analysis to examine relationships or differences.
  • Mixed-method Research: Mixed-method research combines both qualitative and quantitative approaches, providing a more comprehensive understanding of the research problem. It allows for the exploration of complex research questions by integrating numerical data analysis with detailed narrative data.
  • Example in Education: A mixed-method study might investigate the impact of a new teaching method. The research could start with quantitative methods, like administering standardized tests to measure learning outcomes, followed by qualitative methods, such as conducting focus groups with students and teachers to understand their perceptions and experiences with the new teaching method. This combination provides both statistical results and in-depth understanding.
When choosing among these approaches, consider:

  • Research Questions: What kind of information is needed to answer the questions? Qualitative for “how” and “why”, quantitative for “how many” or “how much”, and mixed methods for a comprehensive understanding of both the breadth and depth of a phenomenon.
  • Nature of the Study: Is the study aiming to explore a new area (qualitative), confirm hypotheses (quantitative), or achieve both (mixed-method)?
  • Resources Available: Time, funding, and expertise available can influence the choice. Qualitative research can be more time-consuming, while quantitative research may require specific statistical skills.
  • Data Sources: Availability and type of data also guide the methodology. Existing numerical data might lean towards quantitative, while studies requiring personal experiences or opinions might be qualitative.

References:

Blair, G., Coppock, A., & Humphreys, M. (2023).  Research Design in the Social Sciences: Declaration, Diagnosis, and Redesign . Princeton University Press.

Panke, D. (2018). Research Design & Method Selection: Making Good Choices in the Social Sciences.


Research Design and Methodology

Submitted: 23 January 2019 Reviewed: 08 March 2019 Published: 07 August 2019

DOI: 10.5772/intechopen.85731

From the edited volume Cyberspace, edited by Evon Abu-Taieh, Abdelkrim El Mouatasim and Issam H. Al Hadid.


There are a number of approaches used in this research design. The purpose of this chapter is to set out the methodology of the research using mixed research techniques; the research approach also guides the researcher toward the result findings. In this chapter, the general design of the research and the methods used for data collection are explained in detail. It includes three main parts: the first highlights the dissertation design, the second discusses qualitative and quantitative data collection methods, and the last illustrates the general research framework. The purpose of this section is to indicate how the research was conducted throughout the study period.

Keywords: research design, methodology, data sources

Author Information

Kassu Jilcha Sileyew*

  • School of Mechanical and Industrial Engineering, Addis Ababa Institute of Technology, Addis Ababa University, Addis Ababa, Ethiopia

*Address all correspondence to: [email protected]

1. Introduction

Research methodology is the path through which researchers conduct their research. It shows how they formulate their problem and objective and present their results from the data obtained during the study period. This research design and methodology chapter also shows how the research outcome will be obtained in line with the objective of the study. The chapter therefore discusses the research methods used during the research process, from the research strategy through to result dissemination. For emphasis, the author outlines: the research strategy, design, and methodology; the study area; the data sources (primary and secondary); the population and sample size determination (for both the questionnaires and the workplace site exposure measurements); the data collection methods (workplace site observation, desk review, questionnaires, experts' opinion, and workplace site exposure measurement, together with a pretest of the data collection tools); the methods of data analysis (quantitative and qualitative) and the analysis software; the reliability and validity analysis of the quantitative data; data quality management; inclusion criteria; ethical considerations; and the dissemination and utilization of the results. To satisfy the objectives of the study, qualitative and quantitative research methods were used in combination. These mixed strategies were used because the data were obtained from all aspects of the data source during the study period. The purpose of this methodology is therefore to satisfy the research plan and target devised by the researcher.

2. Research design

The research design is intended to provide an appropriate framework for a study. A very significant decision in the research design process is the choice of research approach, since it determines how the relevant information for the study will be obtained; however, the research design process involves many interrelated decisions [ 1 ].

This study employed a mixed type of methods. The first part of the study consisted of a series of well-structured questionnaires (for management, employees' representatives, and technicians of industries) and semi-structured interviews with key stakeholders (government bodies, ministries, and industries) in participating organizations. The study also used interviews of employees to learn how they feel about the safety and health of their workplace, together with field observation at the selected industrial sites.

Hence, this study employs a descriptive research design to determine the effects of an occupational safety and health management system on employee health, safety, and property damage in selected manufacturing industries. Saunders et al. [ 2 ] and Miller [ 3 ] state that descriptive research portrays an accurate profile of persons, events, or situations. This design offers the researchers a profile of the relevant aspects of the phenomena of interest from individual, organizational, and industry-oriented perspectives. It therefore enabled the researchers to gather data from a wide range of respondents on the impact of safety and health on manufacturing industries in Ethiopia, which helped in analyzing how the responses bear on workplace safety and health in the manufacturing industries. The overall research design and flow process are depicted in Figure 1 .

Figure 1. Research methods and processes (author design).

3. Research methodology

To address the key research objectives, this research used both qualitative and quantitative methods and a combination of primary and secondary sources. The qualitative data support the quantitative data analysis and results. The results are triangulated, since the researcher utilized both qualitative and quantitative data types in the analysis. The study area, data sources, and sampling techniques are discussed in this section.

3.1 The study area

According to Fraenkel and Warren [ 4 ], a population refers to the complete set of individuals (subjects or events) having common characteristics in which the researcher is interested. The study population was determined by random sampling. Data collection was conducted from March 07, 2015 to December 10, 2016, in selected manufacturing industries in and around Addis Ababa city. The manufacturing companies were selected based on their number of employees, year of establishment, prevailing accident potential, and manufacturing industry type, even though all criteria were difficult to satisfy.

3.2 Data sources

3.2.1 Primary data sources

Primary data were obtained from the original sources of information. They are more reliable and support decision-making with a higher level of confidence, because the analysis has direct contact with the occurrence of the events. The primary data sources are the industries' working environment (through observation, pictures, and photographs) and industry employees, both management and shop-floor workers (through interviews, questionnaires, and discussions).

3.2.2 Secondary data

A desk review was conducted to collect data from various secondary sources, including reports and project documents in each manufacturing sector (with emphasis on medium and large enterprises). Secondary data were obtained from the literature on OSH; the remaining data came from the companies' manuals, reports, and management documents included in the desk review. Reputable journals, books, articles, periodicals, proceedings, magazines, newsletters, newspapers, websites, and other sources on the manufacturing industrial sectors were considered, and the existing working documents, manuals, procedures, reports, statistical data, policies, regulations, and standards were also taken into account in the review.

In general, the desk review for this research study was completed to this end, and it was refined against the manuals and documents obtained from the selected companies.

4. Population and sample size

4.1 Population

The study population consisted of manufacturing industries' employees in and around Addis Ababa city, where the most representative manufacturing industrial clusters are found. To select a representative population, the industry types judged most prone to accidents were considered, using random and purposive sampling. The population was drawn from the textile, leather, metal, chemical, and food manufacturing industries. A total of 189 industries from the government's priority areas responded to the questionnaire survey. Random and disproportionate sampling methods were used: 80 responses came from wood, metal, and iron works; 30 from food, beverage, and tobacco products; 50 from leather, textile, and garments; 20 from chemical and chemical products; and 9 from the remaining clusters of manufacturing industries.

4.2 Questionnaire sample size determination

Simple random sampling and purposive sampling methods were used to select the representative manufacturing industries and respondents for the study. Simple random sampling ensures that each member of the population has an equal chance of selection. A sample size determination procedure was used to obtain optimum and reasonable information. Because the industries vary in nature, both probability (simple random) and nonprobability (convenience, quota, purposive, and judgmental) sampling methods were used in this study; the characteristics of the data sources permitted the researchers to follow these multiple methods, which helps triangulate the data obtained and increases the reliability of the research outcome and the decisions based on it. The selection criteria included the companies' establishment time and period of operation, the number of employees and their proportions, the ownership type (government or private), the type of manufacturing industry/production, the types of resources used at work, and the location in and around the city.
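The simple random sampling step described above can be sketched in a few lines of Python. The firm identifiers and frame size below are purely illustrative, not the study's actual sampling frame:

```python
import random

def simple_random_sample(population, k, seed=None):
    """Draw k members without replacement; each member has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(list(population), k)

# Hypothetical sampling frame of firm IDs (illustrative only)
frame = [f"firm_{i:03d}" for i in range(300)]
respondents = simple_random_sample(frame, 20, seed=42)
print(len(respondents))  # 20 distinct firms drawn at random
```

A fixed seed is used here only to make the illustration reproducible; a real survey draw would not need one.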

The determination of the sample size was adopted from the Daniel [ 5 ] and Cochran [ 6 ] formula. For an unknown population size, the formula, Eq. (1), is given as

n = Z²P(1 − P)/d²    (1)

where n = sample size, Z = statistic for a level of confidence, P = expected prevalence or proportion (in proportion of one; if 50%, P = 0.5), and d = precision (in proportion of one; if 6%, d = 0.06). For the conventional 95% level of confidence, the Z value is 1.96. In this study, investigators present their results with 95% confidence intervals (CI).

The expected sample size was 267 at a margin of error of 6% and a 95% confidence interval for the manufacturing industries. However, only 189 responses were usable for the analysis after rejecting questionnaires with many missing values, giving a 71% response rate. This sample was assumed to be satisfactory and representative for the data analysis.
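As a quick check, the sample size formula can be evaluated directly. This minimal Python sketch uses the values stated in the text (Z = 1.96, P = 0.5, d = 0.06) and reproduces both the expected sample of 267 and the 71% response rate:

```python
import math

def cochran_sample_size(z, p, d):
    """Cochran's formula for an unknown (large) population: n = Z^2 * P * (1 - P) / d^2."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

n = cochran_sample_size(z=1.96, p=0.5, d=0.06)
print(n)  # 267, matching the expected sample size in the text

response_rate = 189 / n  # 189 usable responses were collected
print(round(response_rate * 100))  # 71 (percent), matching the reported response rate
```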

4.3 Workplace site exposure measurement sample determination

The sample size for the experimental exposure measurements of the physical work environment was based on the physical data prepared for the questionnaires and respondents. Workplaces with positive responses were considered for measurement of the physical exposure factors that cause ill health and disease, such as noise intensity, light intensity, pressure/stress, vibration, temperature (coldness or hotness), and dust particles, across 20 workplace sites. Sites were selected using random sampling combined with purposive sampling. The exposure factors were measured in collaboration with the Addis Ababa City Administration and Oromia Bureau of Labour and Social Affairs (AACBOLSA), which also supplied some of the measuring instruments.

5. Data collection methods

Data collection focused on the following basic techniques: secondary and primary data collection covering both qualitative and quantitative data, as defined in the previous section. The data collection mechanisms were devised and prepared with their proper procedures.

5.1 Primary data collection methods

Primary data sources are qualitative and quantitative. The qualitative sources are field observation, interviews, and informal discussions, while the quantitative sources are survey questionnaires and interview questions. The next sections elaborate how the data were obtained from the primary sources.

5.1.1 Workplace site observation data collection

Observation is an important aspect of science. Observation is tightly connected to data collection, and there are different sources for it: documentation, archival records, interviews, direct observations, and participant observations. Observational research findings are considered strong in validity because the researcher is able to collect a depth of information about a particular behavior. In this dissertation, the researchers used observation as one tool for collecting information and data, both before questionnaire design and after the research started. The researcher made more than 20 specific observations of manufacturing industries in the study areas. During these observations, the researcher gained a deeper understanding of the working environment, the different sections of the production system, and OSH practices.

5.1.2 Data collection through interview

An interview is a loosely structured, qualitative, in-depth conversation with people who are considered to be particularly knowledgeable about the topic of interest. The semi-structured interview is usually conducted face to face, which permits the researcher to seek new insights, ask questions, and assess phenomena from different perspectives. It lets the researcher understand in depth the influential factors and consequences of the present working environment. It provided opportunities for refining data collection efforts and examining specialized systems or processes. It was used when written records or published documents were limited, or to triangulate the data obtained from other primary and secondary sources.

This dissertation also followed a qualitative approach through interviews. The advantage of using interviews as a method is that they allow respondents to raise issues that the interviewer may not have expected. All interviews with employees, management, and technicians were conducted by the corresponding researcher, face to face at the workplace. All interviews were recorded and transcribed.

5.1.3 Data collection through questionnaires

Questionnaires are the main tool for gaining primary information in practical research, because the researcher can decide on the sample and the types of questions to be asked [ 2 ].

In this dissertation, each respondent was asked to reply to an identical list of questions, with the items mixed so that bias was prevented. Initially the questionnaire was coded and its items shuffled across specific topics within a uniform structure. Consequently, the questionnaire produced the valuable data required to achieve the dissertation objectives.

The questionnaires developed were based on a five-point Likert scale. Responses were given to each statement using this scale, where 1 = "strongly disagree" and 5 = "strongly agree." The responses were summed to produce a score for each measure.
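The summed-score step can be illustrated with a short sketch; the answers below are a made-up respondent, not study data:

```python
def likert_score(responses):
    """Sum five-point Likert responses (1 = strongly disagree ... 5 = strongly agree)."""
    if any(r not in {1, 2, 3, 4, 5} for r in responses):
        raise ValueError("responses must be integers 1-5")
    return sum(responses)

# One hypothetical respondent's answers to a seven-item measure
answers = [4, 5, 3, 4, 2, 5, 4]
print(likert_score(answers))  # 27
```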

5.1.4 Data obtained from experts’ opinion

Data were also obtained from experts' opinion on the comparison of knowledge, management, collaboration, and technology utilization, including their sub-factors. The data obtained in this way were used for prioritizing OSH improvement factors and for decision-making. The factors were prioritized using Saaty scales (1-9) and then converted to fuzzy set values, using triangular fuzzy numbers drawn from previous research [ 7 ].
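The chapter does not give the exact fuzzification used. One common convention in the fuzzy-AHP literature maps each crisp Saaty judgment a to a triangular fuzzy number (a − 1, a, a + 1), clipped to the 1-9 range; the sketch below assumes that convention purely for illustration:

```python
def saaty_to_tfn(a):
    """Map a crisp Saaty judgment (1-9) to a triangular fuzzy number (l, m, u).

    Assumes one common convention: a spread of +/-1 around the crisp value,
    clipped to the [1, 9] bounds of the Saaty scale.
    """
    if not 1 <= a <= 9:
        raise ValueError("Saaty scale values lie in 1..9")
    return (max(1, a - 1), a, min(9, a + 1))

print(saaty_to_tfn(1))  # (1, 1, 2)
print(saaty_to_tfn(5))  # (4, 5, 6)
print(saaty_to_tfn(9))  # (8, 9, 9)
```

Other spreads (e.g. +/-2) appear in the literature; the choice is a modeling decision, not fixed by the method.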

5.1.5 Workplace site exposure measurement

The researcher measured the workplace environment for dust, vibration, heat, pressure, light, and noise to determine the level of each variable. The planned primary data sources and the actual coverage are compared in Table 1 .

Table 1. Planned versus actual coverage of the survey.

The response rate for the proposed data sources was good, and the pilot test also confirmed the reliability of the questionnaires. Interviews/discussions yielded an 87% response rate, the survey questionnaires 71%, and the field observations 90% for the whole data analysis process. Hence, the quality of the organized data was not compromised.

This response rate is considered representative for studies of organizations: a response rate of 30% is considered acceptable [ 8 ], and Saunders et al. [ 2 ] argue that a 20% response rate is acceptable for a scale-response questionnaire. A low response rate should not discourage researchers, because a great deal of published research also achieves low response rates. Hence, the response rate of this study is acceptable and very good for the purpose of meeting the study objectives.

5.1.6 Data collection tool pretest

A pretest of the questionnaires, interviews, and tools was conducted to check whether the tool content was valid in the sense of the respondents' understanding. Hence, content validity (the questions address the target without excluding important points), internal validity (the questions raised answer the researchers' target outcomes), and external validity (the results can be generalized from the survey sample to the whole population) were considered. These were checked in a pilot test prior to the start of the main data collection. Following the feedback process, a few minor changes were made to the originally designed data collection tools. The pilot test of the questionnaire used a sample of 10, selected randomly from the target sectors and experts.

5.2 Secondary data collection methods

Secondary data refers to data that was collected by someone other than the user. These sources give insight into the current state of the art in the research area and help identify the research gap that the researcher needs to fill. Secondary data can come from internal and external sources of information covering a wide range of areas.

Literature/desk review and industry documents and reports: To achieve the dissertation’s objectives, the researcher has conducted excessive document review and reports of the companies in both online and offline modes. From a methodological point of view, literature reviews can be comprehended as content analysis, where quantitative and qualitative aspects are mixed to assess structural (descriptive) as well as content criteria.

A literature search was conducted using database sources such as MEDLINE; Emerald; Taylor and Francis publications; EMBASE (medical literature); PsycINFO (psychological literature); Sociological Abstracts (sociological literature); accident prevention journals; US Statistics of Labor; the European Safety and Health database; ABI Inform; Business Source Premier (business/management literature); EconLit (economic literature); Social Service Abstracts (social work and social service literature); and other related materials. The search strategy focused on articles or reports that measure one or more of the dimensions within the research OSH model framework, and was based on a framework and measurement filter strategy developed by the Consensus-Based Standards for the Selection of Health Measurement Instruments (COSMIN) group. Based on screening, articles unrelated to the research model and objectives were excluded. Prior to screening, the researcher (principal investigator) reviewed a sample of more than 2000 articles, websites, reports, and guidelines to determine whether they should be included for further review or rejected. Discrepancies were thoroughly identified and resolved before the review of the main group of more than 300 articles commenced. After excluding articles based on title, keywords, and abstract, the remaining articles were reviewed in detail, and information was extracted on the instrument used to assess the dimension of research interest. A complete list of items was then collated within each research target or objective and reviewed to identify any missing elements.

6. Methods of data analysis

The data analysis followed the procedures listed in the following sections and answered the basic questions raised in the problem statement. The experiences of developed and developing countries with OSH in manufacturing industries were analyzed in detail, discussed, compared and contrasted, and synthesized.

6.1 Quantitative data analysis

Quantitative data were obtained from the primary and secondary sources discussed above in this chapter. The analysis was matched to the data type, using Excel, SPSS 20.0, Word, and other tools, and focused on numerical/quantitative data.

Before analysis, the responses were coded. To ease analysis, the questionnaire data were coded into SPSS 20.0. This task involved identifying, classifying, and assigning a numeric or character symbol to the data, done in only one way, pre-coded [ 9 , 10 ]. In this study, all of the responses were pre-coded: each selection from the list of responses was given a corresponding number. This process was applied to every question that needed such treatment. Upon completion, the data were entered into the statistical analysis software package, SPSS version 20.0 on Windows 10, for the next steps.

Under the data analysis, exploration of the data was made with descriptive statistics and graphical analysis. The analysis explored the relationships between variables and compared how groups affect each other, using cross tabulation/chi-square, correlation, factor analysis, and nonparametric statistics.
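The cross tabulation/chi-square step named above was run in SPSS; purely to illustrate the statistic itself, here is a self-contained Python sketch of the Pearson chi-square computation on an invented 2x2 table (the counts are hypothetical, not the study's data):

```python
def chi_square(observed):
    """Pearson chi-square statistic for an r x c contingency table (list of rows)."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand  # expected count under independence
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical cross tabulation: two industry types (rows) vs. accident reported yes/no (columns)
table = [[30, 10],
         [20, 20]]
print(round(chi_square(table), 3))  # 5.333
```

The statistic would then be compared against a chi-square distribution with (r − 1)(c − 1) degrees of freedom; SPSS reports that p-value directly.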

6.2 Qualitative data analysis

Qualitative data analysis was used to triangulate the quantitative data analysis. The interview, observation, and report records were used to support the findings, and this analysis is incorporated with the quantitative results in the data analysis parts.

6.3 Data analysis software

The data were entered and analyzed using SPSS 20.0 on Windows 10. The SPSS-supported analysis contributed much to the findings and to validating the correctness of the results. The software was used to analyze and compare the results for the different variables in the research questionnaires. Excel was also used to draw the figures and calculate some analytical solutions.

7. The reliability and validity analysis of the quantitative data

7.1 Reliability of data

The reliability of a measurement specifies the extent to which it is without bias (error free) and hence ensures consistent measurement across time and across the various items in the instrument [ 8 ]. The reliability analysis checked the stability and consistency of the data, as well as the accuracy and precision of the measurement procedure. Reliability has numerous definitions and approaches, but in most contexts the concept comes down to consistency [ 8 ]. A measurement fulfills the requirements of reliability when it produces consistent results during the data analysis procedure. Reliability was determined through Cronbach's alpha, as shown in Table 2 .

Table 2. Internal consistency and reliability test of questionnaire items.

K stands for knowledge; M, management; T, technology; C, collaboration; P, policy, standards, and regulation; H, hazards and accident conditions; PPE, personal protective equipment.

7.2 Reliability analysis

Cronbach’s alpha is a measure of internal consistency, i.e., how closely related a set of items are as a group [ 11 ]. It is considered to be a measure of scale reliability, and the internal consistency of a scale is most often assessed through the Cronbach’s alpha value. A reliability coefficient of 0.70 and above is considered “acceptable” in most research situations [ 12 ]. In this study, after deleting 13 items, the reliability coefficient for the remaining 76 Likert-scale items was 0.964, and the coefficients for the individual groupings are shown in Table 2 . The scale was thus found internally consistent by the Cronbach’s alpha test. Table 2 shows the internal consistency of the seven major instruments, whose reliability falls within the acceptable range for this research.
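Cronbach's alpha is simple enough to compute by hand: alpha = k/(k − 1) x (1 − sum of item variances / variance of the total score), where k is the number of items. A minimal sketch, with invented Likert responses for illustration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of total score).

    `items` is a list of columns, one list of scores per item (same respondents in order).
    """
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical five-point Likert responses: 3 items answered by 5 respondents
items = [[4, 5, 3, 4, 4],
         [4, 4, 3, 5, 4],
         [5, 5, 2, 4, 4]]
print(round(cronbach_alpha(items), 3))  # 0.818, above the 0.70 "acceptable" threshold
```

SPSS's RELIABILITY procedure (used in the study) computes the same coefficient from sample variances; the value reported here is only for the toy data above.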

7.3 Validity

Face validity, as defined by Babbie [ 13 ], is an indicator that makes an instrument seem a reasonable measure of some variable; it is the subjective judgment that the instrument measures what it intends to measure in terms of relevance [ 14 ]. Thus, in developing the instruments for this study, the researcher eliminated uncertainties by using appropriate words and concepts in order to enhance clarity and general suitability [ 14 ]. Furthermore, the researcher submitted the instruments to the research supervisor and the joint supervisor, both occupational health experts, to ensure the validity of the measuring instruments and determine whether they could be considered valid on face value.

In this study, the researcher was guided by the reviewed literature on compliance with occupational health and safety conditions and on data collection methods before developing the measuring instruments. In addition, the pretest study conducted prior to the main study helped the researcher avoid uncertainties in the contents of the data collection instruments. A thorough inspection of the measuring instruments by the statistician, the researcher's supervisor, and the joint experts, to ensure that all concepts pertaining to the study were included, further enriched the instruments.

8. Data quality management

The data collectors were briefed on how to approach companies, and many of the questionnaires were distributed through MSc students at the Addis Ababa Institute of Technology (AAiT) and experienced manufacturing industry experts. This made the data quality reliable, as it was continually discussed with them. The questionnaire was pretested on 10 workers to assure data quality and to improve the data collection tools. Supervision during data collection checked how the data collectors were handling the questionnaire, and each completed questionnaire was checked daily for completeness, accuracy, clarity, and consistency, either face to face or by phone/email. Data judged to be of poor quality were rejected during screening. Of the 267 planned questionnaires, 189 were returned. Finally, the data were analyzed by the principal investigator.

9. Inclusion criteria

Data were collected from company representatives with knowledge of OSH. Articles written in English and Amharic were included in this study. Database records were included if they related to OSH areas such as intervention methods, methods of accident identification, types of occupational injuries/diseases, and the impact of occupational accidents and disease on productivity and company costs, and if they used at least one form of feedback mechanism. No specific time period was imposed, in order to access all available published papers. Duplicate statements in the questionnaire were excluded from the data analysis.

10. Ethical consideration

Ethical clearance was obtained from the School of Mechanical and Industrial Engineering, Addis Ababa Institute of Technology, Addis Ababa University. Official letters were written from the School of Mechanical and Industrial Engineering to the respective manufacturing industries. The purpose of the study was explained to the study subjects, who were told that the information they provided would be kept confidential and that their identities would not be revealed in association with it. Informed consent was secured from each participant. Where the assessment found a bad working environment, feedback will be given to all manufacturing industries involved in the study, and there is a plan to give a copy of the results to the respective manufacturing industries and ministry offices. The respondents' privacy was respected, and their responses were not individually analyzed or included in the report.

11. Dissemination and utilization of the result

The results of this study will be presented to Addis Ababa University, AAiT, School of Mechanical and Industrial Engineering. They will also be communicated to the Ethiopian manufacturing industries, the Ministry of Labor and Social Affairs, the Ministry of Industry, and the Ministry of Health, from which the data were collected. The results will also be made available through publication and online, indexed in Google Scholar. To this end, about five articles have been published and disseminated worldwide.

12. Conclusion

The research methodology and design indicate the overall flow of the research for the given study, including the data sources and data collection methods used. The overall research strategy and framework cover the research process from problem formulation to problem validation, including all the parameters. The chapter lays a foundation for how a research methodology is devised and framed, so that researchers can consider it a sample and model for data collection and processing, from the problem statement to the research findings. In particular, this research flow helps researchers who are new to the research environment and methodology.

Conflict of interest

The author declares no conflict of interest.

  • 1. Aaker A, Kumar VD, George S. Marketing Research. New York: John Wiley & Sons Inc; 2000
  • 2. Saunders M, Lewis P, Thornhill A. Research Methods for Business Student. 5th ed. Edinburgh Gate: Pearson Education Limited; 2009
  • 3. Miller P. Motivation in the Workplace. Work and Organizational Psychology. Oxford: Blackwell Publishers; 1991
  • 4. Fraenkel FJ, Warren NE. How to Design and Evaluate Research in Education. 4th ed. New York: McGraw-Hill; 2002
  • 5. Daniel WW. Biostatistics: A Foundation for Analysis in the Health Sciences. 7th ed. New York: John Wiley & Sons; 1999
  • 6. Cochran WG. Sampling Techniques. 3rd ed. New York: John Wiley & Sons; 1977
  • 7. Saaty TL. The Analytical Hierarchy Process. Pittsburg: PWS Publications; 1990
  • 8. Sekaran U, Bougie R. Research Methods for Business: A Skill Building Approach. 5th ed. New Delhi: John Wiley & Sons, Ltd; 2010. pp. 1-468
  • 9. Luck DJ, Rubin RS. Marketing Research. 7th ed. New Jersey: Prentice-Hall International; 1987
  • 10. Wong TC. Marketing Research. Oxford, UK: Butterworth-Heinemann; 1999
  • 11. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951; 16 :297-334
  • 12. Tavakol M, Dennick R. Making sense of Cronbach’s alpha. International Journal of Medical Education. 2011; 2 :53-55. DOI: 10.5116/ijme.4dfb.8dfd
  • 13. Babbie E. The Practice of Social Research. 12th ed. Belmont, CA: Wadsworth; 2010
  • 14. Polit DF, Beck CT. Generating and Assessing Evidence for Nursing Practice. 8th ed. Philadelphia: Lippincott Williams & Wilkins; 2008

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Geektonight

What is Research Design? Features, Components

  • Post last modified: 13 August 2023
  • Reading time: 15 mins read
  • Post category: Research Methodology


What is Research Design?

Research design refers to the overall strategy or plan that a researcher outlines to conduct a study and gather relevant data to address a research question or test a hypothesis. It serves as a blueprint for the entire research process, providing a structure and guidance for the collection, analysis, and interpretation of data.

In the field of research, the major purpose is to find a solution to a given research problem. The researcher can find that solution by ensuring that an appropriate research design is used.

Table of Contents

  • 1 What is Research Design?
  • 2 Concept of Research Design
  • 3 Need and Features of Research Design
  • 4 Features of Research Design
  • 4.1 Neutrality
  • 4.2 Reliability
  • 4.3 Validity
  • 4.4 Generalisability
  • 4.5 Qualitative
  • 4.6 Quantitative
  • 5 Components of Research Design
  • 5.1 Research questions
  • 5.2 Study propositions
  • 5.3 Units of analysis
  • 5.4 Linking data to propositions
  • 5.5 Interpretation of findings from the study

The chances of success of a research project depend on how the researcher has taken care to develop a research design that is in line with the research problem. A research design is created or developed when the researcher prepares a plan, structure and strategy for conducting research.

Research design is the base on which a researcher builds the research. A good research design provides vital information on the research topic, the type of data, the data sources and the data collection techniques used in the research. In this chapter, you will study the concept of research design and its need, features and components.

Next, the chapter describes the types of research design, the research design framework and the types of errors affecting research design. Towards the end, you will study the meaning of experiments and the types of experiments.

Concept of Research Design

Research design refers to the framework of research methods and techniques selected by a researcher. The chosen design allows researchers to apply the methods that suit the subject matter and to set up their studies effectively. The descriptive research method, for example, focuses primarily on describing the nature of a group of people without examining why something happens.

In other words, it “describes” the topic of research without covering why “it” happens. Let us study in detail the concept of research design, its requirements, its features or characteristics, the design of a research framework, and related case studies and observations.

The chapter also covers cross-sectional and longitudinal studies, causal research, and the errors that arise while designing research, such as those caused by improper selection of respondents. A research design is a framework for determining the research methods and techniques to be used, and it enables researchers to select the methods that are most relevant to the subject.

The design of a research topic describes the type of research (experimental, survey, correlational, semi-experimental, review) and its sub-type (e.g., experimental design, research problem, descriptive case study). Research design can also be considered the blueprint for the collection, measurement and analysis of data.

The type of research problem an organisation is facing determines the research design, and not the other way around. The design phase of the study determines which tools to use and how they are used. An impactful research design usually creates minimal bias in the data and increases confidence in the accuracy of the data collected. A design that produces the smallest margin of error is usually considered the desired outcome.

In research design, the important elements are:

  • A specific statement of purpose
  • The strategies used to collect and analyse data
  • The type of research methodology
  • Probable objections to the research
  • The settings of the research study
  • The method of analysis

Need and Features of Research Design

Much of what we do in our daily lives is based on assumptions: what we have learned from others, or what we have learned through personal experience or observation. Sometimes there are conflicting ideas about what is good or what works in a particular situation.

In addition, what works in one situation may be ineffective or even harmful in another, or it may only work in combination with other measures. Unsystematic approaches ignore the impact of external factors that can influence what is observed. Even in health care settings, there are gaps in knowledge, ideas about how something could work better and ideas for improvement.

Since health professionals cannot afford to take such risks, research is needed. For clinical trials, this is also a legal requirement: pharmaceutical companies cannot obtain marketing authorisation (i.e., permission to sell a new drug) until it has been approved by the relevant authorities.

Another advantage of doing research is that in most studies the findings can be quantified and statistically evaluated to determine whether they are significant (i.e., whether it can be said with a certain degree of certainty that they did not occur by chance alone).

With quantitative studies, results can usually be generalised to a broader population (for example, to people with dementia, caregivers or GPs, depending on the study group). This is because steps are taken to ensure that the group of participants in the study represents, as far as possible, other people in that category.

The advantage of many qualitative studies is that they allow for a thorough investigation of a particular aspect of human experience. They give people the opportunity to express in their own words how they feel, what they think, and how they make sense of the world around them.

In some cases, the results may be transferable to others in similar circumstances. The advantage of qualitative studies is that they provide rich, coherent and insightful information on the complexity of human experience, with all its contradictions, differences and idiosyncrasies. Some address topics that have not been researched before, or confront issues that are controversial, sensitive or taboo. Some studies also work to give voice to vulnerable or marginalised groups.

Features of Research Design

A proper research design makes your study a success. Successful research provides accurate and unbiased information. You will need to create a design that incorporates all of the key features. The key features of a good research design are:

Neutrality

When planning your study, you may have to make assumptions about the data you expect to collect. The results reported in the study should be fair and free from bias. Consider the opinions of multiple individuals on the final scores and conclusions, and take into account those who agree with the derived results.

Reliability

When research is conducted repeatedly, the researcher expects to reach similar results each time. The research design should be developed in such a way that good research questions are formed and the quality of the results is ensured. You will only obtain the expected results if your design is reliable.
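One widely used way to quantify this kind of reliability for a questionnaire is internal consistency, commonly reported as Cronbach's alpha. The article does not prescribe a particular index, so the following Python sketch, with invented item scores, is only an illustration:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha; items is one list of scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Invented data: three questionnaire items answered by five respondents
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
alpha = cronbach_alpha(items)  # values close to 1 indicate consistent items
```

Here alpha is about 0.89, which would usually be read as good internal consistency; unreliable items would drag the value down.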

Validity

There are many measuring tools available. However, the only valid measuring tools are those that help a researcher measure results according to the objective of the research. The questionnaire developed from a valid design will itself be valid.

Generalisability

The outcome of your design should apply to the wider population and not just to the restricted sample. A generalised design means that your survey can be conducted on any section of the population with the same accuracy. The factors above affect the way respondents answer research questions, so all of them should be balanced in a good design. The researcher must clearly understand the different types of research design in order to choose which model to use for a particular study.

Qualitative

Qualitative research helps in understanding a problem and in developing hypotheses. Researchers rely on qualitative research methods to establish “why” a certain idea exists and what respondents say about it.

Quantitative

A quantitative study is one in which statistical conclusions are drawn from the collected data. Numbers provide a better basis for making critical business decisions. Research is needed for the growth of any organisation, and the insights drawn from hard data and its analysis are very effective in making decisions about the future of the business.

Components of Research Design

The main purpose behind the design of the study is to help avoid a situation where the evidence does not address the main research questions. The research design is about a logical problem and not a planning problem.

The five main components of a research design are:

  • Research questions
  • Study propositions
  • Units of analysis
  • Linking data to propositions
  • Interpretation of the findings of the study

These components of research design apply to all types of empirical research, whether in the physical or the social sciences.

Research questions

This first component concerns the type of question, about “who”, “what”, “where”, “how” and “why”, which provides an important clue to the appropriate research method to be used. A useful three-step approach is the following. First, use the literature to narrow your interest to one or two topics. Second, examine closely a few key studies on your preferred topic, identify the questions raised in those studies, and conclude with new questions for future research. Third, examine another set of studies on the same topic; they may support your candidate questions or suggest ways of sharpening them.

Study propositions

Each proposition directs attention to something that should be examined within the scope of the study. Only by stating propositions will you move in the right direction. For example, you might propose that businesses collaborate because they derive mutual benefits. This proposition, besides reflecting an important theoretical issue (that other incentives for collaboration do not exist or are unimportant), also begins to tell you where to look for relevant evidence (defining and determining the magnitude of the specific benefits to each business).

Units of analysis

This component is associated with the fundamental problem of defining what the “case” is, a problem that has troubled many researchers at the outset of a study. Take the example of medical patients: an individual person is being studied, and that person is the primary unit of analysis.

Information about the relevant person will be collected, and several such individuals may be included in a multiple-case study. You will need research questions and propositions to help identify the relevant information to collect about this person or persons. Without such questions and propositions, you might be tempted to cover “everything” about them, which is not possible.

Linking data to propositions

Data can be linked to propositions through techniques such as pattern matching, explanation building, time-series analysis, logic models and cross-case synthesis. The actual analysis will require you to compile or tabulate your study data as a direct reflection of your initial study propositions.

Interpretation of findings from the study

Statistical analysis determines whether the research results support the hypothesis. Various statistical tests, for example t-tests (determining whether two groups are statistically different from each other), chi-square tests (comparing observed data with an expected result) and one-way analysis of variance (allowing comparisons across more than two groups), are chosen according to the type and number of variables and the categories of data.

Statistical analysis also provides clear conventions for interpretation. For example, by convention, the social sciences treat a p value below .05 as indicating that observed differences are “statistically significant”. The analysis of qualitative case studies, on the other hand, does not depend on statistics and therefore focuses on alternative approaches to interpretation.
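As a minimal, dependency-free sketch of two of the tests named above (the scores and counts are invented for illustration; only the test statistics are computed here, and they are then compared against the conventional .05 critical values for their degrees of freedom):

```python
import math
from statistics import mean, variance

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance t-test)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

def chi_square(observed, expected):
    """Chi-square statistic comparing observed counts with expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Invented example data: two groups measured on the same scale
group_a = [5.1, 4.9, 5.3, 5.0, 5.2]
group_b = [4.2, 4.0, 4.5, 4.1, 4.3]
t = pooled_t(group_a, group_b)  # critical value at p = .05, df = 8: about 2.306

# Invented example counts compared against a uniform expectation
chi2 = chi_square([18, 22, 20, 40], [25, 25, 25, 25])  # critical at p = .05, df = 3: about 7.815
```

If |t| exceeds 2.306 (or chi2 exceeds 7.815), the difference would be called statistically significant at the .05 level; in practice a statistics library would report the exact p value rather than a table lookup.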



Research design: the methodology for interdisciplinary research framework

  • Open access
  • Published: 27 April 2017
  • Volume 52, pages 1209–1225 (2018)


  • Hilde Tobi (ORCID: orcid.org/0000-0002-8804-0298)
  • Jarl K. Kampen


Many of today’s global scientific challenges require the joint involvement of researchers from different disciplinary backgrounds (social sciences, environmental sciences, climatology, medicine, etc.). Such interdisciplinary research teams face many challenges resulting from differences in training and scientific culture. Interdisciplinary education programs are required to train truly interdisciplinary scientists with respect to the critical factor skills and competences. For that purpose this paper presents the Methodology for Interdisciplinary Research (MIR) framework. The MIR framework was developed to help cross disciplinary borders, especially those between the natural sciences and the social sciences. The framework has been specifically constructed to facilitate the design of interdisciplinary scientific research, and can be applied in an educational program, as a reference for monitoring the phases of interdisciplinary research, and as a tool to design such research in a process approach. It is suitable for research projects of different sizes and levels of complexity, and it allows for a range of methods’ combinations (case study, mixed methods, etc.). The different phases of designing interdisciplinary research in the MIR framework are described and illustrated by real-life applications in teaching and research. We further discuss the framework’s utility in research design in landscape architecture, mixed methods research, and provide an outlook to the framework’s potential in inclusive interdisciplinary research, and last but not least, research integrity.


1 Introduction

Current challenges, e.g., energy, water, food security, one world health and urbanization, involve the interaction between humans and their environment. A (mono)disciplinary approach, be it a psychological, economic or technical one, is too limited to capture any one of these challenges. The study of the interaction between humans and their environment requires knowledge, ideas and research methodology from different disciplines (e.g., ecology or chemistry in the natural sciences, psychology or economy in the social sciences). So collaboration between the natural and social sciences is called for (Walsh et al. 1975 ).

Over the past decades, different forms of collaboration have been distinguished although the terminology used is diverse and ambiguous. For the present paper, the term interdisciplinary research is used for (Aboelela et al. 2007 , p. 341):

any study or group of studies undertaken by scholars from two or more distinct scientific disciplines. The research is based upon a conceptual model that links or integrates theoretical frameworks from those disciplines, uses study design and methodology that is not limited to any one field, and requires the use of perspectives and skills of the involved disciplines throughout multiple phases of the research process.

Scientific disciplines (e.g., ecology, chemistry, biology, psychology, sociology, economy, philosophy, linguistics, etc.) are categorized into distinct scientific cultures: the natural sciences, the social sciences and the humanities (Kagan 2009 ). Interdisciplinary research may involve different disciplines within a single scientific culture, and it can also cross cultural boundaries as in the study of humans and their environment.

A systematic review of the literature on natural-social science collaboration (Fischer et al. 2011 ) confirmed the general impression of this collaboration to be a challenge. The nearly 100 papers in their analytic set mentioned more instances of barriers than of opportunities (72 and 46, respectively). Four critical factors for success or failure in natural-social science collaboration were identified: the paradigms or epistemologies in the current (mono-disciplinary) sciences, the skills and competences of the scientists involved, the institutional context of the research, and the organization of collaborations (Fischer et al. 2011 ). The so-called “paradigm war” between neopositivist versus constructivists within the social and behavioral sciences (Onwuegbuzie and Leech 2005 ) may complicate pragmatic collaboration further.

It has been argued that interdisciplinary education programs are required to train truly interdisciplinary scientists with respect to the critical factor skills and competences (Frischknecht 2000 ) and accordingly, some interdisciplinary programs have been developed since (Baker and Little 2006 ; Spelt et al. 2009 ). The overall effect of interdisciplinary programs can be expected to be small as most programs are mono-disciplinary and based on a single paradigm (positivist-constructivist, qualitative-quantitative; see e.g., Onwuegbuzie and Leech 2005 ). In our methodology teaching, consultancy and research practice with heterogeneous groups of students and staff, we saw that most had received mono-disciplinary training and a minority multidisciplinary training, in almost all cases within a single paradigm. During our teaching and consultancy for heterogeneous groups of students and staff aimed at designing interdisciplinary research, we built the framework for methodology in interdisciplinary research (MIR). With the MIR framework, we aspire to contribute to the critical factors skills and competences (Fischer et al. 2011 ) for social and natural sciences collaboration. Note that the scale of interdisciplinary research projects we have in mind may vary from comparably modest ones (e.g., finding a link between noise reducing asphalt and quality of life; Vuye et al. 2016 ) to very large projects (finding a link between anthropogenic greenhouse gas emissions, climate change, and food security; IPCC 2015 ).

In the following section of this paper we describe the MIR framework and elaborate on its components. The third section gives two examples of the application of the MIR framework. The paper concludes with a discussion of the MIR framework in the broader contexts of mixed methods research, inclusive research, and other promising strains of research.

2 The methodology in interdisciplinary research framework

2.1 Research as a process in the methodology in interdisciplinary research framework

The Methodology for Interdisciplinary Research (MIR) framework was built on the process approach (Kumar 1999 ), because in the process approach, the research question or hypothesis is leading for all decisions in the various stages of research. That means that it helps the MIR framework to put the common goal of the researchers at the center, instead of the diversity of their respective backgrounds. The MIR framework also introduces an agenda: the research team needs to carefully think through different parts of the design of their study before starting its execution (Fig.  1 ). First, the team discusses the conceptual design of their study which contains the ‘why’ and ‘what’ of the research. Second, the team discusses the technical design of the study which contains the ‘how’ of the research. Only after the team agrees that the complete research design is sufficiently crystalized, the execution of the work (including fieldwork) starts.

Fig. 1: The Methodology of Interdisciplinary Research framework

Whereas the conceptual and technical designs are by definition interdisciplinary team work, the respective team members may do their (mono)disciplinary parts of fieldwork and data analysis on a modular basis (see Bruns et al. 2017 : p. 21). Finally, when all evidence is collected, an interdisciplinary synthesis of the analyses follows, whose conclusions are input for the final report. This implies that the MIR framework allows for a range of scales of research projects, e.g., a mixed methods project and its smaller qualitative and quantitative modules, or a multi-national sustainability project and its national sociological, economic and ecological modules.

2.2 The conceptual design

Interdisciplinary research design starts with the “conceptual design” which addresses the ‘why’ and ‘what’ of a research project at a conceptual level to ascertain the common goals pivotal to interdisciplinary collaboration (Fischer et al. 2011 ). The conceptual design mostly includes activities such as thinking, exchanging interdisciplinary knowledge, reading and discussing. The product of the conceptual design is called the “conceptual framework”, which comprises the research objective (what is to be achieved by the research), the theory or theories central to the research project, the research questions (what knowledge is to be produced), and the (partial) operationalization of the constructs and concepts that will be measured or recorded during execution. While the members of the interdisciplinary team and the commissioner of the research must reach a consensus about the research objective, the ‘why’, the focus in research design must be on the production of the knowledge required to achieve that objective, the ‘what’.

With respect to the ‘why’ of a research project, an interdisciplinary team typically starts with a general aim as requested by the commissioner or funding agency, and a set of theories to formulate a research objective. This role of theory is not always obvious to students from the natural sciences, who tend to think in terms of ‘models’ with directly observable variables. On the other hand, students from the social sciences tend to think in theories with little attention to observable variables. In the MIR framework, models as simplified descriptions or explanations of what is studied in the natural sciences play the same role in informing research design, raising research questions, and informing how a concept is understood, as do theories in social science.

Research questions concern concepts, i.e. general notions or ideas based on theory or common sense that are multifaceted and not directly visible or measurable. For example, neither food security (with its many different facets) nor a person’s attitude towards food storage may be directly observed. The operationalization of concepts, the transformation of concepts into observable indicators, in interdisciplinary research requires multiple steps, each informed by theory. For instance, in line with particular theoretical frameworks, sustainability and food security may be seen as the composite of a social, an economic and an ecological dimension (e.g., Godfray et al. 2010 ).

As the concept of interest is multi-disciplinary and multi-dimensional, the interdisciplinary team will need to read, discuss and decide on how these dimensions and their indicators are weighted to measure the composite interdisciplinary concept to get the required interdisciplinary measurements. The resulting measure or measures for the interdisciplinary concept may be of the nominal, ordinal, interval and ratio level, or a combination thereof. This operationalization procedure is known as the port-folio approach to widely defined measurements (Tobi 2014 ). Only after the research team has finalized the operationalization of the concepts under study, the research questions and hypotheses can be made operational. For example, a module with descriptive research questions may now be turned into an operational one like, what are the means and variances of X1, X2, and X3 in a given population? A causal research question may take on the form, is X (a composite of X1, X2 and X3) a plausible cause for the presence or absence of Y? A typical qualitative module could study, how do people talk about X1, X2 and X3 in their everyday lives?
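The weighting step described above can be sketched as follows. The dimension names (X1 social, X2 economic, X3 ecological), the scores and the weights are hypothetical, not taken from the paper, and the indicator scores are assumed to be already normalized to [0, 1]:

```python
def composite_score(indicators, weights):
    """Weighted composite of normalized indicator scores (a simple portfolio-style measure)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(indicators[name] * w for name, w in weights.items())

# Hypothetical normalized scores of one study region on each dimension
indicators = {"X1_social": 0.70, "X2_economic": 0.55, "X3_ecological": 0.40}

# Weighting a team might agree on after reading and discussion (hypothetical)
weights = {"X1_social": 0.4, "X2_economic": 0.3, "X3_ecological": 0.3}

score = composite_score(indicators, weights)  # an interval-level composite measure
```

Different theoretical framings would justify different weights; the point is that the team must make that choice explicit before operational research questions about X1, X2 and X3 can be posed.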

2.3 The technical design

Members of an interdisciplinary team usually have had different training with respect to research methods, which makes discussing and deciding on the technical design more challenging, but also potentially more creative, than in a mono-disciplinary team. The technical design addresses the issues of ‘how, where and when will research units be studied’ (study design), ‘how will measurement proceed’ (instrument selection or design), ‘how and how many research units will be recruited’ (sampling plan), and ‘how will collected data be analyzed and synthesized’ (analysis plan). The MIR framework provides the team with a set of topics and their relationships to one another and to generally accepted quality criteria (see Fig. 1), which helps in designing this part of the project.

Interdisciplinary teams need to be pragmatic: the agreed research questions, rather than traditional ‘pet’ approaches, should drive decisions on the data collection set-up (e.g., a cross-sectional study of inhabitants of a region, a laboratory experiment, a cohort study, a case control study, etc.), the so-called “study design” (e.g., Kumar 2014; De Vaus 2001; Adler and Clark 2011; Tobi and van den Brink 2017). The typical study design for descriptive research questions and research questions on associations is the cross-sectional design. Longitudinal study designs are required to investigate development over time, and cause-effect relationships are ideally studied in experiments (e.g., Kumar 2014; Shipley 2016). Phenomenological questions concern a phenomenon about which little is known and which has to be studied in the environment where it takes place, which calls for a case study design (e.g., Adler and Clark 2011: p. 178). For each module, the study design is to be further specified by the number of data collection waves, the level of control by the researcher, and its reference period (e.g., Kumar 2014) to ensure the team’s common understanding.

Next, decisions must be made about the way data are to be collected, e.g., by means of certified instruments, observation, interviews, questionnaires, queries on existing databases, or a combination of these. It is especially important to discuss the role of the observer (researcher), as this is often a source of misunderstanding in interdisciplinary teams. In the sciences, the observer is usually considered a neutral outsider when reading a standardized measurement instrument (e.g., a pyranometer to measure incoming solar radiation). In contrast, in the social sciences, the observer may be (part of) the measurement instrument, for example in participant observation or when doing in-depth interviews. After all, in participant observation the researcher observes from a member’s perspective and influences what is observed owing to the researcher’s participation (Flick 2006: p. 220). Similarly in interviews, by which we mean “a conversation that has a structure and a purpose determined by the one party—the interviewer” (Kvale 2007: p. 7), the interviewer and the interviewee are part of the measurement instrument (Kvale and Brinkmann 2009: p. 2). In online and mail questionnaires the interviewer is eliminated as part of the instrument by standardizing the questions and answer options. Queries on existing databases refer to the use of secondary data or secondary analysis. Different disciplines tend to use different bibliographic databases (e.g., CAB Abstracts, ABI/INFORM or ERIC) and different data repositories (e.g., the European Social Survey at europeansocialsurvey.org or the International Council for Science data repository hosted by www.pangaea.de ).

Depending on whether the available measurement instruments tally with the interdisciplinary operationalisations from the conceptual design, the research team may or may not need to design instruments. Note that in some cases the social scientists’ instinct may be to rely on a questionnaire, whereas collaboration with another discipline may open up more objective possibilities (e.g., compare asking people what they do with surplus medication versus measuring chemical components in their input into the sewer system). Instrument design may take on different forms, such as the design of a device (e.g., a pyranometer), a questionnaire (Dillman 2007) or a part thereof (e.g., a scale; see DeVellis 2012; Danner et al. 2016), an interview guide with topics or questions for the interviewees, or a data extraction form in the context of secondary analysis and literature review (e.g., the Cochrane Collaboration aiming at the health and medical sciences or the Campbell Collaboration aiming at evidence-based policies).
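One recurring technicality in questionnaire-scale design of the kind cited above is that negatively worded items must be reverse-coded before item responses are summed into a scale score. A minimal sketch, with hypothetical item names and responses:

```python
# Sketch of scoring a multi-item Likert scale: negatively worded items
# are reverse-coded before summing. Items and responses are invented
# for illustration; real scale construction involves much more
# (see DeVellis-style reliability and validity checks).

LIKERT_MAX = 5  # 5-point scale: 1 = strongly disagree ... 5 = strongly agree

def scale_score(responses, reverse_items):
    """Sum item responses, reverse-coding the listed items."""
    total = 0
    for item, value in responses.items():
        if item in reverse_items:
            value = LIKERT_MAX + 1 - value  # 1 <-> 5, 2 <-> 4, 3 stays 3
        total += value
    return total

responses = {"q1": 4, "q2": 2, "q3": 5}  # q2 is negatively worded
print(scale_score(responses, reverse_items={"q2"}))  # 4 + (6-2) + 5 = 13
```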

Researchers from different disciplines are inclined to think of different research objects (e.g., animals, humans or plots), which is where the (specific) research questions come in, as these identify the (possibly different) research objects unambiguously. In general, research questions that aim at making an inventory, whether of biodiversity or of lodging, call for a random sampling design. In both the biodiversity and the lodging example, one may opt for random sampling of geographic areas by means of a list of coordinates. Studies that aim to explain a particular phenomenon in a particular context call for a purposive sampling design (non-random selection). Because studies of biodiversity and housing follow the same rules for matching a sampling design to a research question, individual students and researchers are sensitized to commonalities between their respective (mono)disciplines. For example, a research team interested in the effects of landslides on a socio-ecological system may select for their study one village that suffered from landslides and one village that did not, but that has other characteristics in common with the first (e.g., kind of soil, land use, land property legislation, family structure, income distribution, et cetera).
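The two selection strategies contrasted above can be sketched side by side. The bounding box, the village attributes and the matching criterion are all invented for illustration:

```python
# Random sampling of coordinates (for an inventory) versus purposive,
# criterion-based selection of cases (for explanation in context).
# All data below are illustrative assumptions.

import random

def random_coordinate_sample(n, lat_range, lon_range, seed=42):
    """Draw n random (lat, lon) points inside a bounding box."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [(rng.uniform(*lat_range), rng.uniform(*lon_range))
            for _ in range(n)]

def purposive_select(cases, criterion):
    """Keep only cases satisfying a theoretically motivated criterion."""
    return [c for c in cases if criterion(c)]

points = random_coordinate_sample(5, (51.0, 52.0), (4.0, 6.0))

villages = [
    {"name": "A", "landslide": True,  "soil": "clay"},
    {"name": "B", "landslide": False, "soil": "clay"},
    {"name": "C", "landslide": False, "soil": "sand"},
]
# Select a comparison village: no landslide, but same soil as village A.
controls = purposive_select(
    villages, lambda v: not v["landslide"] and v["soil"] == "clay")
print([v["name"] for v in controls])  # ['B']
```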

The data analysis plan describes how data will be analysed, for each of the separate modules and for the project at large. In the context of a multi-disciplinary quantitative research project, the data analysis plan will list the intended uni-, bi- and multivariate analyses, such as measures for distributions (e.g., means and variances), measures for association (e.g., Pearson chi-square or Kendall tau) and data reduction and modelling techniques (e.g., factor analysis and multiple linear regression or structural equation modelling) for each of the research modules using the data collected. When applicable, it will describe interim analyses and follow-up rules. In addition to the plans at the modular level, the data analysis plan must describe how the input from the separate modules, i.e. the different analyses, will be synthesized to answer the overall research question. In case of mixed methods research, the particular type of mixed methods design chosen describes how, when, and to what extent the team will synthesize the results from the different modules.
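To illustrate one entry such a plan might name, here is a minimal from-scratch computation of a rank-based measure of association. This is Kendall's tau-a (no tie correction), and the data are invented; in practice one would use a statistical package rather than hand-rolled code.

```python
# Kendall's tau-a, computed directly from concordant and discordant
# pairs: tau = (concordant - discordant) / total pairs. No correction
# for ties, so this is only a teaching sketch for untied ranks.

from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau-a between two equal-length sequences of ranks."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(list(zip(x, y)), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

x = [1, 2, 3, 4, 5]
y = [1, 3, 2, 4, 5]
print(kendall_tau_a(x, y))  # 0.8: one discordant pair out of ten
```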

Unfortunately, in our experience, when some of the research modules rely on a qualitative approach, teams tend to refrain from designing a data analysis plan before starting the field work. While the absence of a data analysis plan may be regarded as acceptable in fields that rely exclusively on qualitative research (e.g., ethnography), failure to communicate how data will be analysed and what potential evidence will be produced deals a deathblow to interdisciplinarity. For many researchers not familiar with qualitative research, the black box presented as “qualitative data analysis” is a big hurdle, and a transparent and systematic plan is a sine qua non for any scientific collaboration. The absence of a data analysis plan for all modules results in an absence of synthesis of the perspectives and skills of the disciplines involved, and in separate (disciplinary) research papers or separate chapters in the research report without an answer to the overall research question. So, although researchers may find it hard to write a data analysis plan for qualitative data, it is pivotal in interdisciplinary research teams.

Similar to the quantitative data analysis plan, the qualitative data analysis plan describes how the researcher will get acquainted with the data collected (e.g., by constructing a narrative summary per interviewee or a paired comparison of essays). Additionally, the rules for deciding on data saturation need to be presented. Finally, the types of qualitative analyses are to be described in the data analysis plan. Because there is little or no standardized terminology in qualitative data analysis, it is important to include a precise description as well as references to the works that describe the intended method (e.g., domain analysis as described by Spradley 1979; or grounded theory by means of constant comparison as described by Boeije 2009).

2.4 Integration

To benefit optimally from the research being interdisciplinary, the modules need to be brought together in the integration stage. The modules may be mono- or interdisciplinary and may rely on quantitative, qualitative or mixed methods approaches. The MIR framework thus fits the view that distinguishes three multimethod approaches (quali–quali, quanti–quanti, and quali–quanti).

Although the MIR framework was not designed with the intention to promote mixed methods research, it is suitable for the design of mixed methods research as the kind of research that calls for both quantitative and qualitative components (Creswell and Plano Clark 2011). Indeed, just like the pioneers in mixed methods research (Creswell and Plano Clark 2011: p. 2), the MIR framework deconstructs the package deals of paradigm and data to be collected. The synthesis of the different mono- or interdisciplinary modules may benefit from research done on “the unique challenges and possibilities of integration of qualitative and quantitative approaches” (Fetters and Molina-Azorin 2017: p. 5). We distinguish (sub)sets of modules designed as convergent, sequential or embedded (adapted from mixed methods design, e.g., Creswell and Plano Clark 2011: pp. 69–70). Convergent modules, whether mono- or interdisciplinary, may be done in parallel and are integrated after completion. Sequential modules are done one after another, and the earlier modules inform the later ones (this includes transformative and multiphase mixed methods designs). Embedded modules are intertwined: modules depend on one another for data collection and analysis, and synthesis may be planned both during and after completion of the embedded modules.

2.5 Scientific quality and ethical considerations in the design of interdisciplinary research

A minimum set of jargon related to the assessment of the scientific quality of research (e.g., triangulation, validity, reliability, saturation, etc.) can be found scattered in Fig. 1. Some terms are reserved by particular paradigms; others may be seen in several paradigms with more or less subtle differences in meaning. In the latter case, it is important that team members are prepared to explain and share ownership of the term and respect the different meanings. By paying explicit attention to the quality concepts, researchers from different disciplines learn to appreciate each other’s concerns for good quality research and recognize commonalities. For example, the team may discuss the measurement validity of both a standardized quantitative instrument and an interview, and discover that the calibration of the machine serves a similar purpose as the confirmation of the guarantee of anonymity at the start of an interview.

Throughout the process of research design, ethics require explicit discussion among all stakeholders in the project. Ethical issues run through all components in the MIR framework in Fig.  1 . Where social and medical scientists may be more sensitive to ethical issues related to humans (e.g., the 1979 Belmont Report criteria of beneficence, justice, and respect), others may be more sensitive to issues related to animal welfare, ecology, legislation, the funding agency (e.g., implications for policy), data and information sharing (e.g., open access publishing), sloppy research practices, or long term consequences of the research. This is why ethics are an issue for the entire interdisciplinary team and cannot be discussed on project module level only.

3 The MIR framework in practice: two examples

3.1 Teaching research methodology to heterogeneous groups of students

3.1.1 Institutional context and background of the MIR framework

Wageningen University and Research (WUR) advocates in its teaching and research an interdisciplinary approach to the study of global issues related to its motto “To explore the potential of nature to improve the quality of life.” Wageningen University’s student population is multidisciplinary and international (e.g., Tobi and Kampen 2013). Traditionally, this challenge of diversity in one classroom was met by covering a wide range of methodological topics and examples from different disciplines. However, when students of various programmes received methodological education in mixed classes, students of some disciplines would regard the methods and techniques of the other disciplines with disinterest or even disdain. Different disciplines, especially those from the qualitative and quantitative traditions in the social sciences (Onwuegbuzie and Leech 2005: p. 273), claim certain study designs and methods of data collection and analysis as their territory, a claim reflected in many textbooks. We found that students from a qualitative tradition would not be interested in, and would not even study, content like the design of experiments and quantitative data collection, while students from a quantitative tradition would ignore case study design and qualitative data collection. These students assumed they did not need any knowledge about ‘the other tradition’ for their future careers, despite the call for interdisciplinarity.

To enhance interdisciplinarity, WUR provides an MSc course, mandatory for most students, in which multi-disciplinary teams do research for a commissioner. Students reported difficulties similar to the ones found in the literature: miscommunication due to talking different scientific languages, and feelings of distrust and disrespect due to prejudice. This suggested that research methodology courses ought to help prepare for interdisciplinary collaboration by introducing a single methodological framework that 1) creates sensitivity to the pros and challenges of interdisciplinary research by means of a common vocabulary and fosters respect for other disciplines, 2) starts from the research questions as pivotal in decision making on research methods, instead of tradition or ontology, and 3) allows available methodologies and methods to be potentially applicable to any scientific research problem.

3.1.2 Teaching with MIR—the conceptual framework

As a first step, we replaced the textbooks with ones that reject the idea that any scientific tradition has exclusive ownership of any methodological approach or method. The MIR framework further guides our methodology teaching in two ways. First, it presents a logical sequence of topics (first the conceptual design, then the technical design; first the research question(s) or hypotheses, then the study design; etc.). Second, it allows for a conceptual separation of topics (e.g., study design from instrument design). Educational programmes at Wageningen University and Research consistently stress the vital importance of good research design. In fact, 50% of the mark in most BSc and MSc courses in research methodology is based on the assessment of a research proposal that students design in small (2-4 students) and heterogeneous (discipline, gender and nationality) groups. The research proposal must describe a project that can be executed in practice, and whose limitations (measurement, internal, and external validity) are carefully discussed.

Groups start by selecting a general research topic. Together they discuss courses they previously completed in a range of programmes to identify personal and group interests, with the aim of reaching an initial research objective and a general research question as input for the conceptual design. Often, their initial research objective and research question are too broad to be researchable (e.g., Kumar 2014: p. 64; Adler and Clark 2011: p. 71). In plenary sessions, the (basics of) critical assessment of empirical research papers is taught, with special attention to the ‘what’ and ‘why’ sections of research papers. During tutorials, students generate research questions until the group agrees on a research objective, with one general research question comprising a small set of specific research questions. Each of the specific research questions may stem from a different discipline, whereas answering the general research question requires integrating the answers to all specific research questions.

The group then identifies the key concepts in their research questions, while exchanging thoughts on possible attributes based on what they have learnt from previous courses (theories) and literature. When doing so they may judge the research question as too broad, in which case they will turn to the question strategies toolbox again. Once they agree on the formulation of the research questions and the choice of concepts, tasks are divided. In general, each student turns to the literature he/she is most familiar with or interested in, for the operationalization of the concept into measurable attributes and writes a paragraph or two about it. In the next meeting, the groups read and discuss the input and decide on the set-up and division of tasks with respect to the technical design.

3.1.3 Teaching with MIR—the technical framework

The technical part of research design distinguishes between study design, instrument design, sampling design, and the data analysis plan. In class, we first present students with a range of study designs (cross-sectional, experimental, etc.). Student groups select an appropriate study design by comparing the demands made by the research questions with criteria for internal validity. When a (specific) research question calls for a study design that is not seen as practically feasible or ethically possible, they rephrase the research question until its demands tally with the characteristics of at least one ethical, feasible and internally valid study design.

While following plenary sessions during which different random and non-random sampling or selection strategies are taught, groups start working on their sampling design. The groups make two decisions informed by their research question: the population(s) of research units, and the requirements of the sampling strategy for each population. Like many other aspects of research design, this can be an iterative process. For example, suppose the research question mentioned “local policy makers,” which is too vague for a sampling design. Then the decision may be to limit the study to “policy makers at the municipality level in the Netherlands” and adapt the general and the specific research questions accordingly. Next, the group identifies whether the sampling design needs to focus on diversity (e.g., when the objective is to make an inventory of possible local policies), representativeness (e.g., when the objective is to estimate the prevalence of types of local policies), or people with particular information (e.g., when the objective is to study people with experience of a given local policy). When a sample has to be representative, the students must produce an assessment of external validity, whereas when the aim is to map diversity they must discuss possible ways of source triangulation. Finally, in conjunction with the data analysis plan, students decide on the sample size and/or the saturation criteria.
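For the representativeness case, the sample-size decision often starts from the standard normal-approximation formula for estimating a proportion within a given margin of error. A minimal sketch; the margin and confidence level below are illustrative defaults, not course requirements:

```python
# Sample size for estimating a proportion p with margin of error e:
# n = z^2 * p(1-p) / e^2, using the conservative p = 0.5 and rounding
# up. Normal approximation only; finite-population correction omitted.

import math
from statistics import NormalDist

def sample_size_proportion(margin, confidence=0.95, p=0.5):
    """Minimum n for a two-sided confidence interval of half-width `margin`."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g. 1.96 for 95%
    return math.ceil(z * z * p * (1 - p) / margin ** 2)

print(sample_size_proportion(0.05))  # 385: 5% margin at 95% confidence
```

Tightening the margin to 3% roughly triples the required sample, which is exactly the kind of trade-off the groups must weigh against their recruitment budget.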

When the group has agreed on their population(s) and the strategy for recruiting research units, the next step is to finalize the technical aspects of operationalisation, i.e., addressing the issue of exactly how information will be extracted from the research units. Depending on what is practically feasible in terms of measurement, the data collection instrument chosen may be a standardised one (e.g., a spectrograph, a questionnaire) or a less standardised one (e.g., semi-structured interviews, visual inspection). The students have to discuss the possibilities of method triangulation, and explain the possible weaknesses of their data collection plan in terms of measurement validity and reliability.

3.1.4 Recent developments

Presently, little attention is paid to the data analysis plan, procedures for synthesis, and reporting, because the programmes differ in what they offer in data analysis courses, and because execution of the research is not part of the BSc and MSc methodology courses. Recently, we designed a course for an interdisciplinary BSc programme in which the research question is central to learning and deciding on statistics and qualitative data analysis. Over the past years, moreover, the number of methodology courses for graduate students that support the MIR framework has been expanded, e.g., a course “From Topic to Proposal”; separate training modules on questionnaire construction, interviewing, and observation; and optional courses on quantitative and qualitative data analysis. These courses are open to (and attended by) PhD students regardless of their programme. In Flanders (Belgium), the Flemish Training Network for Statistics and Methodology (FLAMES) has for the last four years successfully applied the approach outlined in Fig. 1 in its courses on research design and data collection methods. The division of the research process into a conceptual design, technical design, operationalisation, analysis plan, and sampling plan has proved to be appealing to students of disciplines ranging from linguistics to bioengineering.

3.2 Researching with MIR: noise reducing asphalt layers and quality of life

3.2.1 Research objective and research question

This example of the application of the MIR framework comes from a study of the effects of “noise reducing asphalt layers” on quality of life (Vuye et al. 2016), a project commissioned by the City of Antwerp in 2015 and executed by a multidisciplinary research team of Antwerp University (Belgium). The principal researcher was an engineer from the Faculty of Applied Engineering (dept. Construction), supported by two researchers from the Faculty of Medicine and Health Sciences (dept. of Epidemiology and Social Statistics), one with a background in qualitative and one with a background in quantitative research methods. A number of meetings were held in which the research team and the commissioners discussed the research objective (the ‘what’ and ‘why’). The research objective was in part dictated by the European Noise Directive 2002/49/EC, which requires all EU member states to draft noise action plans, and the challenge in this study was to produce evidence of a link between the acoustic and mechanical properties of different types of asphalt and the quality of life of people living in the vicinity of the treated roads. While literature was available about the effects of road surface on sound, and other studies had examined the link between noise and health, no study was found that produced evidence simultaneously about the noise levels of roads and quality of life. The team therefore decided to make the hypothesis that traffic noise reduction has a beneficial effect on people’s quality of life central to the research. The general research question was: “to what extent does the placing of noise reducing asphalt layers increase the quality of life of the residents?”

3.2.2 Study design

In order to test the effect of types of asphalt, a pretest–posttest experiment was initially designed, which was expanded with several additional experimental (change of road surface) and control (no change of road surface) groups. The research team gradually became aware that quality of life may not be instantly affected by lower noise levels, and that a time lag is involved. A second posttest aimed to follow up on this effect, although it could only be implemented at a selection of experimental sites.

3.2.3 Instrument selection and design

Sound pressure levels were measured by an ISO-standardized procedure called the Statistical Pass-By (SPB) method; a detailed description of the method is given in Vuye et al. (2016). No such objective procedure is available for measuring quality of life, which can only be assessed by self-reports of the residents. Some time was needed for the research team to accept that measuring a multidimensional concept like quality of life is more complicated than just having people rate their “quality of life” on a 10-point scale. For instance, questions had to be phrased in a way that did not give away the purpose of the research (Hawthorne effect), leading to the inclusion of questions about more nuisances than traffic noise alone. This led to the design of a self-administered questionnaire, with questions from the Flanders Survey on Living Environment (Departement Leefmilieu, Natuur & Energie 2013) supplemented by new questions. Among other things, the questionnaire probed for experienced noise nuisance, quality of sleep, effort to concentrate, effort to have a conversation inside or outside the home, physical complaints such as headaches, etc.

3.2.4 Sampling design

The selected sites needed to accommodate both types of measurements: that of noise from traffic and that of the quality of life of residents. This was a complicating factor that required several rounds of deliberation. While countrywide only certain roads were available for changing the road surface, these roads had to be mutually comparable in terms of the composition of the population, type of residential area (e.g., reports from the top floor of a tall apartment building cannot be compared to those at ground level), average volume of traffic, vicinity of hospitals, railroads and airports, etc. At the level of roads, therefore, targeted sampling was applied, whereas at the level of residents the aim was to realize a census of all households within a given perimeter of the treated road surfaces. Considerations about the reliability of the instruments guided decisions with respect to sampling. While the measurements of the SPB method were sufficiently reliable to allow for relatively few measurements, the questionnaire suffered from considerable nonresponse, which hampered statistical power. It was therefore decided to increase the power of the study by adding control groups in areas where the road surface was not replaced. This way, detecting an effect of the intervention did not solely depend on the turnout of the pre- and the posttest.
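The power consideration behind enlarging the design can be illustrated with the standard normal-approximation power formula for a two-sample comparison. The effect size and group sizes below are illustrative assumptions, not the study's actual figures:

```python
# Approximate power of a two-sided, two-sample comparison of means
# (normal approximation to the t-test). Shows how shrinking groups,
# e.g. through nonresponse, erode power, and why larger groups help.

import math
from statistics import NormalDist

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Power to detect a standardized mean difference `effect_size`."""
    norm = NormalDist()
    z_crit = norm.inv_cdf(1 - alpha / 2)
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    # Probability of landing in either rejection region under H1.
    return norm.cdf(noncentrality - z_crit) + norm.cdf(-noncentrality - z_crit)

small = two_sample_power(0.5, 30)   # ~0.49: underpowered groups
large = two_sample_power(0.5, 120)  # ~0.97: after enlarging the groups
print(round(small, 2), round(large, 2))
```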

3.2.5 Data analysis plan

The statistical analysis had to account for the fact that data were collected at two different levels: the level of the residents filling out the questionnaires, and the level of the roads whose surface was changed. Because survey participation was confidential, results of the pre- and posttest could only be compared at the aggregate (street) level. The analysis had to control for confounding variables (e.g., sample composition, variation in traffic volume, etc.), experimental factors (varieties in experimental conditions, and controls), and non-normal dependent variables. The statistical model appropriate for the analysis of such data is a Generalised Linear Mixed Model.
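The aggregation step described above can be sketched as follows: confidential individual scores are averaged per street, and only street-level means are compared across waves. The streets and scores are invented; the actual study fitted a Generalised Linear Mixed Model on top of this structure, which this sketch does not attempt to reproduce.

```python
# Minimal sketch of comparing pre- and posttest results at the
# aggregate (street) level: individual questionnaire scores are
# reduced to street means before waves are compared. Data invented.

from statistics import mean

pretest = {"street_a": [6, 5, 7, 4], "street_b": [3, 4, 4]}
posttest = {"street_a": [7, 7, 8],   "street_b": [4, 3, 5, 4]}

def street_level_change(pre, post):
    """Per-street difference of mean scores between the two waves."""
    return {s: mean(post[s]) - mean(pre[s]) for s in pre if s in post}

print(street_level_change(pretest, posttest))
# street_a ≈ +1.83, street_b ≈ +0.33
```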

3.2.6 Execution

Data were collected during the course of 2015, 2016 and 2017 and are awaiting final analysis in Spring 2017. Intermediate analyses resulted in several MSc theses, conference presentations, and working papers that reported on parts of the research.

4 Discussion

In this paper we presented the Methodology in Interdisciplinary Research framework that we developed over the past decade, building on our experience as lecturers, consultants and researchers. The MIR framework recognizes research methodology and methods as important content within the critical factor ‘skills and competences’. It approaches research and collaboration as a process that needs to be designed with the sole purpose of answering the general research question. For the conceptual design, the team members have to discuss and agree on the objective of their communal efforts without squeezing it into one single discipline and, thus, ignoring complexity. The specific research questions, once formulated, contribute to (self-)respect in the collaboration as they represent and stand witness to the need for interdisciplinarity. In the technical design, different parts were distinguished to stimulate researchers to think and design research outside their respective disciplinary boxes and consider, for example, an experimental design with qualitative data collection, or a case study design based on quantitative information.

In our teaching and consultancy, we first developed a MIR framework for interdisciplinarity across the social sciences, economics, health and environmental sciences. It was then challenged to include research in the design discipline of landscape architecture. What characterizes research in landscape architecture and other design disciplines is that the design product as well as the design process may be the object of study. Lenzholzer et al. (2017) therefore distinguish three kinds of research in landscape architecture. The first kind, “research into design,” studies the design product post hoc, and the MIR framework suits the interdisciplinary study of such a product. In contrast, “research for design” generates knowledge that feeds into the noun and the verb ‘design’, which means it precedes the design(ing). The third kind, “research through design(ing),” employs designing as a research method. At first, just like Deming and Swaffield (2011), we were somewhat skeptical about “designing” as a research method. Lenzholzer et al. (2017) posit that the meaning of research through design has evolved through (neo)positivist, constructivist and transformative paradigms to include a pragmatic stance that resembles the pragmatic stance assumed in the MIR framework. We learned that, because landscape architecture is such an interdisciplinary field, the process approach and the distinction between a conceptual and a technical research design were considered very helpful and were embraced by researchers in landscape architecture (Tobi and van den Brink 2017).

Mixed methods research (MMR) has been used to study topics as diverse as education (e.g., Powell et al. 2008), environmental management (e.g., Molina-Azorin and Lopez-Gamero 2016), health psychology (e.g., Bishop 2015) and information systems (e.g., Venkatesh et al. 2013). Nonetheless, the MIR framework is the first to put MMR in the context of integrating disciplines beyond social inquiry (Greene 2008). The splitting of the research into modules stimulates the identification and recognition of the contributions of both distinct and collaborating disciplines, irrespective of whether they contribute qualitative and/or quantitative research to the interdisciplinary research design. As mentioned in Sect. 2.4, the integration of the different research modules in one interdisciplinary project design may follow one of the mixed methods designs. For example, we witnessed on several occasions the integration of the social and health sciences in interdisciplinary teams opting for sequential modules in a sequential exploratory mixed methods fashion (e.g., Adamson 2005: 234). In sustainability science research, we have seen the design of concurrent modules for a concurrent nested mixed methods strategy (ibid) in research integrating the social and natural sciences and economics.

The limitations of the MIR framework are those of any kind of collaboration: it cannot work wonders in the absence of awareness of its necessity, and it requires the willingness to work, learn, and research together. We developed the MIR framework in and alongside our own teaching, consultancy and research; it has not been formally evaluated in an experiment comparing it with, for example, the regulative cycle for problem solving (van Strien 1986 ) or the wheel of science of Babbie ( 2013 ). In fact, although we wrote "developed" in the previous sentence, we are fully aware of the need to further develop and refine the framework.

The importance of the MIR framework lies in the complex, multifaceted nature of issues like sustainability, food security and one world health. For progress in the study of these pressing issues, the understanding, construction and quality of interdisciplinary portfolio measurements (Tobi 2014 ) are pivotal and require further study, as do procedures facilitating integration across different disciplines.

Another important strand of further research relates to the continuum of Responsible Conduct of Research (RCR), Questionable Research Practices (QRP), and deliberate misconduct (Steneck 2006 ). QRP includes failing to report all of a study's conditions, stopping data collection earlier than planned because one found the result one had been looking for, etc. (e.g., John et al. 2012 ; Simmons et al. 2011 ; Kampen and Tamás 2014 ). A meta-analysis of self-reports obtained through surveys revealed that about 2% of researchers had admitted to research misconduct at least once, whereas up to 33% admitted to QRPs (Fanelli 2009 ). While the frequency of QRPs may easily eclipse that of deliberate fraud (John et al. 2012 ), these practices have received less attention than deliberate misconduct. Claimed research findings may often be accurate measures of the prevailing biases and methodological rigor in a research field (Fanelli and Ioannidis 2013 ; Fanelli 2010 ). If research misconduct and QRP are to be understood, then the disciplinary context must be grasped as a locus of both legitimate and illegitimate activity (Fox 1990 ). It would be valuable to investigate how working in interdisciplinary teams and, consequently, exposure to other standards of QRP and RCR influence research integrity as the appropriate research behavior from the perspective of different professional standards (Steneck 2006 : p. 56). These differences in scientific cultures concern criteria for quality in the design and execution of research, reporting (e.g., criteria for authorship of a paper, preferred publication outlets, citation practices, etc.), the archiving and sharing of data, and so on.

Other strands of research include interdisciplinary collaboration and negotiation, where we expect contributions from the "science of team science" (Falk-Krzesinski et al. 2010 ), and the compatibility of the MIR framework with new research paradigms such as "inclusive research" (a mode of research involving people with intellectual disabilities as more than just objects of research; e.g., Walmsley and Johnson 2003 ). Because of the complexity and novelty of inclusive health research, a consensus statement was developed on how to conduct health research inclusively (Frankena et al., under review). The eight attributes of inclusive health research identified there may also be taken as guiding attributes in the design of inclusive research according to the MIR framework. For starters, there is the possibility of inclusiveness in the conceptual framework, particularly in determining research objectives and in discussing possible theoretical frameworks with team members with an intellectual disability, which Frankena et al. labelled the "designing the study" attribute. There are also opportunities for inclusiveness in the technical design and in execution. For example, the inclusiveness attribute "generating data" overlaps with operationalization and measurement instrument design/selection, and the attribute "analyzing data" aligns with the data analysis plan in the technical design.

On a final note, we hope to have aroused the reader’s interest in, and to have demonstrated the need for, a methodology for interdisciplinary research design. We further hope that the MIR framework proposed and explained in this article helps those involved in designing an interdisciplinary research project to get a clearer view of the various processes that must be secured during the project’s design and execution. And we look forward to further collaboration with scientists from all cultures to contribute to improving the MIR framework and make interdisciplinary collaborations successful.

Aboelela, S.W., Larson, E., Bakken, S., Carrasquillo, O., Formicola, A., Glied, S.A., Gebbie, K.M.: Defining interdisciplinary research: conclusions from a critical review of the literature. Health Serv. Res. 42 (1), 329–346 (2007)


Adamson, J.: Combined qualitative and quantitative designs. In: Bowling, A., Ebrahim, S. (eds.) Handbook of Health Research Methods: Investigation, Measurement and Analysis, pp. 230–245. Open University Press, Maidenhead (2005)


Adler, E.S., Clark, R.: An Invitation to Social Research: How it’s Done, 4th edn. Sage, London (2011)

Babbie, E.R.: The Practice of Social Research, 13th edn. Wadsworth Cengage Learning, Belmont Ca (2013)

Baker, G.H., Little, R.G.: Enhancing homeland security: development of a course on critical infrastructure systems. J. Homel. Secur. Emerg. Manag. (2006). doi: 10.2202/1547-7355.1263

Bishop, F.L.: Using mixed methods research designs in health psychology: an illustrated discussion from a pragmatist perspective. Br. J. Health. Psychol. 20 (1), 5–20 (2015)

Boeije, H.R.: Analysis in Qualitative Research. Sage, London (2009)

Bruns, D., van den Brink, A., Tobi, H., Bell, S.: Advancing landscape architecture research. In: van den Brink, A., Bruns, D., Tobi, H., Bell, S. (eds.) Research in Landscape Architecture: Methods And Methodology, pp. 11–23. Routledge, New York (2017)

Creswell, J.W., Plano Clark, V.L.: Designing and Conducting Mixed Methods Research, 2nd edn. Sage, Los Angeles (2011)

Danner, D., Blasius, J., Breyer, B., Eifler, S., Menold, N., Paulhus, D.L., Ziegler, M.: Current challenges, new developments, and future directions in scale construction. Eur. J. Psychol. Assess. 32 (3), 175–180 (2016)

Deming, M.E., Swaffield, S.: Landscape Architecture Research. Wiley, Hoboken (2011)

Departement Leefmilieu, Natuur en Energie: Uitvoeren van een uitgebreide schriftelijke enquête en een beperkte CAWI-enquête ter bepaling van het percentage gehinderden door geur, geluid en licht in Vlaanderen–SLO-3. Leuven: Market Analysis & Synthesis. www.lne.be/sites/default/files/atoms/files/lne-slo-3-eindrapport.pdf (2013). Accessed 8 March 2017

De Vaus, D.: Research Design in Social Research. Sage, London (2001)

DeVellis, R.F.: Scale Development: Theory and Applications, 3rd edn. Sage, Los Angeles (2012)

Dillman, D.A.: Mail and Internet Surveys, 2nd edn. Wiley, Hoboken (2007)

Falk-Krzesinski, H.J., Borner, K., Contractor, N., Fiore, S.M., Hall, K.L., Keyton, J., Uzzi, B., et al.: Advancing the science of team science. CTS Clin. Transl. Sci. 3 (5), 263–266 (2010)

Fanelli, D.: How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE (2009). doi: 10.1371/journal.pone.0005738

Fanelli, D.: Positive results increase down the hierarchy of the sciences. PLoS ONE (2010). doi: 10.1371/journal.pone.0010068

Fanelli, D., Ioannidis, J.P.A.: US studies may overestimate effect sizes in softer research. Proc. Natl. Acad. Sci. USA 110 (37), 15031–15036 (2013)

Fetters, M.D., Molina-Azorin, J.F.: The journal of mixed methods research starts a new decade: principles for bringing in the new and divesting of the old language of the field. J. Mixed Methods Res. 11 (1), 3–10 (2017)

Fischer, A.R.H., Tobi, H., Ronteltap, A.: When natural met social: a review of collaboration between the natural and social sciences. Interdiscip. Sci. Rev. 36 (4), 341–358 (2011)

Flick, U.: An Introduction to Qualitative Research, 3rd edn. Sage, London (2006)

Fox, M.F.: Fraud, ethics, and the disciplinary contexts of science and scholarship. Am. Sociol. 21 (1), 67–71 (1990)

Frischknecht, P.M.: Environmental science education at the Swiss Federal Institute of Technology (ETH). Water Sci. Technol. 41 (2), 31–36 (2000)

Godfray, H.C.J., Beddington, J.R., Crute, I.R., Haddad, L., Lawrence, D., Muir, J.F., Pretty, J., Robinson, S., Thomas, S.M., Toulmin, C.: Food security: the challenge of feeding 9 billion people. Science 327 (5967), 812–818 (2010)

Greene, J.C.: Is mixed methods social inquiry a distinctive methodology? J. Mixed Methods Res. 2 (1), 7–22 (2008)

IPCC.: Climate Change 2014 Synthesis Report. Geneva: Intergovernmental Panel on Climate Change. www.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full_wcover.pdf (2015) Accessed 8 March 2017

John, L.K., Loewenstein, G., Prelec, D.: Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 23 (5), 524–532 (2012)

Kagan, J.: The Three Cultures: Natural Sciences, Social Sciences and the Humanities in the 21st Century. Cambridge University Press, Cambridge (2009)


Kampen, J.K., Tamás, P.: Should I take this seriously? A simple checklist for calling bullshit on policy supporting research. Qual. Quant. 48 , 1213–1223 (2014)

Kumar, R.: Research Methodology: A Step-by-Step Guide for Beginners, 1st edn. Sage, Los Angeles (1999)

Kumar, R.: Research Methodology: A Step-by-Step Guide for Beginners, 4th edn. Sage, Los Angeles (2014)

Kvale, S.: Doing Interviews. Sage, London (2007)

Kvale, S., Brinkmann, S.: Interviews: Learning the Craft of Qualitative Interviews, 2nd edn. Sage, London (2009)

Lenzholder, S., Duchhart, I., van den Brink, A.: The relationship between research and design. In: van den Brink, A., Bruns, D., Tobi, H., Bell, S. (eds.) Research in Landscape Architecture: Methods and Methodology, pp. 54–64. Routledge, New York (2017)

Molina-Azorin, J.F., Lopez-Gamero, M.D.: Mixed methods studies in environmental management research: prevalence, purposes and designs. Bus. Strateg. Environ. 25 (2), 134–148 (2016)

Onwuegbuzie, A.J., Leech, N.L.: Taking the “Q” out of research: teaching research methodology courses without the divide between quantitative and qualitative paradigms. Qual. Quant. 39 (3), 267–296 (2005)

Powell, H., Mihalas, S., Onwuegbuzie, A.J., Suldo, S., Daley, C.E.: Mixed methods research in school psychology: a mixed methods investigation of trends in the literature. Psychol. Sch. 45 (4), 291–309 (2008)

Shipley, B.: Cause and Correlation in Biology, 2nd edn. Cambridge University Press, Cambridge (2016)

Simmons, J.P., Nelson, L.D., Simonsohn, U.: False positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22 , 1359–1366 (2011)

Spelt, E.J.H., Biemans, H.J.A., Tobi, H., Luning, P.A., Mulder, M.: Teaching and learning in interdisciplinary higher education: a systematic review. Educ. Psychol. Rev. 21 (4), 365–378 (2009)

Spradley, J.P.: The Ethnographic Interview. Holt, Rinehart and Winston, New York (1979)

Steneck, N.H.: Fostering integrity in research: definitions, current knowledge, and future directions. Sci. Eng. Eth. 12 (1), 53–74 (2006)

Tobi, H.: Measurement in interdisciplinary research: the contributions of widely-defined measurement and portfolio representations. Measurement 48 , 228–231 (2014)

Tobi, H., Kampen, J.K.: Survey error in an international context: an empirical assessment of cross-cultural differences regarding scale effects. Qual. Quant. 47 (1), 553–559 (2013)

Tobi, H., van den Brink, A.: A process approach to research in landscape architecture. In: van den Brink, A., Bruns, D., Tobi, H., Bell, S. (eds.) Research in Landscape Architecture: Methods and Methodology, pp. 24–34. Routledge, New York (2017)

van Strien, P.J.: Praktijk als wetenschap: Methodologie van het sociaal-wetenschappelijk handelen [Practice as science. Methodology of social scientific acting.]. Van Gorcum, Assen (1986)

Venkatesh, V., Brown, S.A., Bala, H.: Bridging the qualitative-quantitative divide: guidelines for conducting mixed methods research in information systems. MIS Q 37 (1), 21–54 (2013)

Vuye, C., Bergiers, A., Vanhooreweder, B.: The acoustical durability of thin noise reducing asphalt layers. Coatings (2016). doi: 10.3390/coatings6020021

Walmsley, J., Johnson, K.: Inclusive Research with People with Learning Disabilities: Past, Present and Futures. Jessica Kingsley, London (2003)

Walsh, W.B., Smith, G.L., London, M.: Developing an interface between engineering and social sciences- interdisciplinary team-approach to solving societal problems. Am. Psychol. 30 (11), 1067–1071 (1975)


Acknowledgements

The MIR framework is the result of many discussions with students, researchers and colleagues, with special thanks to Peter Tamás, Jennifer Barrett, Loes Maas, Giel Dik, Ruud Zaalberg, Jurian Meijering, Vanessa Torres van Grinsven, Matthijs Brink, Gerda Casimir, and, last but not least, Jenneken Naaldenberg.

Author information

Authors and Affiliations

Biometris, Wageningen University and Research, PO Box 16, 6700 AA, Wageningen, The Netherlands

Hilde Tobi & Jarl K. Kampen

Statua, Dept. of Epidemiology and Medical Statistics, Antwerp University, Venusstraat 35, 2000, Antwerp, Belgium

Jarl K. Kampen


Corresponding author

Correspondence to Hilde Tobi .

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Tobi, H., Kampen, J.K. Research design: the methodology for interdisciplinary research framework. Qual Quant 52 , 1209–1225 (2018). https://doi.org/10.1007/s11135-017-0513-8


Published : 27 April 2017

Issue Date : May 2018

DOI : https://doi.org/10.1007/s11135-017-0513-8


  • Research design
  • Interdisciplinarity
  • Consultancy
  • Multidisciplinary collaboration
  • Open access
  • Published: 30 August 2022

Barriers and facilitators to clinical behaviour change by primary care practitioners: a theory-informed systematic review of reviews using the Theoretical Domains Framework and Behaviour Change Wheel

  • Melissa Mather   ORCID: orcid.org/0000-0001-9746-0131 1 ,
  • Luisa M. Pettigrew 2 , 3 &
  • Stefan Navaratnam 4  

Systematic Reviews volume  11 , Article number:  180 ( 2022 )


Understanding the barriers and facilitators to behaviour change by primary care practitioners (PCPs) is vital to inform the design and implementation of successful Behaviour Change Interventions (BCIs), embed evidence-based medicine into routine clinical practice, and improve quality of care and population health outcomes.

A theory-led systematic review of reviews examining barriers and facilitators to clinical behaviour change by PCPs in high-income primary care contexts was conducted following PRISMA guidance. Embase, MEDLINE, PsychInfo, HMIC and the Cochrane Library were searched. Content and framework analysis was used to map reported barriers and facilitators to the Theoretical Domains Framework (TDF) and describe emergent themes. Intervention functions and policy categories to change behaviour associated with these domains were identified using the COM-B Model and Behaviour Change Wheel (BCW).

Four thousand three hundred eighty-eight reviews were identified. Nineteen were included. The average quality score was 7.5/11. Reviews infrequently used theory to structure their methods or interpret their findings. Barriers and facilitators most frequently identified as important were principally related to ‘ Knowledge ’, ‘ Environmental context and resources ’ and ‘ Social influences ’ TDF domains. These fall under the ‘Capability’ and ‘Opportunity’ domains of COM-B, and are linked with interventions related to education, training, restriction, environmental restructuring and enablement. From this, three key areas for policy change include guidelines, regulation and legislation. Factors least frequently identified as important were related to ‘Motivation’ and other psychological aspects of ‘Capability’ of COM-B. Based on this, BCW intervention functions of persuasion, incentivisation, coercion and modelling may be perceived as less relevant by PCPs to change behaviour.

Conclusions

PCPs commonly perceive barriers and facilitators to behaviour change related to the ‘Capability’ and ‘Opportunity’ domains of COM-B. PCPs may lack insight into the role that ‘Motivation’ and aspects of psychological ‘Capability’ have in behaviour change and/or that research methods have been inadequate to capture their function. Future research should apply theory-based frameworks and appropriate design methods to explore these factors. With no ‘one size fits all’ intervention, these findings provide general, transferable insights into how to approach changing clinical behaviour by PCPs, based on their own views on the barriers and facilitators to behaviour change.

Systematic review registration

A protocol was submitted to the London School of Hygiene and Tropical Medicine via the Ethics and CARE form submission on 16.4.2020, ref number 21478 (available on request). The project was not registered on PROSPERO.

Peer Review reports

Known as the “second translational gap” [ 1 ], a gap in translation between evidence-based interventions and everyday clinical practice has been shown across different clinical areas and international settings [ 2 , 3 , 4 ], with numerous organisational and individual factors influencing clinical behaviour. Existing literature has shown that there is particularly wide variation in clinical behaviour in the primary care setting, which cannot be explained by case mix and clinical factors alone [ 5 , 6 ]. This variation is of particular concern, as it is widely accepted that primary care is the cornerstone of a strong healthcare system [ 7 ], and stronger primary care systems are generally associated with better and more equitable population health outcomes [ 8 , 9 , 10 , 11 ]. With an ageing population and unique evolving challenges faced in primary care, understanding the contextual barriers and facilitators to successful behaviour change by primary care practitioners (PCPs) is vital to inform the design and implementation of successful behaviour change interventions (BCIs), and is likely to offer the greatest potential improvement in quality of care and population health outcomes.

Behaviour change interventions

Changing the behaviour of healthcare professionals is not easy, but has been shown to be easier when evidence-based theory informs intervention development [ 12 ]. BCIs aimed at healthcare professionals have traditionally been related to incentivisation schemes, guidelines, educational outreach, audit and feedback, printed materials and reminders [ 13 , 14 ]. These have often emerged from approaches to understanding behaviour change focused on individual attitude-intention processes [ 15 ] and theories emphasising self-interest [ 16 , 17 ]. However, the impact of these interventions on changing clinicians' behaviour has been found to be variable [ 18 ]. Within the context of primary care, attitude-intention processes may not fully explain (lack of) behaviour change, where PCPs face competing pressures, such as caring for multiple patients with limited time, identifying pathology among undifferentiated symptoms, coping with emotional situations, managing uncertainty and keeping up to date with substantial volumes of new evidence. Similarly, theories of self-interest may not fully translate to PCPs. BCIs are often implemented through collective action across teams or based on financial levers [ 19 , 20 , 21 ]; however, the organisational context where PCPs work can vary from single to group community-based practices with variable payment systems [ 22 ]. Therefore, while other healthcare professionals, patients and carers are likely to offer valuable insights, understanding PCPs' own perspectives on the barriers and facilitators to behaviour change by PCPs is a vital starting point.

Theoretical Domains Framework and Behaviour Change Wheel

The Theoretical Domains Framework (TDF) of behaviour change [ 23 ] simplifies and integrates 33 theories and 128 key theoretical constructs related to behaviour change into a single framework for use across multiple disciplines. Theoretical constructs are grouped into 14 domains in the final paper by Cane et al. [ 24 ], encompassing individual, social and environmental factors, with the majority relating to individual motivation and capability factors [ 25 ] (Fig. 1 ). Skills can be subcategorised into cognitive and interpersonal, and physical, although cognitive and interpersonal skills are more relevant to primary care (Table 1 ).

Fig. 1 The Behaviour Change Wheel (BCW) [ 26 ] (above) and the relationship with the Theoretical Domains Framework (TDF) [ 25 ] (below)

The TDF has been widely used to examine clinical behaviour change in healthcare settings [ 25 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 ]. Key advantages of the TDF include a comprehensive range of domains useful for synthesising large amounts of data [ 24 ] and the domains can be used to identify the types of interventions and policy strategies necessary to change those mechanisms of behaviour, using the Behaviour Change Wheel (BCW) [ 26 ]. Developed by Michie et al., the BCW can be used to characterise interventions by their “functions” and link these to behavioural targets, categorised in terms of capability (individual capacity to engage in the activity concerned), opportunity (all the factors that lie outside the individual that make the behaviour possible or prompt it) and motivation (brain processes that energize and direct behaviour), known as the COM-B System. Capability encompasses not only individual physical capability, but also psychological capability, defined as the capacity to engage in the necessary thought processes using comprehension, reasoning etc. Strategies to modify behaviour can be identified based on salient TDF and COM-B domains [ 35 ].

The evidence gap

As this has not been done before, the aims of this systematic review of reviews were to:

Identify barriers and facilitators to clinical behaviour change by PCPs through the theoretical lenses of the TDF and BCW, from the perspective of PCPs.

Help inform the future development and implementation of theory-led BCIs, to embed evidence-based medicine (EBM) into routine clinical practice and improve quality of care and population health outcomes.

A systematic review of reviews was deemed an appropriate method to address these aims, as the literature is substantial and heterogeneous. Existing reviews of reviews have looked at different types of effective BCIs, both in primary care [ 36 , 37 ] and in healthcare in general [ 18 ], however none have looked at barriers and facilitators to PCPs’ behaviour change, using both the TDF and BCW models as a theoretical basis.

We aimed to answer the following questions:

Which TDF domains are most frequently identified as important by PCPs when barriers and facilitators to clinical behaviour change by PCPs are mapped to the TDF framework?

What important themes emerge within these TDF domains?

What intervention functions and policy strategies from the COM-B Model and BCW link to these TDF domains, and what are the implication of this?

Guidance presented in the Joanna Briggs Institute (JBI) Manual for Evidence Synthesis [ 38 ] was used as methodological guidance to conduct the review, which provides guidance for umbrella reviews synthesising qualitative and quantitative data on topics other than intervention effectiveness. This guidance, alongside a modified version of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 39 ], were used for reporting (Additional file 1 ).

A comprehensive database search strategy was devised by MM with assistance from a librarian from LSHTM. The search was conducted by MM on April 16th 2020 without date restriction, using the following databases: Embase (1947 to 2020 April 14), MEDLINE (1946 to April week 1 2020), PsychInfo (1806 to April week 1 2020), Health Management Information Consortium (HMIC) (1979 to March 2020) and Cochrane Library (inception to April 2020). The full search strategies are shown in Additional file 2 .

In addition, grey literature was hand-searched by MM on the following websites: Public Health England [ 40 ], the University College London (UCL) Centre for Behaviour Change [ 41 ] and the National Institute for Health and Care Excellence (NICE) Evidence Search [ 42 ]. After screening and selection, reference lists of the included reviews were screened for additional relevant reviews.

Inclusion and exclusion criteria

To be included, articles had to be reviews of qualitative, quantitative or mixed methods empirical studies examining barriers and facilitators to clinical behaviour change by PCPs. Inclusion and exclusion criteria were defined using the PICo framework (Population, phenomena of Interest, Context) [ 43 ], to enable transparency and reproducibility. The element of ‘types of studies’ was added to specify types of evidence included (Table 2 ).

In most high-income country (HIC) settings, general practitioners/family doctors are the main providers of primary care; however, the included reviews often covered a mix of PCPs (healthcare professionals working in primary care).

PCPs usually provide the mainstay of care in high-income settings. Common barriers and facilitators across a wide range of high-income settings provide stronger evidence for context-specific recommendations.

Types of studies:

The inclusion of all types of reviews (including but not limited to narrative and realist reviews, meta-ethnography and meta-aggregation) allows for a broader review of the available literature, as these are not bound by the specificity of systematic reviews [ 44 , 45 ].

Only reviews published in English were included.

Screening and selection

Results from database searches were exported to EndNote X9 software and deduplicated. Titles and abstracts were screened independently by two reviewers (MM and SN). If the abstract contained insufficient information to determine eligibility, a copy of the full text was obtained. The full texts of articles meeting the inclusion criteria were obtained and reviewed. A standardised form including elements of the PICo framework was used at the full text review stage to identify relevant articles in a consistent way. Articles which could not be accessed online were obtained by contacting authors. Authors were also contacted to obtain clarification where eligibility was unclear. Reference lists of included articles were hand searched by MM and SN to identify additional relevant articles, subject to the same screening and selection processes described.

Quality appraisal

The Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Systematic Reviews and Research Syntheses [ 38 ] was used for quality appraisal, conducted independently by MM and SN. This tool was applicable to reviews of observational studies, which constituted the majority of the included articles; therefore, all reviews, regardless of their type, were subject to quality appraisal using the JBI checklist. This also allowed for consistency in scoring and easier comparison between the reviews. A scoring system was pre-defined using a small sample of five articles and guidance in the JBI Manual for Evidence Synthesis [ 38 ]. Some articles fulfilled some but not all of the criteria for a question; an additional 'partial yes' response was therefore added to reflect this (Additional file 3 ). With a maximum score of 11, scores were used to indicate low (≤ 4 points), moderate (> 4 and < 8 points) and high (≥ 8 points) quality. As outlined by Pope and Mays [ 46 ], the value of specific pieces of qualitative research may only emerge during the synthesis process, and low-quality studies may still offer valuable insights. No articles were therefore excluded on the basis of low quality scores.
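The thresholding rule can be sketched in a few lines of Python. This is purely illustrative: the point values (1 for 'yes', 0.5 for 'partial yes', 0 otherwise) are our assumption about how a maximum of 11 is reached over the 11 checklist questions, not something stated explicitly here.

```python
# Hypothetical sketch of the JBI quality-scoring rule described above.
# Assumed point values: 'yes' = 1, 'partial yes' = 0.5, 'no'/'unclear' = 0.
POINTS = {"yes": 1.0, "partial yes": 0.5, "no": 0.0, "unclear": 0.0}

def jbi_score(answers):
    """Total score for one review's 11 checklist answers (max 11)."""
    return sum(POINTS[a] for a in answers)

def quality_band(score):
    """Low (<= 4 points), moderate (> 4 and < 8) or high (>= 8) quality."""
    if score <= 4:
        return "low"
    if score < 8:
        return "moderate"
    return "high"

# Example: 7 'yes', 1 'partial yes', 3 'no' -> 7.5 points, 'moderate'
answers = ["yes"] * 7 + ["partial yes"] + ["no"] * 3
print(jbi_score(answers), quality_band(jbi_score(answers)))
```

On this reading, the reported average score of 7.5/11 would fall in the 'moderate' band, consistent with the review's summary.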

Data extraction

The JBI Data Extraction Form for Reviews of Systematic Reviews and Research Syntheses [ 38 ] was adapted to extract relevant data from included reviews. A citation matrix was created to map the included empirical studies of each review and identify duplicate references.

Data analysis and synthesis

Data analysis was conducted independently by MM and SN. Previously reported analysis methods [ 25 , 47 ] were used to guide data analysis and synthesis methods. A combination of content and framework analysis was used, described in five steps:

Data extraction: full-text versions of the included articles were imported into NVivo software and data were extracted from results and discussion sections and supplementary files. Data included barriers, facilitators and factors which could be both barriers and facilitators.

Deductive analysis: extracted barriers, facilitators and factors were mapped to relevant TDF domains using component constructs of each domain, outlined by Cane et al. [ 24 ]. Almost all reported barriers and facilitators related to skills were cognitive and interpersonal, therefore the TDF domain ‘skills: physical’ was removed.

Counts were used to identify the most frequently reported TDF domains. Owing to the vast amount of information across the included reviews, counts were also used to identify the TDF domains most frequently reported as important by authors. This was determined in three ways: where authors explicitly stated a domain was important or salient, where a domain was most frequently reported in reviews that used frequency counts, and where authors highlighted or focused on a domain in the discussion section to draw their main conclusions.

Inductive analysis: thematic analysis was conducted to identify emergent themes within the TDF domains most frequently identified as important to provide context to the role each barrier, facilitator and factor plays in hindering or facilitating clinical behaviour change. Owing to the vast amount of information across the included reviews, themes reported as important or salient by five or more reviews were labelled as important overall.

TDF domains most frequently identified as important were mapped to the COM-B model of the BCW to identify the associated intervention functions and policy categories.
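The deductive counting step above amounts to tallying how often each TDF domain appears among the mapped barriers and facilitators. A minimal sketch follows; the factor-to-domain pairs are invented for illustration and are not data from this review.

```python
from collections import Counter

# Hypothetical sketch: each extracted barrier/facilitator has already been
# mapped to a TDF domain (step 2); here we count domain frequencies (step 3).
# The example mappings below are illustrative only.
mapped_factors = [
    ("lack of time", "Environmental context and resources"),
    ("unfamiliarity with guideline", "Knowledge"),
    ("peer norms", "Social influences"),
    ("limited staffing", "Environmental context and resources"),
]

domain_counts = Counter(domain for _, domain in mapped_factors)
for domain, n in domain_counts.most_common():
    print(domain, n)
```

In practice such counts would be kept per review so that a domain's importance can also be judged by how many reviews report it, as described above.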

Discrepancies between reviewers at the screening, selection, quality appraisal and analysis stages were discussed until a consensus was reached.

Search results and selection

Database searches identified 6308 records. After duplicates were removed, there were 4374 records remaining. An additional 14 articles were identified from grey literature and reference list searches. The vast majority of these articles were either not a review of empirical studies, or they did not focus on behaviour change. Where they did focus on behaviour change, they focused on patient behaviour change, rather than that of PCPs. One hundred and nine full-text articles were assessed for eligibility. Clarification was sought from 19 authors on participant roles, search strategies and synthesis methods, and was obtained from 11 authors. Nineteen reviews [ 33 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 ] were included in the data synthesis (Fig. 2 ).

Fig. 2: Flow chart [ 66 ] of review process

Characteristics of included reviews

Of the 19 included reviews, 17 [ 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 64 , 65 ] were systematic reviews and 2 [ 33 , 63 ] were narrative reviews. The reviews were all published between 2005 and 2020. Four hundred and one empirical studies were included in total across a wide range of settings and healthcare systems. Almost all studies were conducted in high-income countries, the majority of which were conducted in Europe, USA, Canada, Australia and New Zealand. Five studies (1%) were conducted in upper-middle-income countries, including Jordan, Turkey, South Africa and Bosnia and Herzegovina. Seven reviews [ 48 , 49 , 50 , 51 , 52 , 53 , 65 ] only included qualitative studies, two [ 54 , 55 ] only included quantitative studies, and 10 [ 33 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 ] included qualitative, quantitative and/or mixed methods studies. Most studies were observational, utilising qualitative interviews and/or focus groups, or cross-sectional surveys. A minority of observational studies from six [ 55 , 58 , 59 , 62 , 63 , 64 ] reviews were part of larger intervention studies.

More than 72,000 PCPs were included in total. Seven [ 33 , 48 , 49 , 52 , 55 , 63 , 64 ] reviews only reported general practitioner (GP) or family physician (FP) data and 12 [ 50 , 51 , 53 , 54 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 65 ] reported a mix of PCP data, the majority of which were GPs, FPs, and community paediatrics and obstetrics and gynaecology physicians. Of these, five reviews [ 51 , 56 , 60 , 62 , 65 ] included primary care non-physicians, including nurse practitioners (NPs) and physician assistants. Seven reviews [ 50 , 52 , 53 , 56 , 58 , 61 , 65 ] examined behaviour related to clinical management of a range of topics, of which three specifically examined prescribing behaviour; four reviews [ 33 , 51 , 59 , 63 ] examined diagnostic processes; two [ 49 , 54 ] examined prevention; two [ 55 , 57 ] examined communication and engagement with patients; two [ 48 , 64 ] examined the practice of EBM in general; one [ 62 ] examined collaborative practice; and one [ 60 ] examined service provision. Eighteen studies were referenced by two reviews each due to overlapping phenomena of interest. The most common data synthesis methods were thematic/narrative synthesis and meta-ethnography, used by 13 reviews [ 48 , 49 , 50 , 51 , 52 , 53 , 55 , 56 , 58 , 60 , 61 , 62 , 65 ]. Six reviews [ 33 , 54 , 57 , 59 , 63 , 64 ] used framework synthesis; only three reviews [ 33 , 59 , 64 ] used existing theoretical frameworks or models, such as the TDF and COM-B. A summary of review characteristics is shown in Table 3 . Additional information is shown in Additional file 4 .

Quality of empirical studies

Three reviews [ 33 , 55 , 63 ] did not conduct quality appraisal, two of which [ 33 , 63 ] were narrative reviews. The remaining reviews used a wide range of appraisal tools to suit the type of data they included, primarily existing tools in their original or adapted forms. Five reviews [ 56 , 57 , 60 , 62 , 64 ] used more than one tool. The most common appraisal tools were the CASP (Critical Appraisal Skills Programme) checklists [ 67 ] for qualitative and quantitative research, used by seven reviews [ 48 , 52 , 53 , 56 , 57 , 59 , 65 ]. There was large variation in how authors used the appraisal tools; therefore, the quality of studies could not be reliably compared between reviews. Detailed information is shown in Additional file 3 .

Quality of reviews

Ten reviews were of high quality, seven were of moderate quality and two were of low quality according to the JBI checklist. The highest score was 10.5/11 and the lowest was 3/11. The average score was 7.5/11, which is considered moderate quality. Reviews generally included well-evidenced recommendations for policy and practice and appropriate directives for future research. On validity, reviews scored highest on appropriate inclusion criteria for the review question and appropriate methods used to combine studies. Reviews scored poorly on using an appropriate search strategy and on assessing for publication bias. Most reviews did not justify search limits and/or did not provide evidence of a search strategy. Scores for each of the criteria are shown in Additional file 3 .

Main findings

A large number of barriers, facilitators and factors were identified by authors, often interacting with each other in a complex way (Table 4 ). As a result, some barriers and facilitators were mapped to more than one TDF domain. All TDF domains were identified. All reviews identified ‘environmental context and resources’ as important, and all but two reviews identified ‘knowledge’ and ‘social influences’ as important. The TDF domains least frequently identified as important were ‘goals’, ‘intentions’ and ‘optimism’. Although the ‘social/professional role and identity’, ‘skills’ and ‘emotion’ TDF domains were frequently identified, they were less frequently highlighted as important by authors. Table 4 shows how the TDF and COM-B domains were mapped to each of the included reviews, as well as which domains were identified as important.

Forty-two themes were identified in total across all TDF domains, 12 of which were labelled as important overall. A theme was labelled as important overall if five or more reviews identified it as important or salient. Within the ‘Knowledge’, ‘Environmental context and resources’, and ‘Social influences’ TDF domains, nine important themes emerged, of which the most frequently cited as important were ‘knowledge, awareness and uncertainty’ and ‘time, workload and general resources’. Across the remaining TDF domains, other important themes included ‘skills and competence’, ‘roles and responsibilities’, and ‘confidence in own ability’. Additional file 5 shows how themes were mapped to each TDF domain and each review, with corresponding quotes.

Capability: psychological (COM-B domain)

Knowledge (TDF domain)

Knowledge, awareness and uncertainty (theme)

Identified as important by 13 reviews [ 33 , 50 , 51 , 52 , 54 , 55 , 58 , 59 , 60 , 61 , 62 , 63 , 65 ] (average quality score 7.2/11).

Inadequate knowledge, lack of awareness, and uncertainty were identified as important barriers to depression diagnosis and management [ 51 , 56 ], recognition of insomnia [ 33 ], antibiotic prescribing in childhood infections [ 50 ] and acute respiratory tract infections (ARTIs) [ 65 ], engagement in cancer care [ 58 ], integration of genetics services [ 60 ], discussing smoking cessation [ 55 ], collaborative practice [ 62 ], management of multimorbidity [ 52 ], breast and colorectal screening in older adults [ 54 ] and chlamydia testing [ 59 , 63 ]. This varied from a lack of knowledge of the topic as a whole to more specific skills or outcomes. For example, PCPs reported a lack of knowledge around the epidemiology and presentation of chlamydia, benefits of testing, how to take specimens, and treatment options [ 59 ]. When prescribing antibiotics for childhood infections and ARTIs, PCPs reported that they tend to prescribe “just in case” when they are uncertain of the consequences of not prescribing, such as when the diagnosis is unclear or where there is no established doctor-patient relationship [ 50 , 65 ]. There was widespread lack of knowledge within the field of genetics, including uncertainty around cancer genetics, genetic testing, genetic discrimination legislation, and local genetics service provision [ 60 ]. As well as a lack of knowledge of national guidelines and strategy [ 60 , 63 ], inadequate guidelines were reported to exacerbate a lack of knowledge. For example, a lack of attention in guidelines to how social problems affect response to depression management was reported to exacerbate PCPs’ uncertainty around their role in managing depression [ 56 ]. Lack of knowledge and uncertainty were frequently reported to cause discomfort, low confidence, and reluctance to fill certain roles.

Opportunity: physical (COM-B domain)

Environmental context and resources (TDF domain)

Time, workload and general resources (theme)

Identified as important by 13 reviews [ 33 , 48 , 50 , 51 , 53 , 54 , 55 , 58 , 59 , 60 , 61 , 63 , 64 ] (average quality score 7.3/11).

A lack of time to implement a variety of different tasks and clinical behaviours was reported, compounded by a large and complex workload and lack of general resources. A prominent barrier was time-pressured consultations, where PCPs reported difficulty in ensuring the clinician and parents are satisfied with the outcome when treating childhood infections [ 50 ], offering alternative interventions [ 53 ], listening to patients with depression [ 56 ], discussing emotions in cancer care [ 58 ] or smoking cessation with patients [ 55 ], introducing chlamydia testing and addressing sexual health-related concerns [ 63 ], recognising, diagnosing and managing child and adolescent mental health problems [ 61 ], and negotiating with patients [ 48 ]. PCPs also reported a lack of time to read and assess evidence and guidelines and reflect on their own practice [ 48 , 64 ].

Guidelines, evidence and decision-making tools (theme)

Identified as important by five reviews [ 48 , 52 , 58 , 64 , 65 ] (average quality score 7.8/11).

Guidelines were a common factor reported to affect clinical behaviour, including a lack of guidelines/guidance, a questionable evidence base, and a disjunction between guidelines and personal experience. For example, PCPs reported difficulty in adapting recommendations to individual patient circumstances and the practical constraints of the consultation [ 48 , 51 , 52 ]. PCPs felt that some guidelines lack the necessary flexibility when taking patient preferences and multimorbidity into account, which can add to complexity and even cause harm in some cases [ 52 , 64 ]. PCPs questioned the evidence base of the guidelines due to the low generalisability and narrow inclusion criteria of trials [ 48 , 64 ] and potentially biased sources of research, such as pharmaceutical companies [ 64 ]. The validity of criteria used for depression diagnosis was also questioned, with national guideline criteria defining depressive disorders using symptom counts, as opposed to viewing depression as a syndrome requiring aetiological and conceptual thinking [ 51 ]. For non-English-speaking PCPs, access to evidence and guidelines in their native language was reported as a major barrier to implementing EBM [ 64 ].

Financial resources and insurance coverage (theme)

Identified as important by six reviews [ 49 , 54 , 58 , 61 , 62 , 64 ] (average quality score 8/11).

Poor remuneration and increasing costs were common barriers reported in areas such as PCP involvement in cancer care [ 58 ], child and adolescent mental health [ 61 ] and use of EBM [ 64 ]. A major barrier to recognition and management of child and adolescent mental health problems and cancer in older adults was inadequate insurance coverage, including inadequate coverage of screening tests [ 54 ], restrictions on the number of funded therapy visits, and lack of psychiatrists [ 61 ]. As a result, increased reimbursement was identified as a potential facilitator that could increase child and adolescent mental health diagnoses [ 61 ].

Education and training (theme)

Identified as important by five reviews [ 33 , 59 , 63 , 64 , 65 ] (average quality score 5.7/11).

A lack of education and training was highlighted as an important barrier to chlamydia testing [ 59 , 63 ], recognition of insomnia [ 33 ], antibiotic prescribing [ 65 ], and use of EBM [ 64 ]. PCPs reported that undergraduate sexual health teaching is inadequate [ 63 ] and that they have a lack of appropriate training and skills to discuss sexual health, take a sexual history, offer a test, manage treatment and notify partners. This has led to a reduction in knowledge and confidence to offer testing and discuss sexual health [ 59 ]. More education and training for PCPs and undergraduate students was frequently cited as a facilitator, as PCPs felt this would increase knowledge and confidence to change behaviour. Older male PCPs were identified as potentially in need of specific education on sexual health due to cultural differences with some patients receiving chlamydia testing [ 59 ]. Some PCPs reported that trustworthy and knowledgeable educational sources are important for PCPs to feel added value, with peer-led educational meetings given as an example [ 65 ].

Opportunity: social (COM-B domain)

Social influences (TDF domain)

PCP-patient relationship and patient-centred care (theme)

Identified as important by nine reviews [ 48 , 49 , 51 , 52 , 53 , 57 , 58 , 59 , 65 ] (average quality score 8.2/11).

Some PCPs reported that preservation of the PCP-patient relationship is prioritised over adherence to guidelines, particularly if guidelines recommend rationing services, or if PCPs feel empathetic towards anxious patients [ 48 ]. This dilemma was described as unpleasant and against the principles of patient-centred medicine, but sometimes necessary to avoid the potential litigation that rationing might bring [ 48 ] and loss of patients to other practices [ 53 ]. Similarly, the desire to maintain a good relationship is sometimes in competition with the PCP’s rationing role, leading some PCPs to give patients a “quick fix” when prescribing benzodiazepines [ 53 ]. Although not always reported as important, sensitive and emotive areas of medicine appear to be particularly affected, with PCPs reporting a concern for depriving patients of hope and/or damaging the relationship if they engage in the process of ACP [ 57 ], cancer care [ 58 ], or offer chlamydia testing [ 63 ]. Specifically, PCPs worried about appearing discriminatory and judgemental towards patients by offering chlamydia testing [ 63 ], and being too intrusive and paternalistic in recommending behaviour change to patients to prevent CVD [ 49 ]. This appears to be compounded by different religious and cultural norms between the PCP and patient, particularly if patients are of non-heteronormative orientation [ 63 ].

Establishing a rapport with patients and developing a long-standing, trusting doctor-patient relationship was identified as a facilitator for information-sharing, depression diagnosis [ 51 ], multimorbidity management [ 52 ], changing prescribing behaviour of benzodiazepines [ 53 ] and PCP engagement in ACP [ 57 ].

Patient/carer characteristics (theme)

Identified as important by eight reviews [ 49 , 50 , 53 , 54 , 56 , 59 , 64 , 65 ] (average quality score 8/11).

The majority of reviews identified PCPs’ perceptions of patient/carer ideas, concerns, expectations and motivations as important barriers to preventing CVD [ 49 ], prescribing antibiotics [ 50 , 65 ] and benzodiazepines [ 53 ], chlamydia testing [ 59 ], cancer screening in older adults [ 54 ], and implementing EBM [ 64 ]. Attitudes often stemmed from stigma towards patients with mental health problems and from cultural diversity between the PCP and patient. For example, PCPs were found to have ambivalent attitudes towards working with depressed people, with some PCPs describing them as “burdens” and “people who bore you” [ 56 ]. Ethnic minorities were also felt to somatise their depression, and patients with social problems were seen to be avoiding work or seeking to medicalise their problems. This was compounded by a perception that management of patients presenting with social problems is complex. These beliefs were considered alongside other complex external factors, such as perceived pressure from parents to prescribe antibiotics [ 50 ], patient expectations that differ from the evidence [ 64 ], and a reluctance to medicalise unhealthy lifestyles [ 49 ].

Collaboration and communication with other health professionals (theme)

Identified as important by seven reviews [ 49 , 52 , 58 , 59 , 61 , 62 , 65 ] (average quality score 7.8/11).

Poor communication and uncoordinated care between PCPs and specialists were reported to hinder medication overviews, creating a feeling of uncertainty around the role of the PCP [ 52 ]. This was compounded by the perception of hierarchy between doctors and nurses [ 62 ] and negative attitudes towards handing over power [ 59 ]. Co-management with specialists was identified as an important facilitator in CVD prevention, to reinforce specialist advice and strengthen cohesive care [ 49 ]. Specialist input was desired by some PCPs to improve awareness of the complexity of multimorbidity among specialists and ensure all doctors ‘speak with one voice’ to avoid provoking distrust [ 52 ]. Discussion with peers and personal or local prescribing feedback were identified as important facilitators to changing antibiotic prescribing [ 65 ]. Multiple facilitators to collaboration between nurse and medical practitioners in primary care were also identified [ 62 ]. These included knowing the practitioner and having a good working relationship, reciprocity without hierarchy and control, effective communication including the use of technology, mutual trust and respect, shared responsibility, and support from medical practitioners.

Norms, stigma and attitudes (theme)

Identified as important by five reviews [ 56 , 59 , 60 , 63 , 65 ] (average quality score 6.6/11).

The belief that patients would feel stigmatised or embarrassed was identified as an important barrier to depression diagnosis [ 56 ], chlamydia testing [ 59 , 63 ], discussing family history and genetics [ 60 ] and antibiotic prescribing for ARTIs [ 65 ]. Stigma towards depression was seen as an important barrier to addressing psychosocial aspects of depression and commencing treatment amongst patients from the Caribbean and South Asia [ 56 ]. Stigmatising attitudes towards depressed, obese and elderly people were also reported to impact clinical decision-making [ 49 , 56 ] (see ‘Patient/carer characteristics’ section). A major facilitator to reduce stigma and raise awareness was the normalisation of chlamydia testing [ 63 ]. This may include formal policy, guidelines or government programmes, feedback on testing rates, different methods of testing such as urine samples, and the use of non-heteronormative terminology.

BCW intervention functions and policy categories

COM-B components and intervention functions linked to the three TDF domains most frequently identified as important are shown in Table 5 . Based on this, five intervention functions from the BCW were identified as most likely to be successful in changing clinical behaviour by PCPs. Associated with improving ‘capability’ are education (increasing knowledge or understanding), training (imparting skills) and enablement (increasing means or reducing barriers, such as behavioural support for smoking cessation) interventions. Associated with improving social and physical ‘opportunity’ are restriction (using rules to reduce the opportunity to engage in competing behaviours), environmental restructuring (changing the physical or social context) and enablement interventions. The TDF domains ‘intentions’, ‘goals’ and ‘optimism’, which all map to the ‘motivation’ domain of the COM-B, were perceived as the least influential on clinical behaviour change by PCPs. As a result, BCW intervention functions including persuasion, incentivisation, coercion and modelling may be perceived as less relevant by PCPs to change behaviour.

Using the BCW, the three policy categories most commonly associated with supporting the delivery of the five intervention functions identified include guidelines (creating documents that recommend or mandate practice, including all changes to service provision), regulation (establishing rules or principles of behaviour or practice, such as establishing voluntary agreements on advertising), and legislation (making or changing laws, such as prohibiting sale or use) (Table 5 ).
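The linkage described above can be sketched as a lookup from COM-B component to the intervention functions reported in this section, with the top policy categories alongside. The pairings reflect the results summarised above (Table 5 holds the full mapping); the code structure itself is illustrative only.

```python
# Intervention functions linked to the COM-B components identified as most
# important, as reported in this section (see Table 5 for the full mapping).
FUNCTIONS_BY_COMB = {
    "capability: psychological": {"education", "training", "enablement"},
    "opportunity: physical": {"restriction", "environmental restructuring", "enablement"},
    "opportunity: social": {"restriction", "environmental restructuring", "enablement"},
}

# Policy categories most commonly associated with delivering these functions.
TOP_POLICY_CATEGORIES = {"guidelines", "regulation", "legislation"}

def candidate_functions(comb_components):
    """Union of intervention functions linked to the given COM-B components."""
    out = set()
    for component in comb_components:
        out |= FUNCTIONS_BY_COMB[component]
    return out

# Across all three components, five distinct intervention functions remain.
print(sorted(candidate_functions(FUNCTIONS_BY_COMB)))
```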

Summary of main results

Evidence across all reviews was heterogeneous, examining 16 different clinical behaviours across a range of primary care settings and healthcare systems. Most reviews were of moderate-to-high quality. All themes identified from the included reviews could be mapped to at least one domain from the TDF. Barriers, facilitators and factors most commonly reported by PCPs were related to ‘knowledge’, ‘environmental context and resources’ and ‘social influences’. Within these domains, ‘knowledge, awareness and uncertainty’ and ‘time, workload and general resources’ were by far the most important themes. Not only did factors affect various clinical behaviours such as diagnosis, management, and communication and collaboration with patients and other healthcare professionals, but they were also linked to each other in a complex way, often exacerbating one another in specific contexts and circumstances. For example, a lack of knowledge and uncertainty amongst PCPs is exacerbated by a poor or unestablished PCP-patient relationship, a lack of time and resources, and patient characteristics such as comorbidities and social problems.

Five out of nine intervention functions from the BCW (education, training, restriction, environmental restructuring and enablement) can be linked to the three TDF domains reported as most important by PCPs to help change clinical behaviour. These can be delivered through all seven policy categories of the BCW, although the policy categories most frequently associated with all five intervention functions are guidelines, regulation and legislation.

The TDF domains ‘intentions’, ‘goals’ and ‘optimism’, which all map to the ‘motivation’ domain of the COM-B, were perceived as the least influential on clinical behaviour change by PCPs. The TDF domains ‘behavioural regulation’, ‘memory, attention and decision processes’, ‘emotion’, ‘beliefs about consequences’, ‘reinforcement’ and ‘beliefs about capabilities’ were also perceived by PCPs as less important barriers or facilitators to behaviour change. ‘Behavioural regulation’ and ‘memory, attention and decision processes’ relate to the psychological aspect of ‘capability’, and the others, again, relate to the ‘motivation’ domain of COM-B. This is a surprising finding, as a central premise of the COM-B model, to which the TDF domains are linked, is that all three of its components (capability, opportunity and motivation) interact to produce behaviour [ 25 ].

Linked to the automatic and reflective ‘motivation’ domain of COM-B are BCW interventions related to incentivisation, persuasion, coercion and modelling. It is therefore also surprising that PCPs did not identify these as important barriers and/or facilitators, as substantial evidence exists on the widespread use of interventions associated with incentives (e.g. financial pay-for-performance or reputational league tables), albeit with mixed effects, and of those which may utilise persuasion, modelling and even coercion (e.g. peer-to-peer outreach or public reporting) to change aspects of PCP behaviour [ 14 , 68 , 69 , 70 ].

The limited frequency and importance given to aspects of psychological ‘capability’ and ‘motivation’ raises questions as to whether PCPs may have less insight into these areas, or less desire to identify them as barriers or facilitators. It is possible they may be neglecting the role of brain processes involved in developing psychological capabilities, i.e. the capacity to engage in the necessary thought processes using comprehension and reasoning, and those that energise or direct behaviour, such as habitual processes, emotional responding and automatic decision-making. With most studies using qualitative interviews or cross-sectional surveys, questions may also have focused on domains researchers and BCI designers believed to be relevant, such as external factors including time, guidelines and patients.

Key policy implications

Based on our findings, three TDF domains were most commonly reported across the majority of reviews, regardless of the type of behaviour change and context. This suggests that addressing these common factors through the associated BCW intervention functions (education, training, restriction, environmental restructuring and enablement) and the associated policy categories (guidelines, regulation and legislation), where not already addressed, could be prioritised to encourage PCPs to change clinical practice across most clinical behaviours and settings.

Strengths and limitations

The robustness of our findings is supported by several features. A broad, sensitive search strategy maximised the number of eligible reviews identified. Although the extent to which findings are applicable to a specific healthcare system or clinical context is unclear, reviews meeting the inclusion criteria focused on 16 types of clinical behaviours across a breadth of healthcare systems and included over 72,000 PCPs, providing a good starting point to identify commonalities across PCPs from a variety of different primary care settings.

Large amounts of heterogeneous data were summarised in a clear way using two evidence-based frameworks; however, precise mapping of barriers, facilitators and factors to the TDF proved challenging, owing to the complex interplay between factors and to the authors’ interpretation of where they fitted. The integration of the TDF and BCW means important barriers and facilitators can be linked to practical strategies to address them, although this relies on the validity of the frameworks themselves.

Future research

Only a minority of reviews utilised a theory-based framework to synthesise evidence. To maximise the likelihood of intervention success and encourage the use of common terminology and understanding, future research should synthesise evidence using theory-informed frameworks, such as the TDF, paying particular attention to barriers and facilitators to behaviour change associated with PCPs’ own automatic and reflective motivation, and other aspects of psychological capability related to behaviour change. Methods exploring PCP motivation and aspects of psychological capability, as well as methods less reliant on PCPs’ insight, such as direct observation, may provide more valid conclusions.

To the best of our knowledge, this is the first theory-led systematic review of reviews examining barriers and facilitators to clinical behaviour change by PCPs across a variety of primary care settings using the TDF and BCW. From the evidence available, PCPs perceive that factors related to knowledge, environmental context and resources, and social influences are influential across a variety of primary care contexts, often interacting with each other in a complex way. It is vital that future research utilises theory-based frameworks and appropriate design methods to explore factors relating to automatic and reflective motivation, such as the habitual processes, emotional responding and automatic decision-making that energise or direct behaviour, as well as the psychological capability of PCPs, including the capacity to engage in the necessary thought processes using comprehension and reasoning. With no ‘one size fits all’ intervention, these findings offer general, transferable lessons in how to approach changing clinical behaviour by PCPs and improving quality of care and population health outcomes.

Availability of data and materials

All data analysed during this study are included in this published article and its additional information files.

Abbreviations

ARTI: Acute respiratory tract infection
BCI: Behaviour Change Intervention
BCW: Behaviour Change Wheel
CASP: Critical Appraisal Skills Programme
COM-B: Capability Opportunity Motivation Behaviour
CVD: Cardiovascular disease
EBM: Evidence-based medicine
FP: Family physician
GP: General practitioner
HMIC: Health Management Information Consortium
JBI: Joanna Briggs Institute
NICE: National Institute for Health and Care Excellence
NP: Nurse practitioner
PCP: Primary care practitioner
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
TDF: Theoretical Domains Framework

Woolf SH. The meaning of translational research and why it matters. JAMA. 2008;299(2):211–3.

Sederer LI. Science to practice: making what we know what we actually do. Schizophr Bull. 2009;35(4):714–8.

Runciman WB, Hunt TD, Hannaford NA, Hibbert PD, Westbrook JI, Coiera EW, et al. CareTrack: assessing the appropriateness of health care delivery in Australia. Med J Aust. 2012;197(2):100–5.

Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003;362(9391):1225–30.

Brookes-Howell L, Hood K, Cooper L, Little P, Verheij T, Coenen S, et al. Understanding variation in primary medical care: a nine-country qualitative study of clinicians’ accounts of the non-clinical factors that shape antibiotic prescribing decisions for lower respiratory tract infection. BMJ Open. 2012;2(4):e000796.

Jaramillo E, Tan A, Yang L, Kuo Y-F, Goodwin JS. Variation among primary care physicians in prostate-specific antigen screening of older men. JAMA. 2013;310(15):1622–4.

Shi L. The impact of primary care: a focused review. Scientifica. 2012;2012:432892.

Starfield B. Policy relevant determinants of health: an international perspective. Health Policy. 2002;60:201–18.

Macinko J, Starfield B, Shi L. The contribution of primary care systems to health outcomes within Organization for Economic Cooperation and Development (OECD) countries, 1970-1998. Health Serv Res. 2003;38(3):831–65.

Niti M, Ng TP. Avoidable hospitalisation rates in Singapore, 1991–1998: assessing trends and inequities of quality in primary care. J Epidemiol Community Health. 2003;57(1):17–22.

Starfield B. Primary care and health: a cross-national comparison. In: Isaacs SL, Knickman JR, editors. Generalist medicine and the US health system, Chapter 11. Princeton: Robert Wood Johnson; 2004. p. 187–96.

Abraham C, Kelly MP, West R, Michie S. The UK National Institute for Health and Clinical Excellence public health guidance on behaviour change: a brief introduction. Psychol Health Med. 2009;14(1):1–8.

Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(8):II2–45.

Ahmed K, Hashim S, Khankhara M, Said I, Shandakumar AT, Zaman S, et al. What drives general practitioners in the UK to improve the quality of care? A systematic literature review. BMJ Open Qual. 2021;10(1):e001127.

Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.

Laffont JJ, Martimort D. The theory of incentives: the principal-agent model. Princeton: Princeton University Press; 2009.

Becker G. The economic approach to human behavior. Chicago: University of Chicago Press; 1976.

Johnson MJ, May CR. Promoting professional behaviour change in healthcare: what interventions work, and why? A theory-led overview of systematic reviews. BMJ Open. 2015;5(9):e008592.

Thornton PH, Ocasio W, Lounsbury M. The institutional logics perspective: a new approach to culture, structure and process. Oxford: Oxford University Press; 2012. p. 248.

Hernes T. A process theory of organization. Oxford: Oxford University Press; 2014.

Czarniawska B. A theory of organizing. Cheltenham: Edward Elgar Publishing Ltd; 2008.

Berenson RA, Rich EC. US approaches to physician payment: the deconstruction of primary care. J Gen Intern Med. 2010;25(6):613–8.

Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005;14(1):26–33.

Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7(1):37.

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12(1):77.

Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6:42.

Michie S, Pilling S, Garety P, Whitty P, Eccles MP, Johnston M, et al. Difficulties implementing a mental health guideline: an exploratory investigation using psychological theory. Implement Sci. 2007;2(1):8.

Islam R, Tinmouth AT, Francis JJ, Brehaut JC, Born J, Stockton C, et al. A cross-country comparison of intensive care physicians’ beliefs about their transfusion behaviour: a qualitative study using the Theoretical Domains Framework. Implement Sci. 2012;7:93.

McSherry LA, Dombrowski SU, Francis JJ, Murphy J, Martin CM, O’Leary JJ, et al. ‘It’s a can of worms’: understanding primary care practitioners’ behaviours in relation to HPV using the theoretical domains framework. Implement Sci. 2012;7(1):73.

Duncan EM, Francis JJ, Johnston M, Davey P, Maxwell S, McKay GA, et al. Learning curves, taking instructions, and patient safety: using a theoretical domains framework in an interview study to investigate prescribing errors among trainee doctors. Implement Sci. 2012;7:86.

Bussières AE, Patey AM, Francis JJ, Sales AE, Grimshaw JM, the Canada PPT. Identifying factors likely to influence compliance with diagnostic imaging guideline recommendations for spine disorders among chiropractors in North America: a focus group study using the Theoretical Domains Framework. Implement Sci. 2012;7(1):82.

Murphy K, O’Connor DA, Browning CJ, French SD, Michie S, Francis JJ, et al. Understanding diagnosis and management of dementia and guideline implementation in general practice: a qualitative study using the theoretical domains framework. Implement Sci. 2014;9(1):31.

Ogeil RP, Chakraborty SP, Young AC, Lubman DI. Clinician and patient barriers to the recognition of insomnia in family practice: a narrative summary of reported literature analysed using the theoretical domains framework. BMC Fam Pract. 2020;21(1):1.

Craig LE, McInnes E, Taylor N, Grimley R, Cadilhac DA, Considine J, et al. Identifying the barriers and enablers for a triage, treatment, and transfer clinical intervention to manage acute stroke patients in the emergency department: a systematic review using the theoretical domains framework (TDF). Implement Sci. 2016;11(1):157.

Richardson M, Khouja CL, Sutcliffe K, Thomas J. Using the theoretical domains framework and the behavioural change wheel in an overarching synthesis of systematic reviews. BMJ Open. 2019;9(6):e024950.

Chauhan BF, Jeyaraman MM, Mann AS, Lys J, Skidmore B, Sibley KM, et al. Behavior change interventions and policies influencing primary healthcare professionals' practice-an overview of reviews. Implement Sci. 2017;12(1):3.

Lau R, Stevenson F, Ong BN, Dziedzic K, Treweek S, Eldridge S, et al. Achieving change in primary care—causes of the evidence to practice gap: systematic reviews of reviews. Implement Sci. 2016;11(1):40.

Aromataris E, Fernandez R, Godfrey C, Holly C, Khalil H, Tungpunkom P. Chapter 10: Umbrella reviews. In: Aromataris E, Munn Z, editors. JBI manual for evidence synthesis; 2020.

Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines. 2020. Available from: http://www.prisma-statement.org/ . Accessed 1 June 2020.

PHE. Public Health England (PHE). https://www.gov.uk/government/organisations/public-health-england . Accessed 16 Apr 2020.

Change UCfB. UCL centre for behaviour change. https://www.ucl.ac.uk/behaviour-change/ . Accessed 16 Apr 2020.

NICE. National Institute for Health and Care Excellence (NICE) evidence search: https://www.evidence.nhs.uk/search . Accessed 16 Apr 2020.

Stern C, Jordan Z, McArthur A. Developing the review question and inclusion criteria. Am J Nurs. 2014;114(4):53–6.

Collins JA, Fauser BCJM. Balancing the strengths of systematic and narrative reviews. Hum Reprod Update. 2005;11(2):103–4.

Rycroft-Malone J, McCormack B, Hutchinson AM, DeCorby K, Bucknall TK, Kent B, et al. Realist synthesis: illustrating the method for implementation research. Implement Sci. 2012;7(1):33.

Pope C, Mays N. Qualitative research in health care. Hoboken: Wiley; 2013.

Graham-Rowe E, Lorencatto F, Lawrenson JG, Burr JM, Grimshaw JM, Ivers NM, et al. Barriers to and enablers of diabetic retinopathy screening attendance: a systematic review of published and grey literature. Diabet Med. 2018;35(10):1308–19.

Carlsen B, Glenton C, Pope C. Thou shalt versus thou shalt not: a meta-synthesis of GPs’ attitudes to clinical practice guidelines. Br J Gen Pract. 2007;57(545):971–8.

Ju I, Banks E, Calabria B, Ju A, Agostino J, Korda RJ, et al. General practitioners’ perspectives on the prevention of cardiovascular disease: systematic review and thematic synthesis of qualitative studies. BMJ Open. 2018;8(11):e021137.

Lucas PJ, Cabral C, Hay AD, Horwood J. A systematic review of parent and clinician views and perceptions that influence prescribing decisions in relation to acute childhood infections in primary care. Scand J Prim Health Care. 2015;33(1):11–20.

Schumann I. Physicians’ attitudes, diagnostic process and barriers regarding depression diagnosis in primary care: a systematic review of qualitative studies; 2012.

Sinnott C, McHugh S, Browne J, Bradley C. GPs’ perspectives on the management of patients with multimorbidity: systematic review and synthesis of qualitative research. BMJ Open. 2013;3(9):e003610.

Sirdifield C, Anthierens S, Creupelandt H, Chipchase SY, Christiaens T, Siriwardena AN. General practitioners’ experiences and perceptions of benzodiazepine prescribing: systematic review and meta-synthesis. BMC Fam Pract. 2013;14:191.

Vedel I, Puts MTE, Monette M, Monette J, Bergman H. Barriers and facilitators to breast and colorectal cancer screening of older adults in primary care: a systematic review. J Geriatr Oncol. 2011;2(2):85–98.

Vogt F, Hall S, Marteau TM. General practitioners’ and family physicians’ negative beliefs and attitudes towards discussing smoking cessation with patients: a systematic review. Addiction. 2005;100(10):1423–31.

Barley EA, Murray J, Walters P, Tylee A. Managing depression in primary care: a meta-synthesis of qualitative and quantitative research from the UK to identify barriers and facilitators. BMC Fam Pract. 2011;12:47.

De Vleminck A, Houttekier D, Pardon K, Deschepper R, van Audenhove C, Stichele RV, et al. Barriers and facilitators for general practitioners to engage in advance care planning: a systematic review. Scand J Prim Health Care. 2013;31(4):215–26.

Lawrence RA, McLoone JK, Wakefield CE, Cohn RJ. Primary care physicians’ perspectives of their role in cancer care: a systematic review. J Gen Intern Med. 2016;31(10):1222–36.

McDonagh LK, Saunders JM, Cassell J, Curtis T, Bastaki H, Hartney T, et al. Application of the COM-B model to barriers and facilitators to chlamydia testing in general practice for young people and primary care practitioners: a systematic review. Implement Sci. 2018;13(1):130.

Mikat-Stevens NA, Larson IA, Tarini BA. Primary-care providers’ perceived barriers to integration of genetics services: a systematic review of the literature. Genet Med. 2015;17(3):169–76.

O'Brien D, Harvey K, Howse J, Reardon T, Creswell C. Barriers to managing child and adolescent mental health problems: a systematic review of primary care practitioners’ perceptions. Br J Gen Pract. 2016;66(651):e693–707.

Schadewaldt V, McInnes E, Hiller JE, Gardner A. Views and experiences of nurse practitioners and medical practitioners with collaborative practice in primary health care - an integrative review. BMC Fam Pract. 2013;14:132.

Yeung A, Temple-Smith M, Fairley C, Hocking J. Narrative review of the barriers and facilitators to chlamydia testing in general practice. Aust J Prim Health. 2015;21(2):139–47.

Zwolsman S, te Pas E, Hooft L, Wieringa-de Waard M, van Dijk N. Barriers to GPs’ use of evidence-based medicine: a systematic review. Br J Gen Pract. 2012;62(600):e511–21.

Tonkin-Crine S, Yardley L, Little P. Antibiotic prescribing for acute respiratory tract infections in primary care: a systematic review and meta-ethnography. J Antimicrob Chemother. 2011;66(10):2215–23.

Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

CASP. Critical Appraisal Skills Programme (CASP) appraisal checklists. Available from: https://casp-uk.net/casp-tools-checklists/ . Accessed 1 June 2020.

Khan N, Rudoler D, McDiarmid M, Peckham S. A pay for performance scheme in primary care: Meta-synthesis of qualitative studies on the provider experiences of the quality and outcomes framework in the UK. BMC Fam Pract. 2020;21(1):142.

Eijkenaar F, Emmert M, Scheppach M, Schoffski O. Effects of pay for performance in health care: A systematic review of systematic reviews. Health Policy. 2013;110(2-3):115–30.

Campanella P, Vukovic V, Parente P, Sulejmani A, Ricciardi W, Specchia ML. The impact of public reporting on clinical outcomes: a systematic review and meta-analysis. BMC Health Serv Res. 2016;16:296.


Acknowledgements

LP is funded by a National Institute for Health Research (NIHR) Doctoral Research Fellowship. The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care.

This review was led by MM as her Master’s in Public Health thesis at the London School of Hygiene and Tropical Medicine (LSHTM); no funding was received for this. LP is funded by an NIHR Doctoral Research Fellowship. No funding was received by SN.

Author information

Authors and affiliations

Maidstone and Tunbridge Wells NHS Trust, Tunbridge Wells Hospital, Tonbridge Road, Pembury, Tunbridge Wells, Kent, TN2 4QJ, UK

Melissa Mather

Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine, 15-17 Tavistock Pl, London, WC1H 9SH, UK

Luisa M. Pettigrew

UCL Department of Primary Care and Population Health, UCL Medical School, Upper Third Floor, Rowland Hill Street, London, NW3 2PF, UK

Northern Devon Healthcare NHS Trust, North Devon District Hospital, Raleigh Heights, Barnstaple, EX31 4JB, UK

Stefan Navaratnam


Contributions

MM co-designed the project title and methods, conducted the searches, screening, selection, quality appraisal, data extraction, data analysis and synthesis, and drafted the report. LP co-designed the project title and methods and revised the draft. SN conducted screening, selection, quality appraisal, data extraction and data analysis. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Melissa Mather .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

PRISMA checklist.

Additional file 2.

Search strategy. Search concepts, keywords and MeSH terms used to derive search strategies.

Additional file 3.

Quality appraisal. Adapted scoring system for the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Systematic Reviews and Research Syntheses; quality of empirical studies (appraisal instruments and quality scores); quality appraisal criteria.

Additional file 4.

Additional data. Characteristics of included reviews.

Additional file 5.

Evidence mapping. Mapping of emergent themes to the Theoretical Domains Framework (TDF). Evidence table.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Mather, M., Pettigrew, L.M. & Navaratnam, S. Barriers and facilitators to clinical behaviour change by primary care practitioners: a theory-informed systematic review of reviews using the Theoretical Domains Framework and Behaviour Change Wheel. Syst Rev 11, 180 (2022). https://doi.org/10.1186/s13643-022-02030-2

Download citation

Received : 13 April 2022

Accepted : 17 July 2022

Published : 30 August 2022

DOI : https://doi.org/10.1186/s13643-022-02030-2

Share this article


  • Primary care
  • Family medicine
  • General practice
  • Family doctor
  • Theoretical domains framework
  • Behaviour change
  • Quality improvement

Systematic Reviews

ISSN: 2046-4053



Cureus. 2022 Nov;14(11)

The Application of a Design-Based Research Framework for Simulation-Based Education

Beheshta Momand

1 Health Sciences, Ontario Tech University, Oshawa, CAN

Masuoda Hamidi

Olivia Sacuevo, Adam Dubrowski

In research, the adoption of a framework is essential. It enables researchers to operate with specified parameters and provides structure and assistance with research projects, programs, and technologies. The incorporation of a framework also facilitates the organizing and planning of our research efforts with respect to the breadth and depth of what we want to discover. Frameworks are equally important in research focused on simulation-based education. Simulation-based education is a form of experiential learning that provides participants with the opportunity to acquire or improve real-world-like knowledge and skills in a simulated environment. The Medical Research Council framework, historically developed to guide clinical research, has been proposed as a framework to guide simulation research as well. However, because simulation-based education is positioned at the intersection of clinical and educational sciences, certain questions cannot be addressed using a clinical research framework. Thus, in this paper, we introduce an alternative framework, derived from educational sciences, to be considered and possibly adapted into simulation research. The design-based research (DBR) framework consists of four stages centered on design, testing, evaluation, and reflection. This editorial asserts that the DBR is an excellent framework for projects and programs of research in simulation.

Introduction

In research, frameworks help investigators generalize the various aspects of an observed phenomenon by describing it and identifying its limits. A framework is defined as a structure that can hold or support the theory of a research study [ 1 ]. When researchers validate and challenge theoretical assumptions, they advance new knowledge by simplifying concepts and variables in accordance with the definitions presented. A common analogy used to define a framework is a blueprint given to a builder so that they can construct a home [ 1 ]; for researchers, the blueprint provides the parameters for constructing their research. Frameworks are therefore considered one of the most important aspects of the research process. Frameworks are also important in simulation-based education (SBE). SBE is a type of experiential learning in which participants are tasked with solving complex problems in a controlled environment through replicated “real-life scenarios” [ 2 ]. SBE can be used to generate awareness and to improve clinical skills and attitudes while protecting patients from unnecessary risk. In recent years, SBE has attracted considerable attention and is growing rapidly. Given this rise in popularity, it has become critical that health professionals have guidance on how to build effective simulation research programmes that generate evidence, and subsequently best-practice guidelines, to help educators optimize the benefits of SBE.

The Medical Research Council (MRC) framework has been proposed as a potential framework for simulation research [ 3 ]. Typically, the MRC framework is used to construct and evaluate complex clinical interventions. Although it has been suggested that simulation is a complex intervention regardless of whether it is delivered as a stand-alone experience or as an augmentation of other educational modalities [ 3 ], some research questions are difficult to address following this framework. This is primarily because SBE is positioned at the intersection of clinical and educational sciences.

MRC and design-based research (DBR) explained further

In the context of educational sciences, the MRC framework poses a few limitations. For instance, its evaluation phase precedes the implementation phase [ 3 ]. This may be problematic because the implementation phase allows researchers to understand the components required to produce the desired effects or outcomes in the real world, as opposed to the tightly controlled research environment. If the development and validation phases of an innovation in SBE take several years, discovering factors that limit implementation would inevitably send the researchers and developers back to the early stages of research. This type of setback is disadvantageous, and as a result the outcomes may not be as significant. Thus, in this paper, we introduce an alternative framework derived from educational sciences, the DBR framework, to be considered and possibly adapted for simulation research. The DBR framework is typically used by researchers to improve educational practices and learning through instructional tools that can be tested, evaluated, and reflected upon [ 4 ]. It contains four phases: design, test, evaluate, and reflect. Before these phases begin, it is essential to identify and explore information related to the learning problem; doing so gathers what is needed to execute the design phase. The design phase draws on existing research and theories to develop educational resources that address the theoretical and hands-on issues surrounding a learning problem [ 4 ]. Instructional tools can include focus groups, worksheets, or different forms of technology that aid the development of learning and understanding. During this phase, stakeholders including educators and simulation technologists can contribute to the design using a Delphi approach. This helps determine what considerations must be made when designing the intervention or programme to make it simpler for participants. The testing phase encompasses real-world exposure to the instructional tools through implementation, in order to assess their effectiveness and determine whether they are feasible enough to enhance learning [ 4 ]. This phase may differ from the researcher’s initial expectations and may involve continued revision. During the testing phase, stakeholders including learners, educators, and simulation technologists can evaluate the system using tools such as the System Usability Scale, and the content using the Michigan Standard Simulation Experience Scale. The evaluation phase determines how well the instructional tools produced successful results and what areas need improving [ 4 ]. It draws on evidence from the learners’ deployed experiences and is subject to scrutiny. During this phase, a focus group interview with stakeholders (e.g., learners, educators, and simulation technologists) would be conducted to determine which aspects of the intervention were successful and which require improvement. The final phase, reflection, involves a retrospective examination of the methodology used to determine whether the desired goals were met [ 4 ]. Here, Triple Loop Learning can be utilized during a focus group interview with stakeholders to comprehend and evaluate the established design principles and the process.
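To make the testing-phase instruments more concrete, the sketch below implements the standard scoring rule of the 10-item System Usability Scale mentioned above; the example responses are hypothetical, and the scoring formula (odd items contribute response − 1, even items 5 − response, scaled to 0–100) is the published SUS convention, not something prescribed by the DBR framework itself.

```python
def sus_score(responses):
    """Score a 10-item System Usability Scale questionnaire.

    `responses` are the ten Likert answers (1 = strongly disagree,
    5 = strongly agree) in questionnaire order. Odd-numbered items are
    positively worded, even-numbered items negatively worded; the summed
    contributions are scaled to the conventional 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten Likert responses in the range 1-5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,... sit at even indices
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical participant: endorses every positive item, rejects every negative one.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A score around 68 is commonly treated as the benchmark for average usability, which gives the testing phase a simple quantitative signal to track across revisions.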

Benefits of using DBR

When DBR is used, researchers and other stakeholders, such as educators and learners, are all involved at every stage of the process [ 4 ]. The objective is for them to interact with programmes, activities, or simulation technologies to improve the learning process and outcomes. This is exceptionally important in simulation research because learners can test their knowledge and learn new ways to solve complex problems. The DBR framework also acknowledges the importance of the evaluation phase, a phase common to both the DBR and the MRC frameworks. As the DBR testing phase progresses, data collection is carried out continuously, and new designs and revisions are implemented to determine which tools best improve learning [ 4 ]. Next, in the evaluation phase, the researchers analyze how learners adapt and use educational resources to enhance or hinder their learning, and how that information can be used to create a more effective educational tool and programme, one that promotes learning and understanding rather than impeding it. What matters to our community (i.e., simulation researchers and decision makers) is that the evaluation methodology used in this phase yields evidence that can inform policy and change in practice. For example, randomized controlled trials are designed to be the most robust test of effectiveness in the MRC framework, and although not typically used in research framed by the DBR, this design should be considered when adapting DBR to simulation research. In summary, there are a series of differences, but also similarities, between the MRC and the DBR framework. The main distinction lies in the DBR framework’s testing and reflecting phases. The testing phase combines the MRC modeling, exploratory trial (i.e., piloting), and implementation phases into a single phase that involves testing the simulation programme or intervention in realistic settings, and this is done very early in the research process. Incorporating all of this into one phase improves authenticity by testing programmes in real-world settings, engages stakeholders more deeply in such settings, and generates feedback that can be used to improve the technology early in the research and development process, before the formal evaluation and final implementation. The commonalities between the frameworks include the initial theory-informed design phases and formal evaluations, although here we would urge simulation researchers who wish to adapt the DBR to consider designs, such as randomized controlled trials, that yield the results most valued by the simulation community.
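As a toy illustration of the kind of effectiveness analysis the evaluation phase could employ, the sketch below runs a two-sided permutation test on the difference in means between two small arms of a trial. Everything here is invented for illustration: the scores, the group names, and the choice of a permutation test (chosen because it is assumption-light for small samples) are not prescribed by either the DBR or the MRC framework.

```python
import random
from statistics import mean

def permutation_test(control, treatment, n_perm=5000, seed=42):
    """Two-sided permutation test on the difference in group means."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    observed = mean(treatment) - mean(control)
    pooled = list(control) + list(treatment)
    n_treat = len(treatment)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-assign the pooled scores to arms at random
        diff = mean(pooled[:n_treat]) - mean(pooled[n_treat:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Invented post-training communication scores (0-100) for two small arms.
usual_training = [60, 62, 58, 61, 59, 63]
simulation_arm = [70, 72, 69, 71, 73, 68]
effect, p_value = permutation_test(usual_training, simulation_arm)
print(f"effect = {effect:.1f} points, p = {p_value:.4f}")
```

With a clear separation between arms, as in this fabricated data, the observed 10-point effect is rarely matched under random relabelling, so the p-value comes out small; a real evaluation would of course need pre-registration, adequate power, and appropriate ethics approval.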

How to implement the DBR framework within SBE

In this section, we provide a working example of how DBR can be used to improve nurses’ communication skills when they are conversing with elderly patients who take multiple medications. We highlight the utility of DBR for conducting a programme of research that focuses on the development, modeling, testing, and evaluation of a hypothetical simulation programme and technology to improve communication skills. In the design stage, the focus is on determining how the programme is constructed, specifically addressing the programme’s theory, content, and simulation modalities. In this phase, a focus group interview would be held with nurses to understand how to design the intervention effectively. Other techniques, such as design thinking and consensus-building methods (e.g., the Delphi method), can also be used to crowdsource opinions from various stakeholders, such as technologists, researchers, and end-point users. Next, during the testing phase, the programme would be implemented in a real-world setting, such as a long-term care home, and representatives of the programme’s target end-point users, in this case nurses, would be recruited to test the programme and technologies and provide feedback on acceptability, feasibility, and usability. During the evaluation, a cohort design and a small-scale randomized controlled trial involving nurses as participants would be used to determine the effectiveness of the programme and its supporting educational technologies. Finally, during the reflection phase, a review of the methodology and theories used would be conducted. In this step, researchers critically evaluate what was developed, how it was developed, and whether it was appropriate [ 3 ].

In research, frameworks are used to facilitate the execution of research projects, programmes, and technologies that aim to answer complex research questions. In SBE, frameworks are especially useful for creating effective simulation-based programmes that provide best practices and guidelines to health educators, which is becoming increasingly important as SBE grows in popularity. In the past, the MRC framework was proposed for use in SBE. However, this framework has limitations because simulation-based research frequently focuses on educational issues. The DBR framework, on the other hand, has its roots in educational science and offers several novel approaches to designing and implementing research programmes and technologies. One of the most significant contributions of DBR in an interdisciplinary field such as simulation-based education is its focus on early testing of the intervention or programme in an authentic environment. This allows a potential fit with the future implementation context to be assessed, and a coherent implementation strategy to be developed. Finally, DBR permits reflection on every aspect of the programme in order to inform future research.

This editorial was intended to provide a high-level overview of the DBR framework and compare it to the MRC framework. Intentionally, no concrete tools and protocols are provided, as these will be developed in the future by the simulation research community.

The content published in Cureus is the result of clinical experience and/or research by independent individuals or organizations. Cureus is not responsible for the scientific accuracy or reliability of data or conclusions published herein. All content published within Cureus is intended only for educational, research and reference purposes. Additionally, articles published within Cureus should not be deemed a suitable substitute for the advice of a qualified health care professional. Do not disregard or avoid professional medical advice due to content published within Cureus.

The authors have declared that no competing interests exist.


The proposed framework’s effectiveness is underscored by its ability to recover the constraints utilized in geometric deep learning (GDL), demonstrating its potential as a general-purpose framework for deep learning. GDL, which uses a group-theoretic perspective to describe neural layers, has shown promise across various applications by preserving symmetries. However, it encounters limitations when faced with complex data structures. The category theory-based approach overcomes these limitations and provides a structured methodology for implementing diverse neural network architectures.
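To illustrate the kind of symmetry constraint GDL is said to preserve, the sketch below checks, in plain Python, that a circular convolution commutes with cyclic shifts of its input (conv(shift(x)) equals shift(conv(x))). This is a generic textbook example of group equivariance, assumed for illustration; it is not taken from the paper the excerpt describes.

```python
# Equivariance check: a circular convolution commutes with cyclic shifts.

def cyclic_shift(x, s):
    """Act on a signal by the cyclic-group element 's' (shift right by s)."""
    n = len(x)
    return [x[(i - s) % n] for i in range(n)]

def circular_conv(x, w):
    """A linear layer constrained to commute with cyclic shifts."""
    n = len(x)
    return [sum(w[k] * x[(i - k) % n] for k in range(len(w))) for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
w = [0.5, -1.0, 0.25]
lhs = circular_conv(cyclic_shift(x, 1), w)   # shift first, then convolve
rhs = cyclic_shift(circular_conv(x, w), 1)   # convolve first, then shift
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
print("circular convolution is shift-equivariant")
```

A category-theoretic treatment generalizes this picture: instead of only group actions, constraints on layers are expressed through structured maps between objects, which is what lets such a framework recover GDL-style constraints as a special case.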

At the centre of this research is the application of category theory to understanding and creating neural network architectures. This approach enables the creation of neural networks that are more closely aligned with the structures of the data they process, enhancing both the efficiency and effectiveness of these models. The research highlights the universality and flexibility of category theory as a tool for neural network design, offering new insights into the integration of constraints and operations within neural network models.

In conclusion, this research introduces a groundbreaking framework based on category theory for designing neural network architectures. By bridging the gap between the specification of constraints and their implementations, the framework offers a comprehensive approach to neural network design. The application of category theory not only recovers and extends the constraints used in frameworks like GDL but also opens up new avenues for developing sophisticated neural network architectures. 


Sana Hassan

Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.






This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

This work focused on the development of new software, MoSDeF-GOMC, a Python interface that connects the Monte Carlo software GOMC to the Molecular Simulation Design Framework (MoSDeF) ecosystem. The intent of the effort was to significantly lower barriers to entry for new users of GOMC, while supporting the automation of large numbers of calculations and improving the reproducibility of simulations. These goals were all attained during the project.

MoSDeF-GOMC automates the generation of initial coordinates, the assignment of force field parameters, and the writing of coordinate (PDB), connectivity (PSF), force field parameter, and simulation control files. All required simulation parameters are encoded within the workflow, ensuring the results can be reproduced by anyone simply by rerunning the same workflow. User-developed workflows can be distributed publicly via GitHub or equivalent systems. When used with a workflow manager such as signac, MoSDeF-GOMC allows users to create complex HPC workflows that include simulation setup, simulation execution, and subsequent data analysis for thousands of calculations. The software provides an abstraction layer between users and the native simulation input files, significantly lowering barriers to the use of computer simulation by novices.
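The reproducibility idea here, encoding every simulation parameter in the workflow script itself, can be illustrated with a short generic sketch. This is not the MoSDeF-GOMC API; the function name, parameter keys, and file layout below are hypothetical, and a real GOMC control file has a richer syntax:

```python
# Hypothetical sketch: every parameter of a run lives in one dict, and the
# control file is generated from it, so rerunning the script reproduces the run.
# This is NOT the actual MoSDeF-GOMC API; names here are illustrative only.

def write_control_file(path, params):
    """Write a minimal GOMC-style key/value control file from a parameter dict."""
    lines = [f"{key:<20} {value}" for key, value in sorted(params.items())]
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")
    return path

params = {
    "Temperature": 300.0,        # K
    "RunSteps": 1000000,
    "Ensemble": "NVT",
    "Coordinates": "system.pdb",  # coordinate file
    "Structure": "system.psf",    # connectivity file
}

write_control_file("in.conf", params)
```

Sharing the script (e.g., on GitHub) then shares the complete simulation specification, which is the reproducibility property the paragraph above describes.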

We also created py-MCMD, an open-source Python package that enables users to combine a Monte Carlo simulation engine (GOMC) with a molecular dynamics engine (NAMD) to perform hybrid Monte Carlo/molecular dynamics (MC/MD) simulations. Our implementation allows users to perform MC/MD simulations in a wide range of ensembles, including the Gibbs ensemble Monte Carlo method. The approach used in this work achieved a computational efficiency between 2 and 100 times that of standard Monte Carlo or molecular dynamics simulations.
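The hybrid MC/MD idea can be shown with a toy example: alternating short velocity-Verlet dynamics segments with Metropolis displacement moves for a single particle in a harmonic well. This is a pedagogical sketch under simplified assumptions (one particle, no thermostat), not the actual GOMC/NAMD coupling that py-MCMD implements:

```python
# Toy hybrid MC/MD: alternate short MD segments with Metropolis MC moves.
# Single particle in a harmonic well; illustrative only, not py-MCMD itself.
import math
import random

random.seed(0)
K, KT, DT = 1.0, 1.0, 0.05   # spring constant, temperature, MD timestep

def energy(x):
    return 0.5 * K * x * x

def md_segment(x, v, n_steps):
    """Velocity-Verlet integration for n_steps."""
    for _ in range(n_steps):
        v += 0.5 * DT * (-K * x)   # half-kick with old force
        x += DT * v                # drift
        v += 0.5 * DT * (-K * x)   # half-kick with new force
    return x, v

def mc_move(x, max_disp=0.5):
    """One Metropolis displacement move at temperature KT."""
    trial = x + random.uniform(-max_disp, max_disp)
    d_e = energy(trial) - energy(x)
    if d_e <= 0 or random.random() < math.exp(-d_e / KT):
        return trial, True
    return x, False

x, v, accepted = 1.0, 0.0, 0
for cycle in range(2000):
    x, v = md_segment(x, v, 10)   # MD phase
    x, acc = mc_move(x)           # MC phase
    accepted += acc

print(f"acceptance ratio: {accepted / 2000:.2f}")
```

In a real MC/MD hybrid the MC phase would use collective moves (e.g., Gibbs ensemble particle swaps) that MD alone samples poorly, which is where the reported efficiency gains come from.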

Another software package, MoSDeF-dihedral-fit, was created: an open-source, transparent, and lightweight Python package capable of fitting dihedral potentials for molecular mechanics force fields. The software can optimize dihedral parameters with respect to ab initio data for any Lennard-Jones plus fixed-point-charge force field, and convert them to a variety of functional forms (e.g., periodic, Ryckaert-Bellemans) in appropriate units for GOMC or other simulation engines that use CHARMM-style parameter files.
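The kind of fit such a tool performs can be sketched in a few lines: with phase shifts fixed at zero, a sum of periodic terms V(phi) = sum_n k_n (1 + cos(n*phi)) is linear in the coefficients k_n, so a linear least-squares fit recovers them from an energy scan. The code below is a self-contained illustration of that idea, not the MoSDeF-dihedral-fit API:

```python
# Illustrative linear least-squares fit of periodic dihedral coefficients
# V(phi) = sum_n k_n (1 + cos(n*phi)) to a (synthetic) torsional energy scan.
# Not the MoSDeF-dihedral-fit API; real fits also handle phase shifts and units.
import math

def fit_dihedral(phis, energies, n_terms=3):
    """Solve the normal equations A^T A k = A^T E by Gaussian elimination."""
    basis = [[1.0 + math.cos((n + 1) * p) for n in range(n_terms)] for p in phis]
    ata = [[sum(row[i] * row[j] for row in basis) for j in range(n_terms)]
           for i in range(n_terms)]
    ate = [sum(row[i] * e for row, e in zip(basis, energies))
           for i in range(n_terms)]
    # Forward elimination with partial pivoting.
    for col in range(n_terms):
        pivot = max(range(col, n_terms), key=lambda r: abs(ata[r][col]))
        ata[col], ata[pivot] = ata[pivot], ata[col]
        ate[col], ate[pivot] = ate[pivot], ate[col]
        for row in range(col + 1, n_terms):
            factor = ata[row][col] / ata[col][col]
            ate[row] -= factor * ate[col]
            for j in range(col, n_terms):
                ata[row][j] -= factor * ata[col][j]
    # Back substitution.
    k = [0.0] * n_terms
    for row in range(n_terms - 1, -1, -1):
        k[row] = (ate[row] - sum(ata[row][j] * k[j]
                                 for j in range(row + 1, n_terms))) / ata[row][row]
    return k

# Synthetic "ab initio" scan generated from known coefficients.
true_k = [1.2, 0.4, 0.8]
phis = [2 * math.pi * i / 36 for i in range(36)]
scan = [sum(kn * (1 + math.cos((n + 1) * p)) for n, kn in enumerate(true_k))
        for p in phis]

fitted = fit_dihedral(phis, scan)
print([round(kn, 3) for kn in fitted])  # recovers ~[1.2, 0.4, 0.8]
```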

This project produced three peer-reviewed publications and six conference presentations. It contributed to the training and professional development of one graduate student, one postdoctoral researcher, and three undergraduate researchers. The software developed in this work was used in a graduate thermodynamics course and in hands-on workshops.

Last Modified: 02/01/2024 Modified by: Jeffrey J Potoff



PATAC Automates Digital Engine Research Framework to Improve Efficiency

The team aims to develop "one-click" 3D engine models.

“We discovered that Optimization Toolbox greatly reduces the time and effort of our engineers to develop the framework, using algorithms that are readily available.… Using this MATLAB toolbox has greatly improved our efficiency.”

Key Outcomes

  • MATLAB tools enabled the creation of a complete framework for the digital development of engines
  • Integration of intelligent optimization algorithms improved decision-making efficiency in the engine development process
  • Easy-to-use interfaces in the research and development app allowed for seamless updates and potential integration into the development process of other products
  • Modeling, architecture, and visualization with MATLAB helped transform traditional engine team development capabilities

As electric and hybrid vehicles become more common, the automotive industry demands rapid design updates, which can be difficult for human engineers to complete alone. To solve this problem, the Pan Asia Technical Automotive Center (PATAC) has developed a complete framework—including a research app—to automate the process for digital engine development.

Using a combination of MATLAB ® tools, including Requirements Toolbox™, System Composer™, and App Designer, the team was able to develop a unified environment to capture the engine architecture design process from start to finish. They began by modeling knowledge from engineers using logic functions in MATLAB. Engineers working on engine development then used these trained functions with Optimization Toolbox™ to support optimal design decisions.

This intelligent framework allows engineers and designers to input requirements, such as cost or fuel consumption, and easily output specific design solutions. In the future, the team imagines a seamless “one-click” interface to output 3D model solutions for rapidly changing engine designs.
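The requirements-in, design-out workflow described above can be sketched as a weighted selection over candidate designs. The candidate data, attribute names, and weights below are invented for illustration; PATAC's actual framework is built on MATLAB tooling (Optimization Toolbox, System Composer), not this code:

```python
# Hypothetical sketch of requirements-driven design selection: weight the
# requirements (cost, fuel consumption), normalise each attribute, and pick
# the candidate design that minimises the weighted objective.
# All candidate data and names are invented for illustration.

candidates = [
    {"name": "design_a", "cost": 1000, "fuel_consumption": 6.2},
    {"name": "design_b", "cost": 1400, "fuel_consumption": 5.1},
    {"name": "design_c", "cost": 1800, "fuel_consumption": 4.5},
]

def select_design(candidates, w_cost, w_fuel):
    """Return the design minimising a weighted, normalised objective."""
    max_cost = max(c["cost"] for c in candidates)
    max_fuel = max(c["fuel_consumption"] for c in candidates)

    def objective(c):
        return (w_cost * c["cost"] / max_cost
                + w_fuel * c["fuel_consumption"] / max_fuel)

    return min(candidates, key=objective)

# A fuel-focused requirement favours the efficient (but costlier) design.
best = select_design(candidates, w_cost=0.2, w_fuel=0.8)
print(best["name"])  # design_c
```

A production framework would replace the grid of candidates with a parametric model and a proper optimizer, but the requirements-to-solution mapping is the same shape.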

Products Used

  • Optimization Toolbox
  • Requirements Toolbox
  • System Composer


COMMENTS

  1. What Is a Research Design

    Step 2: Choose a type of research design. Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research. Types of quantitative research designs. Quantitative designs can be split into four main types.

  2. Research Design

    The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection ...

  3. Research design: the methodology for interdisciplinary research framework

The first kind, "Research into design," studies the design product post hoc, and the MIR framework suits the interdisciplinary study of such a product. In contrast, "Research for design" generates knowledge that feeds into the noun and the verb 'design', which means it precedes the design(ing).

  4. Research Design

    Step 2: Choose a type of research design. Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research. Types of quantitative research designs. Quantitative designs can be split into four main types.

  5. What Is Research Design? 8 Types + Examples

Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data. Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs. Research designs for qualitative studies include phenomenological ...

  6. A Design Research Framework

    A Design Research Framework. Oct 25, 2022. Written By Erika Hall. Design Research Process Model (PDF) |. Alternative Style for Color Perception/B&W Printing (PDF) Recent discussions have been swirling around the phrase " democratization of research " concerning who should participate in what kind of research in design (product/service ...

  7. Research Design: What it is, Elements & Types

Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success. Creating a research topic explains the type of research (experimental, survey research, correlational ...

  8. Study designs: Part 1

    Research study design is a framework, or the set of methods and procedures used to collect and analyze data on variables specified in a particular research problem. Research study designs are of many types, each with its advantages and limitations. The type of study design used to answer a particular research question is determined by the ...

  9. Full article: Design-based research: What it is and why it matters to

    Although design research typically applies to the gathering of information that feeds into a creative process of design, design itself creates knowledge. ... M. J. (2016). A design framework for enhancing engagement in student-centered learning: Own it, learn it, and share it. Educational Technology Research and Development, 64(4), 707-734 ...

  10. A Method Framework for Design Science Research

    The proposed method framework for design science research includes five main activities that range from problem investigation and requirements definition, through artefact design and development, to demonstration and evaluation. A design science project needs to make use of rigorous research methods. Any research strategy or method can be used ...

  11. (PDF) Basics of Research Design: A Guide to selecting appropriate

    The selection of an appropriate research design is guided by a careful analysis of the research problem, questions, theoretical framework, and relevant literature (Asenahabi, 2019). Furthermore ...

  12. What Is a Conceptual Framework?

    Developing a conceptual framework in research. Step 1: Choose your research question. Step 2: Select your independent and dependent variables. Step 3: Visualize your cause-and-effect relationship. Step 4: Identify other influencing variables. Frequently asked questions about conceptual models.

  13. What is a research framework and why do we need one?

    A research framework provides an underlying structure or model to support our collective research efforts. Up until now, we've referenced, referred to and occasionally approached research as more of an amalgamated set of activities. But as we know, research comes in many different shapes and sizes, is variable in scope, and can be used to ...

  14. Basic Research Design

    What is Research Design? Definition of Research Design: A procedure for generating answers to questions, crucial in determining the reliability and relevance of research outcomes. ... Uses theories to provide a framework for understanding the social context and meanings. The focus is on constitutive relationships rather than causal ones.

  15. (PDF) What Is Quality in Research? Building a Framework of Design

    The framework can be also useful to design new exercises or procedures of research evaluation based on a multidimensional view of quality. Attributes associated to Research Design (D). Attributes ...

  16. Research Design and Methodology

    2. Research design. The research design is intended to provide an appropriate framework for a study. A very significant decision in research design process is the choice to be made regarding research approach since it determines how relevant information for a study will be obtained; however, the research design process involves many interrelated decisions [].

  17. Design Science Research Frameworks

    A mental model for the conduct and presentation of DS research will help researchers to conduct it effectively. The DS process includes six steps: problem identification and motivation; definition of the objectives for a solution, design, and development; demonstration; evaluation; and communication. Activity 1.

  18. Literature Reviews, Theoretical Frameworks, and Conceptual Frameworks

    Other studies have presented a research logic model or flowchart of the research design as a conceptual framework. These constructions can be quite valuable in helping readers understand the data-collection and analysis process. However, a model depicting the study design does not serve the same role as a conceptual framework.

  19. (PDF) Research Design

research design is the conceptual framework/structure within which the research shall be conducted. It is a kind of 'blueprint' to proceed in a clear direction smoothly.

  20. What Is Research Design? Features, Components

    A research design is created or developed when the researcher prepares a plan, structure and strategy for conducting research. Research design is the base over which a researcher builds his research. A good research design provides vital information to a researcher with respect to a research topic, data type, data sources and techniques of data ...


  22. Barriers and facilitators to clinical behaviour change by primary care

    Future research should apply theory-based frameworks and appropriate design methods to explore these factors. With no 'one size fits all' intervention, these findings provide general, transferable insights into how to approach changing clinical behaviour by PCPs, based on their own views on the barriers and facilitators to behaviour change.

  23. Development of a competency framework for advanced practice nurses: A

    This study consisted of two consecutive stages (November 2020-December 2021): (1) developing a competency framework for advanced practice nurses in Belgium by the research team, based on literature and (2) group discussions or interviews with and written feedback from key stakeholders. 11 group discussions and seven individual interviews were ...

  24. Designing the digitalized guest experience: A comprehensive framework

    Surma‐Aho, A., Hölttä‐Otto, K. (2022). Conceptualization and operationalization of empathy in design research. Design Studies, 78, 101075. Thompson, C. J. (1997). Interpreting consumers: A hermeneutical framework for deriving marketing insights from the text of consumers' consumption stories. Journal of Marketing Research, 34(4), 438-455.

  25. The Application of a Design-Based Research Framework for Simulation

    The design-based research (DBR) framework consists of four stages centered on design, testing, evaluation, and reflection. This editorial asserts that the DBR is an excellent framework for projects and programs of research in simulation. Keywords: medical research council framework, design based research, educational research, clinical ...

  26. The GenAI Compass: a UX framework to design generative AI experiences

    It's the difference between calling an AI-powered design tool "Magic Design Assistant" versus a sterile "Design Tool 5.0". The former, much like Canva's Magic Studio, promises an experience; the latter, merely a function. We are starting to see the proliferation of "assistants" and "co-pilots" as common terminology to AI ...

  27. (Pdf) the Research Design

    If your design is poor, the results of the research also will not be promising.[ 2 ] Research design is defined as a framework of methods and techniques chosen by a researcher to combine various ...

  28. Unifying Neural Network Design with Category Theory: A Comprehensive

    In deep learning, a unifying framework to design neural network architectures has been a challenge and a focal point of recent research. Earlier models have been described by the constraints they must satisfy or the sequence of operations they perform. This dual approach, while useful, has lacked a cohesive framework to integrate both perspectives seamlessly. The researchers tackle the core ...

  29. NSF Award Search: Award # 1835713

    In this project, nine research groups from eight universities are combining their expertise to create a software environment, called the Molecular Simulation Design Framework (MoSDeF) that will enable the automation of molecular-based computer simulations of soft materials (such as fluids, polymers, and biological systems) and will enable MGI ...

  30. PATAC Develops Automated Engine Design for Electric Vehicles

    This intelligent framework allows engineers and designers to input requirements, such as cost or fuel consumption, and easily output specific design solutions. In the future, the team imagines a seamless "one-click" interface to output 3D model solutions for rapidly changing engine designs.